diff --git a/MCM/2009/2009ICM/2009ICM.md b/MCM/2009/2009ICM/2009ICM.md
new file mode 100644
index 0000000000000000000000000000000000000000..0c0af241483c00c9219100703fae7c8493d5a9f3
--- /dev/null
+++ b/MCM/2009/2009ICM/2009ICM.md
@@ -0,0 +1,1954 @@
# The UMAP Journal

# Vol. 30, No. 2

# Publisher

COMAP, Inc.

# Executive Publisher

Solomon A. Garfunkel

# ILAP Editor

Chris Arney
Associate Director, Mathematics Division
Program Manager, Cooperative Systems
Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709-2211
david.arney1@arl.army.mil

# On Jargon Editor

Yves Nievergelt
Dept. of Mathematics
Eastern Washington Univ.
Cheney, WA 99004
ynievergelt@ewu.edu

# Reviews Editor

James M. Cargal
Mathematics Dept.
Troy University—Montgomery Campus
231 Montgomery St.
Montgomery, AL 36104
jmcargal@sprintmail.com

# Chief Operating Officer

Laurie W. Aragón

# Production Manager

George W. Ward

# Production Editor

Joyce Barnes

# Distribution

John Tomicek

# Graphic Designer

Daiva Chauhan

# Editor

Paul J. Campbell
Beloit College
700 College St.
Beloit, WI 53511-5595
campbell@beloit.edu

# Associate Editors

- Don Adolphson (Brigham Young Univ.)
- Chris Arney (Army Research Office)
- Aaron Archer (AT&T Shannon Res. Lab.)
- Ron Barnes (U. of Houston—Downtown)
- Arthur Benjamin (Harvey Mudd College)
- Robert Bosch (Oberlin College)
- James M. Cargal (Troy U.—Montgomery)
- Murray K. Clayton (U. of Wisc.—Madison)
- Lisette De Pillis (Harvey Mudd College)
- James P. Fink (Gettysburg College)
- Solomon A. Garfunkel (COMAP, Inc.)
- William B. Gearhart (Calif. State U., Fullerton)
- William C. Giauque (Brigham Young Univ.)
- Richard Haberman (Southern Methodist U.)
- Jon Jacobsen (Harvey Mudd College)
- Walter Meyer (Adelphi University)
- Yves Nievergelt (Eastern Washington U.)
- Michael O'Leary (Towson University)
- Catherine A. Roberts (College of the Holy Cross)
- John S. Robertson (Georgia Military College)
- Philip D. Straffin (Beloit College)
- J.T. Sutcliffe (St. Mark's School, Dallas)

# Vol. 30, No. 2, 2009

# Table of Contents

# Guest Editorial

Discrete Math First!
Chris Arney 93

# Special Section on the ICM

Results of the 2009 Interdisciplinary Contest in Modeling
Chris Arney 99

Rebalancing Human-Influenced Ecosystems
YuanSi Zhang, ShuoPeng Wang, and Ning Cui 121

Striving for Balance: Why Reintroducing More Species to Fish Farm Ecosystem Yields Bigger Profits
Sean Clement, Timothy Newlin, and Joseph Lucas 141

Authors' Commentary: The Outstanding Coral Reef Papers
Melissa Garren and Joseph Myers 159

Judges' Commentary: The Outstanding Coral Reef Papers
Sheila Miller, Melissa Garren, and Rodney Sturdivant 163

# On Jargon

Ptolemy to Fourier: Epicycles
Fawaz Hjouj 169

# Reviews 173

# Guest Editorial: Discrete Math First!

Chris Arney
Division Chief, Mathematical Sciences
Division Chief, Network Sciences
Program Manager, Cooperative Systems
U.S. Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709-2211
david.arney1@us.army.mil

# Introduction

Recently, this Journal published several intriguing editorials on calculus and modeling. James Cargal established the bounds of modeling as a science and the limitations of mathematical problem solving [2007]. I certainly agree with his points on the art and science of modeling. Paul Campbell [2006] and Underwood Dudley [2008] debated the viability of the calculus course. I guess I am in an agreeable mood, since I also support Campbell's statement that we really do need to change the way we teach calculus.

I agree so much with Dudley's points made in rebuttal that I will be foolish (his word) and advocate for discrete mathematics as the standard first-year college mathematics course (vs.
a crappy or even a superb calculus course).

First, let me be clear: I love calculus (both as a liberal art and as a professional tool). It is wonderful mathematics that can enrich and empower one's life. I also agree with Dudley that even though some students do not fully understand the concepts and theories of calculus, it should still be taught—and we should continue to reform, refine, improve, and enhance its teaching. And I strongly agree with him that mathematics is good for students because it can and does develop thinking and problem-solving skills. I believe we (mathematics educators) should be pleased by what we are doing and confident that we are having a positive impact on students in calculus, in modeling, and in other math courses as well.

The UMAP Journal 30 (2) (2009) 93-97. ©Copyright 2009 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

Here is where I go a step further than Dudley: I strongly believe that we would have even greater success if we taught discrete modeling as the standard course for our first-year students. Such a course should include copious modeling and modern interdisciplinary problem solving. If we do this well and prepare and motivate our students to take the calculus, that course can be more like the one that Campbell advocates—full of rich ideas and fuller applications, resulting in improved student understanding of mathematics.

# Background and Perspective

Before I make my case, let me give a little background and perspective.
Mathematics, as a tool for humans, has always been useful. Benjamin Franklin highlighted this in his 1735 essay "Of the usefulness of mathematics" [1735]:

There has not been any science so much esteemed and honored as this of mathematics, nor with so much industry and vigilance become the care of great men, and labored in by the potentates of the world, viz. emperors, kings, princes, etc.

Students who learn mathematics can understand, interpret, and predict the behavior of real-life phenomena, and then share what they learn with people all over the world. Likewise, mathematics as a liberal art has always developed thinking skills. Mathematics requires one to think abstractly, conceptually, and systematically. Alfred North Whitehead in his Preface to Universal Algebra wrote, "The whole of mathematics consists in the organization of a series of aids to the imagination in the process of reasoning" [1898, as quoted in Moritz 1914, 6].

In today's world, many people face thinking, reasoning, and quantitative challenges. Today's college-educated managers and professionals are required to process data and synthesize information, use and understand information technology, optimize elaborate plans, confront complexity, think through difficult challenges, and leverage new technologies. To meet these challenges and to ensure that our future citizens anticipate and respond effectively to the uncertainties of a changing world, college core mathematics programs need to develop students as creative, confident, competent problem-solvers and clear, critical thinkers.

The essential components of modern undergraduate mathematics are

- modeling (forming and analyzing problems, using technical tools, and implementing solutions) and
- inquiry (formulating questions, moving toward answers and more questions, generalizing, seeking understanding, connecting topics and ideas).
College graduates need to use technological tools to solve problems from every facet of life (physical sciences, life sciences, social sciences, behavioral sciences, political science, technology, and humanities). Our students need to study mathematics because of its importance in the everyday world and to develop their way of thinking. Undergraduate mathematics must challenge the mind to dream, to hope, to believe, and then provide the skills and the tools needed to achieve those dreams.

# Beyond Just Mathematics

Also needed are interdisciplinary experiences that give students the opportunity to connect their mathematics to real problems involving aspects of many disciplines. I believe what Descartes wrote:

Hence we must believe that all the sciences are so interconnected, that it is much easier to study them all together than to isolate one from all the others. If, therefore, anyone wishes to search out the truth of things in serious earnest, he ought not select one special science, for all the sciences are conjoined with each other and interdependent.

—Descartes [1629]

It is imperative that our nation's colleges design and implement courses that integrate important topics, connect to other disciplines, and develop skills in using technology and solving problems. The curriculum needs to be tied together by student-growth threads: the attitudes and skills that develop students as life-long learners who are able to formulate questions, research answers, reach logical conclusions, and make informed decisions.

I believe that discrete modeling is best suited to prepare students for success in the future era of the information age, where new concepts like complexity, network science, and information science will be prevalent. Such a course is most appropriate in scope and complexity to give students an awareness of the discipline of mathematics.
The basic concept in discrete dynamical modeling is that the future is predicted by understanding the present and adding to it the hypothesized change over the interval of interest. Discrete dynamical models (difference equations) are solvable numerically by iteration, so students are not restricted by solution techniques but are free to think, model, and analyze problems. The prerequisite mathematics needed to learn and perform elementary discrete dynamical modeling is algebra. Therefore, this topic is accessible to first-year college students without an investment in learning the more-sophisticated calculus concepts needed to study continuous dynamics (differential equations). Many discrete mathematics topics, especially the modeling, reasoning, and computing, that are traditionally covered in higher-level courses are accessible to freshmen taking an introductory discrete dynamical modeling course. Through a first-year discrete modeling course, the foundations of our new sciences are available to all students at the core level.

A valuable set of goals for a core mathematics course might include:

- students acquiring fundamental knowledge for future application;
- students developing sound, logical thought processes relevant to future science; and
- students learning how to solve problems.

By achieving these goals, successful students could formulate intelligent questions, reason and research solutions using scientific principles, and be confident and independent in their future work.

A discrete modeling course can accomplish these goals via study of

- linear and nonlinear difference equations;
- systems of equations, along with the matrix-algebra concepts of eigenvalues and eigenvectors;
- analytic, numeric, and graphic solution methods and analysis;
- conjecturing;
- long-term behavior through determination of equilibria and stability;
- proportionality modeling; and
- applied problem solving.
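The iteration and equilibrium ideas behind such a course can be shown in a few lines of Python. This is a generic sketch of a logistic difference equation with made-up parameters, not an example drawn from any of the texts mentioned here:

```python
# A minimal sketch of the discrete dynamical modeling idea described above:
# future value = present value + hypothesized change over the interval.
# The change term here is illustrative logistic growth, with assumed parameters
# r (growth rate) and K (carrying capacity), not data from any course or text.

def iterate(a0, r, K, steps):
    """Numerically iterate the difference equation a(n+1) = a(n) + r*a(n)*(1 - a(n)/K)."""
    trajectory = [a0]
    for _ in range(steps):
        a = trajectory[-1]
        trajectory.append(a + r * a * (1 - a / K))
    return trajectory

# Long-term behavior: for 0 < r < 2 the equilibrium a* = K is stable, so the
# iteration settles at K from any positive starting value.
trajectory = iterate(a0=10.0, r=0.5, K=100.0, steps=200)
print(round(trajectory[-1], 6))  # → 100.0
```

Determining the equilibrium (where the change term vanishes) and checking its stability requires only algebra, which is exactly the accessibility point made above.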
Throughout such a course, major mathematical themes can be studied, including functions, limits, dynamics, accumulation, vectors, and modeling. The COMAP-sponsored, team-written textbook *Principles and Practice of Mathematics* [Meyer 1997] (of which I was a co-author) presents these ideas; other books also cover this subject at a first-year level.

Mathematics is like life. Both are rewarding and challenging; both offer great gifts, inspire great dreams, and hold great promise. I believe that discrete modeling is the best course to deliver that promise to our first-year students.

# References

Campbell, Paul J. 2006. Calculus is crap. *The UMAP Journal* 27 (1) (2006): 415-430.

Cargal, James M. 2007. The art of modeling. *The UMAP Journal* 28 (1) (2007): 1-4.

Descartes, René. 1629. Regulae ad directionem ingenii [Rules for the direction of the mind]. In *The Philosophical Works of Descartes*, vol. 1, trans. Elizabeth S. Haldane and G.R.T. Ross, 2. Cambridge, UK: Cambridge University Press, 1911.

Dudley, Underwood. 2008. Calculus isn't crap. *The UMAP Journal* 29 (1) (2008): 1-4.

Franklin, Benjamin. 1735. Of the usefulness of mathematics. *The Pennsylvania Gazette* 2 (30 October 1735). Quoted in Moritz [1914, 44]. Omitted "for lack of evidence of Franklin's authorship" from *The Papers of Benjamin Franklin*, vol. 2, edited by Leonard W. Labaree, Whitfield J. Bell, Jr., Helen C. Boatfield, and Helene H. Fineman, 126-127. New Haven, CT: Yale University Press, 1960.

Meyer, Walter (ed.). 1997. *Principles and Practice of Mathematics*. New York: Springer-Verlag.

Moritz, Robert Edouard. 1914. *Memorabilia Mathematica; or, The Philomath's Quotation-Book*. New York: Macmillan. Reprinted 1942, Mathematical Association of America. Reprinted 1958 under the title *On Mathematics and Mathematicians*, New York: Dover. Reprinted 1993, Washington, DC: Mathematical Association of America.

# About the Author

Chris Arney graduated from West Point and became an intelligence officer.
His studies resumed at Rensselaer Polytechnic Institute with an M.S. (computer science) and a Ph.D. (mathematics). He spent most of his military career as a mathematics professor at West Point, before becoming Dean of the School of Mathematics and Sciences and Interim Vice President for Academic Affairs at the College of Saint Rose in Albany, NY. Chris has authored 20 books, written more than 100 technical articles, and given more than 200 presentations and 30 faculty development workshops.

![](images/9bee99db1afb4e14d90a60f4ac9e2f06040fc40c79b1ae7fbf4800b92b72ac26.jpg)

His technical interests include mathematical modeling, cooperative systems, and the history of mathematics and science; his teaching interests include using technology and interdisciplinary problems to improve undergraduate teaching and curricula; his hobbies include reading and mowing his lawn. Chris is Director of the Mathematical Sciences Division of the Army Research Office, where he researches cooperative systems, particularly in information networks, pursuit-evasion modeling, and robotics. He is co-director of COMAP's Interdisciplinary Contest in Modeling (ICM)® and the editor for the Journal's ILAP (Interdisciplinary Lively Applications Project) Modules. In August 2009, he will rejoin the faculty at West Point, where his daughter Kristin also teaches.

# Modeling Forum

# Results of the 2009 Interdisciplinary Contest in Modeling

Chris Arney, ICM Co-Director
Division Chief, Mathematical Sciences Division
Program Manager, Cooperative Systems
Army Research Office
P.O. Box 12211
Research Triangle Park, NC 27709-2211
David.Arney1@arl.army.mil

# Introduction

A total of 374 teams from four countries spent a weekend in February working in the 11th Interdisciplinary Contest in Modeling (ICM)®. This year's contest began on Thursday, Feb. 5 and ended on Monday, Feb. 9, 2009.
During that time, teams of up to three undergraduate or high school students researched, modeled, analyzed, solved, wrote, and submitted their solutions to an open-ended interdisciplinary modeling problem involving marine ecology. After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. Two of the top papers, which were judged to be Outstanding by the expert panel of judges, appear in this issue of The UMAP Journal.

COMAP's Interdisciplinary Contest in Modeling (ICM), along with its sibling contest, the Mathematical Contest in Modeling (MCM)®, involves students working in teams to model and analyze an open problem. Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. It serves society by developing students as problem solvers so that they become better informed and prepared as citizens, contributors, consumers, workers, and community leaders. The ICM and MCM are examples of COMAP's efforts in working towards its goals.

This year's environmental sciences problem was challenging in its demand for teams to utilize many aspects of science, mathematics, and analysis in their modeling. The problem required teams to understand the complexity of marine ecology and aquaculture systems and to model that complexity so as to reverse current environmental destruction while retaining the financial prosperity of the aquaculture. To accomplish their tasks, the teams had to consider many difficult and complex issues. The problem also included the ICM's requirements of thorough analysis, research, creativity, and effective communication. The author of the problem was marine biology researcher Melissa Garren of the Scripps Institution of Oceanography.

All members of the 374 competing teams are to be congratulated for their excellent work and dedication to modeling and problem solving.
The judges remarked that this year's problem was challenging and demanding in many aspects of modeling and problem solving.

Next year, we will continue the environmental science theme for the contest problem. Teams preparing for the 2010 contest should consider reviewing interdisciplinary topics in the area of environmental issues.

# Creating Food Systems: Rebalancing Human-Influenced Ecosystems

# Background

Less than $1\%$ of the ocean floor is covered by coral, yet these areas support $25\%$ of the ocean's biodiversity. Thus, conservationists are concerned when coral disappears, since the biodiversity of the region disappears shortly thereafter.

Consider an area in the Philippines located in a narrow channel between Luzon Island and Santiago Island in Bolinao, Pangasinan, that used to be filled with coral reef and supported a wide range of species (Figure 1). The once-plentiful biodiversity of the area has been dramatically reduced since the introduction of commercial milkfish (Chanos chanos) farming in the mid-1990s. The bottom is now mostly mud, the once-living corals are long since buried, and few wild fish remain, due to overfishing and loss of habitat.

While it is important to provide enough food for the human inhabitants of the area, it is equally important to find innovative ways of doing so that allow the natural ecosystem to continue thriving; that is, establishing a desirable polyculture system that could replace the current milkfish monoculture. The ultimate goal is to develop a set of aquaculture practices that would not only support the human inhabitants financially and nutritionally, but simultaneously improve the local water quality to a point where reef-building corals could recolonize the ocean floor and co-exist with the farms.

A desirable polyculture is a scenario in which multiple economically valuable
For example, the waste of a fin-fish can be eaten by filter feeders, and excess nutrients from both fish and filter feeders can be absorbed by algae which can also be sold, either as food or commercially useful byproducts. Not only does this reduce the amount of nutrient input from the fish farming into the surrounding waters, it also increases the amount of profit a farmer can make by using the fish waste to generate a greater quantity of usable products (mussels, seaweed, etc.) + +For modeling purposes, the primary animal organisms involved in these biodiverse environments can be partitioned into + +- predatory fish (phylum Chordata, subphylum Vertebrata); +- herbivorous fish (phylum Chordata, subphylum Vertebrata); +- molluscs (such as mussels, oysters, clams, and snails) (phylum Mollusca); +- crustaceans (such as crabs, lobsters, barnacles, and shrimp) (phylum Arthropoda, subphylum Crustacea); +- echinoderms (such as starfish, sea cucumbers, and sea urchins) (phylum Echinodermata); and +- algae. + +By feeding type, there are + +- primary producers (photosynthesizers—these can be single-cell phytoplankton, cyanobacteria, or multicellular algae); +- filter feeders (they strain plankton, organic particles, and sometimes bacteria out of the water); +- deposit feeders (they eat mud and digest the organic molecules and nutrients out of it); +- herbivores (they eat primary producers); and +- predators (carnivores). + +Just as on land, most of the carnivores eat herbivores or smaller carnivores, but in the ocean they can also eat many of the filter feeders and deposit feeders. Most animals have growth efficiencies of $10 - 20\%$ , so $80 - 90\%$ of what they ingest ends up as waste in one form or another (some dissipated heat, some physical waste, etc.). 
The role of coral in this biodiverse environment is largely to partition the space and allow species to condense and coexist by giving a large number of species each its own chance at a livable environment in a relatively small space: the aquatic analogue of high-rise urbanization. Coral also provides some filter feeding, which helps clean the water.

The ability of an area to support coral depends on many factors, the most important of which is water quality. For example, corals in Bolinao are able to live and reproduce in waters that contain half a million to a million bacteria per milliliter and $0.25\mu \mathrm{g}$ chlorophyll per liter (a proxy for phytoplankton biomass). The fish-pen channel currently sees levels upwards of 10 million bacteria per milliliter and $15\mu \mathrm{g}$ chlorophyll per liter. Excess nutrients from the milkfish farms encourage fast-growing algae to choke out coral growth, and particulate influx from the milkfish farms reduces corals' ability to photosynthesize. Therefore, before coral larvae can begin to grow, acceptable water quality must be established. Other threats to coral include degradation from increasing ocean acidity due to increased atmospheric $\mathrm{CO}_{2}$, and degradation from increasing ocean temperature due to global warming. These can be considered second-order threats, which we will not specifically address in this problem.

# Problem Statement

The challenge for this problem is to come up with viable polyculture systems to replace the current monoculture farming of milkfish, so as to improve water quality sufficiently that coral larvae could begin settling and recolonizing the area. Your polyculture scenario should be economically interesting and environmentally friendly in both the short and the long term.

# 1. Model the Original Bolinao Coral Reef Ecosystem before Fishfarm Introduction

Develop a model of an intact coral reef foodweb containing the milkfish as the only predatory fish species, one particular herbivorous fish (of your choice), one mollusc species, one crustacean species, one echinoderm species, and one algae species. Specify the numbers of each species present in a way that you find reasonable; cite the sources you use or show the estimates you make in arriving at these population numbers. In articulating your model, specify how each species interacts with the others. Show how your model predicts a steady-state level of water quality sufficient for the continued healthy growth of your coral species. If your model does not yield a high-enough level of water quality, then adjust your numbers of each species in a way that you find most reasonable until you do achieve a satisfactory quality level, and describe clearly which species numbers you adjusted and why your changes were reasonable.

# 2. Model the Current Bolinao Milkfish Monoculture

a. First examine the impact if milkfish farming were to suppress other animal species. Do this by removing (setting to zero the populations of) all herbivorous fish, all molluscs, all crustaceans, and all echinoderms. Set all other populations to be the same as in your full model above. Since you have removed the milkfish's natural food supply, you will need to introduce a constant term that models farmer-feeding of the penned milkfish; choose this term to keep your model in equilibrium. What steady-state level of water quality does your model now predict? Is water quality sufficient for the continued healthy growth of your coral species? Describe how your result compares to observations.

b. Milkfish farming does not totally suppress all other animal species, and water quality is probably not as bad as your results from part 2a.
suggest, so use your model to simulate the current Bolinao situation by reintroducing all deleted species and adjusting only those populations until water quality matches that currently observed in Bolinao. Compare your populations with those currently observed in Bolinao and discuss what changes to your model could bring your population predictions into closer agreement with observations.

# 3. Model the Remediation of Bolinao via Polyculture

You now strive to replace the current monoculture with a polyculture industry, seeking to make the water clear enough that the original reef ecosystem that you modeled in part 1 can re-establish itself without any help from humans. The idea is to introduce an interdependent set of species such that, whatever feed the milkfish farmer puts in, the combination of all of the "livestock" will use it entirely so that there are no (or only minimal) leftover nutrients and particles (feed and feces) falling onto the newly-growing reef habitat below. Additionally, you seek to commercially harvest edible biomass from this polyculture in order to feed humans and increase value.

a. Develop a commercial polyculture to remediate Bolinao. Do this by starting with your "current" penned model from part 2b, and introduce into it additional species that both help clean the water and yield valuable, harvestable biomass. For example, you could line the pens with mussels, oysters, clams, or other economically valuable filter feeders to remove some of the waste from the milkfish. Economically valuable algae could be grown on the sides of the pens near the surface (where they get enough light), and some of these could feed the small herbivorous fish that feed the milkfish. Clearly present your model and its steady-state populations.

b. Report on the outputs of your model. What did you optimize, what constraints did you enforce, and why? What water quality does your model yield?
How much harvest does your model yield, and what is its economic value? How much does it cost you to further improve water quality? In other words, from your optimal scenario, how many dollars of harvest does it cost to improve water quality by one unit?

# 4. Science

Discuss the harvesting of each species for human consumption. How do we use your model for predicting or understanding harvesting for human consumption? Does a harvested pound of carnivorous fish count the same as a harvested pound of seaweed, so that we seek to maximize the total weight harvested; or do we differentiate by value (as measured by the price of each harvested species), so that we seek to maximize the value of the harvest? Or do we seek to maximize the total value of the harvest minus the cost of milkfish feed? Should we define the value of edible biomass as the sum of the values of each species harvested, minus the cost of milkfish feed?

# 5. Maximize the Value of the Total Harvest

We now wish to maintain an acceptable (maximal) level of water quality while harvesting a high (maximal) value of marketable biomass from all living species in the model for human consumption (edible and saleable byproducts are equally legitimate ways to maximize value). Change your model to harvest a constant amount from each species. What is the total value of biomass (as defined above) that you can harvest, and what is the corresponding water quality? Try different harvesting strategies and different levels of milkfish feeding (always choosing values that will keep your model in equilibrium), and graph water quality as a function of harvest value. What strategy is optimal, and what is the optimal harvest?

# 6. Call to Action

Write an information paper to the director of the Pacific Marine Fisheries Council summarizing your findings on the relationship between biodiversity and water quality for coral growth. Include a strategy for remediating an area like Bolinao and an estimate of how long remediation will take.
Present your optimal harvesting/feeding strategy from part 5 above along with persuasive justification, and present suggested fishing/harvest quotas that will implement your plan. Show the leverage of your strategy by presenting the ratio of the harvest value under your plan to the harvest value under the current Bolinao scenario. Discuss the pros and cons from an ecological perspective of implementing your polyculture system. + +# Getting Started References + +```txt +http://en.wikipedia.org/wiki/Integrated_Multi-trophic_Aquaculture +http://en.wikipedia.org/wiki/Coral_reef +http://www.seaworld.org/infobooks/Coral/home.html +``` + +# Supplementary Information + +Tables 1-3 are representative of the data that you will be able to find through public searches. These data may not be complete for your purposes and are intended only to help give you ideas on how to get started. You should use the best-suited and most complete data that you find. + +# References for Information found in the Tables + +Cruz-Rivera, Edwin, and Valerie J. Paul. 2006. Feeding by coral reef mesograzers: Algae or cyanobacteria? Coral Reefs 25 (4) (November 2006): 617-627. + +![](images/8730ed27fbf544394abd35bc234bcf52b97c7bd91cb997ca62800658eaa4d558.jpg) +Figure 1. Map of the Bolinao area and the sites sampled for water quality data listed in Tables 1 and 2. Sites A and B have fairly healthy coral reefs, while Site C has fairly degraded reefs, Site D has a few corals still holding on but is mostly dead coral and algae at this point in time, and the area under the fish pens no longer has live coral at all. In the fish pen channel, farmers employ nets measuring roughly $10\mathrm{m} \times 10\mathrm{m} \times 8\mathrm{m}$ with stocking densities of 50,000 fish per pen and 10 pens per hectare. (Source: Garren et al. [2008]). + +Table 1. Water characteristics of Bolinao sites (from Garren et al. [2008]). + +
| Site | Dissolved Organic Carbon (DOC) (μM) | Total Nitrogen, dissolved (μM) | Chl a (μg/L) | Particulate Organic Carbon (POC) (μg/L) | Total Nitrogen, particulate (μg/L) |
|------|------|------|------|------|------|
| A | 69.7 ± 1.3 | 7.4 ± 0.4 | 0.25 ± 0.03 | 106 ± 4 | 9 ± 15 |
| B | 80.4 ± 2.9 | 8.0 ± 0.2 | 0.28 ± 0.03 | 196 ± 57 | 39 ± 15 |
| C | 89.6 ± 1.7 | 14.2 ± 0.7 | 0.38 ± 0.03 | 662 ± 68 | 54 ± 17 |
| D | 141 ± 2.9 | 30.5 ± 1.3 | 4.5 ± 0.2 | 832 ± 33 | 886 ± 45 |
| Fish pens | 162 ± 18.5 | 39.8 ± 2.7 | 10.3 ± 0.2 | 641 ± 60 | 86 ± 18 |
+ +Table 2. Bacteria and particle abundances in Bolinao (from Garren et al. [2008]). + +
| Site | Virus-like particle abundance (#/ml × 10^7) | Free-living bacteria abundance (cells/ml × 10^5) | Particle-attached bacteria abundance (cells/ml × 10^2) | % of total bacteria attached to particles | Detritus particles (#/ml × 10^3) | Phytoplankton cells (#/ml × 10^2) | Avg. particle size (μm²) |
|------|------|------|------|------|------|------|------|
| A | 1.0 ± 0.07 | 5.4 ± 0.3 | 5.3 ± 2.2 | < 0.1 | 3.4 ± 0.2 | 1.6 ± 0.2 | 42.7 |
| B | 0.8 ± 0.04 | 4.2 ± 0.6 | 3.9 ± 0.6 | < 0.1 | 4.4 ± 0.2 | 1.0 ± 0.1 | 19.7 |
| C | 1.7 ± 0.1 | 3.0 ± 0.04 | 113.7 ± 3.6 | 3.7 | 9.6 ± 0.8 | 1.1 ± 0.1 | 65.8 |
| D | 7.0 ± 0.3 | 6.1 ± 0.6 | 144.5 ± 5.6 | 2.3 | 14.4 ± 0.1 | 9.7 ± 0.7 | 576.1 |
| Fish pens | 6.1 ± 0.7 | 9.9 ± 0.3 | 583.2 ± 28.1 | 5.6 | 11.3 ± 0.5 | 78.4 ± 5.5 | 280.8 |

A particle is defined as larger than 3 μm.
Fox, Rebecca J., and David R. Bellwood. 2008. Direct versus indirect methods of quantifying herbivore grazing impact on a coral reef. *Marine Biology* 154 (2) (April 2008): 325-334.
Garren, Melissa, Steven Smriga, and Farooq Azam. 2008. Gradients of coastal fish farm effluents and their effect on coral reef microbes. *Environmental Microbiology* 10 (9) (September 2008): 2299-2312.
Hawkins, A.J.S., R.F.M. Smith, S.H. Tan, and Z.B. Yasin. 1998. Suspension-feeding behaviour in tropical bivalve molluscs: Perna viridis, Crassostrea belcheri, Crassostrea iradelei, Saccostrea cucculata and Pinctada margarifera. *Marine Ecology Progress Series* 166 (May 1998): 173-185.
Holmer, Marianne, Núria Marba, Jorge Terrados, Carlos M. Duarte, and Mike D. Fortes. 2002. Impacts of milkfish (Chanos chanos) aquaculture on carbon and nutrient fluxes in the Bolinao area, Philippines. *Marine Pollution Bulletin* 44 (7) (July 2002): 685-696.
McPherson, B.F. 1968. Feeding and oxygen uptake of the tropical sea urchin Eucidaris tribuloides (Lamarck). *Biological Bulletin* 135 (October 1968): 308-321.
Merino, German E., Raul H. Piedrahita, and Douglas E. Conklin. 2007. Ammonia and urea excretion rates of California halibut (Paralichthys californicus, Ayres) under farm-like conditions. *Aquaculture* 271 (1-4) (October 2007): 227-243.
Xu, Yongjian, Jianguang Fang, Qisheng Tang, Junda Lin, Guanzong Le, and Lv Liao. 2008. Improvement of water quality by the macroalga, Gracilaria lemaneiformis (Rhodophyta), near aquaculture effluent outlets. *Journal of the World Aquaculture Society* 39 (4): 549-555.
Yokoya, Nair S., and Eurico C. Oliveira. 1992. Temperature responses of economically important red algae and their potential for mariculture in Brazilian waters. *Journal of Applied Phycology* 4 (4) (December 1992): 339-345.

Table 3. Organism information.
| Organism | Data source | Trophic classification | What it eats | How much it eats | What it excretes | Value when harvested |
|---|---|---|---|---|---|---|
| Milkfish | Holmer et al. [2002] | predator | fish feed or smaller fish | In pens: 6.58 kg/m² of pen/5 months | 242-493 g dry weight of sediment/m²/day\* | $1,278 USD/metric ton (from Agribusiness Weekly) |
| Herbivorous fish (Siganus doliatus, a rabbitfish, as representative) | Fox and Bellwood [2008] | herbivore | macro algae (fleshy algae) | 18-22 cm³ of algae material/m² of reef/month | | |
| Crustaceans (data averaged over one crab (Menaethius monoceros) and one amphipod (Cymadusa imbroglio)) | Cruz-Rivera and Paul [2006] | herbivore | macro algae and cyanobacteria | 10-20 mg wet weight of food/individual/day | | Values on the Web |
| Molluscs (averaged over 5 species of mussels and oysters) | Hawkins et al. [1998] | filter feeder | particles 1-16 μm in diameter | They clear 5-7 L of water/hr of particles and absorb 4-15 mg organic material/g dry soft tissue weight/hr | | Values on the Web |
| Echinoderm (urchin, Tripneustes gratilla, from the Philippines, as representative) | Dy et al. [2002] | herbivore | fleshy algae | 0.05 g wet weight algae/g dry weight urchin/hr, where average dry weight of an individual was 6.9 g | 0.2-11.5 mg dry weight feces/g dry weight urchin | |
| Algae | Yokoya and Oliveira [1992] | primary producer | sunlight, carbon dioxide, nitrogen, phosphorus | \*\* | \*\*\* | |
\*This sediment is approximately 10% carbon, 0.4% nitrogen, and 0.6% phosphorus dry weight.
\*\*Depending on temperature, economically important red algae can double their mass (wet weight) in as little as … and as long as 50.0 days (Pterocladia capillacea).
\*\*\*These organisms can extrude excess photosynthate in the form of dissolved organic carbon, but this is a difficult number to quantify. Simply keep in mind that this process is occurring as you think about the ecological perspective in part 6.

# The Results

The 374 solution papers were coded at COMAP headquarters so that names and affiliations of the authors were unknown to the judges. Each paper was then read preliminarily by "triage" judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. Final judging by a team of modelers, analysts, and subject-matter experts took place in April. The judges classified the 374 submitted papers as follows:
| | Outstanding | Meritorious | Honorable | Successful | Total |
|---|---|---|---|---|---|
| Coral reef | 2 | 36 | 144 | 192 | 374 |
The two papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with a commentary by the judges. We list those two Outstanding teams and the 36 Meritorious teams (and advisors) below. The complete list of all participating schools, advisors, and results is provided in the Appendix.

# Outstanding Teams

"Rebalancing Human-Influenced Ecosystems"

China University of Mining and Technology, Xuzhou, Jiangsu, China

Advisor: Xingyong Zhang

Team members: YuanSi Zhang, ShuoPeng Wang, Ning Cui

"Striving for Balance: Why Reintroducing More Species to Fish Farm Ecosystems Yields Bigger Profits"

United States Military Academy, West Point, NY

Advisor: Kristen Arney

Team members: Sean Clement, Timothy Newlin, Joseph Lucas

# Meritorious Teams (36)

Asbury College, Mathematics and Computer Science, Wilmore, KY (Duk Lee)

Asbury College, Mathematics and Computer Science, Wilmore, KY (Kenneth P. Rietz)

Bandung Institute of Technology, Mathematics, Bandung, West Java, Indonesia (Agus Yodi Gunawan)

Beijing University of Posts and Telecommunications, Computer Science and Technology, Beijing, China (Hongxiang Sun)

California State University Monterey Bay, Mathematics, Seaside, CA (Hongde Hu)

Carroll College, Mathematics, Engineering, and Computer Science, Helena, MT (Kelly Cline)

Fudan University, Mathematical Sciences, Shanghai, China (Yuan Cao)

Harbin Institute of Technology, Mathematics, Harbin, Heilongjiang, China (Qi Guo)

Harbin Institute of Technology, Mathematics, Harbin, Heilongjiang, China (Yong Wang)

Harvey Mudd College, Mathematics, Claremont, CA (Zach Dodds)

Humboldt State University, Environmental Resources Engineering, Arcata, CA (Brad Finney)

Jinan University, Mathematics, Guangzhou, Guangdong, China (Daiqiang Hu)

National University of Defense Technology, Applied Mathematics, Changsha, Hunan, China (Lizhi Cheng)

National University of
Defense Technology, Mathematics and System Science, Changsha, Hunan, China (Mengda Wu)

Northwestern Polytechnical University, Applied Mathematics, Xi'an, Shaanxi, China (Huayong Xiao)

Northwestern Polytechnical University, Applied Mathematics, Xi'an, Shaanxi, China (Min Zhou)

Olin College, Needham, MA (Burt S. Tilley)

Peking University, Health Science Center, Beijing, China (Zhiyu Tang)

Peoples' Liberation Army University of Science and Technology, Command Automation, Nanjing, Jiangsu, China (Zhao Ying)

Shandong University at Weihai, Mathematics and Statistics, Weihai, Shandong, China (Yang Bing and Cao Zhulou)

Simpson College, Biology, Indianola, IA (Pat Singer)

Simpson College, Mathematics, Indianola, IA (Debra Czarneski)

Southeast University, Mathematics, Nanjing, Jiangsu, China (Zhizhong Sun)

Southeast University, Mathematics, Nanjing, Jiangsu, China (Jun Huang)

Southeast University, Mathematics, Nanjing, Jiangsu, China (Feng Wang)

Southwest University, Mathematics, Chongqing, China (Lin Wei)

University of International Business and Economics, International Trade and Economics, Beijing, China (Baomin Dong)

University of Science and Technology of China, Electronic Engineering and Information Science, Hefei, Anhui, China (Yu He)

Xidian University, Mathematics, Xi'an, Shaanxi, China (Xiaogang Qi)

Xidian University, Science, Xi'an, Shaanxi, China (Hanwen Yu)

Zhejiang University, Mathematics, Hangzhou, China (Biao Wu)

Zhejiang University, Mathematics, Hangzhou, China (Yong Wu)

Zhejiang University, Mathematics, Hangzhou, China (Zhongfei Zhang)

Zhengzhou Information Engineering Institute, Zhengzhou, Henan, China (Jian Ping Du)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Advisor Team)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Yuanbiao Zhang)

# Awards and Contributions
Each participating ICM advisor and team member received a certificate signed by the Contest Directors and the Head Judge. Additional awards were presented to the team from the China University of Mining and Technology by the Institute for Operations Research and the Management Sciences (INFORMS).

# Judging

Contest Directors

Chris Arney, Division Chief, Mathematical Sciences Division, Army Research Office, Research Triangle Park, NC

Joseph Myers, Computing Sciences Division, Army Research Office, Research Triangle Park, NC

Associate Director

Rodney Sturdivant, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Judges

John Kobza, Dept. of Industrial Engineering, Texas Tech University, Lubbock, TX

Sheila Miller, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Melissa Garren, Scripps Institution of Oceanography, La Jolla, CA

Frank Wattenberg, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY

Triage Judges

Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY: Amanda Beecher, Randy Boucher, Robert Burks, Pete Charbonneau, Eric Drake, Aaron Elliott, Bill Fehlman, Douglas Fletcher, Andy Glen, Tina Hartley, Alex Heidenberg, Donald Outing, Jon Roginski, Rodney Sturdivant, Frank Wattenberg, and Brian Winkel

# Acknowledgments

We thank:

- INFORMS, the Institute for Operations Research and the Management Sciences, for its support in judging and providing prizes for the INFORMS winning team;
- IBM for its support of the contest;
- all the ICM judges and ICM Board members for their valuable and unflagging efforts;
- the staff of the U.S. Military Academy, West Point, NY, for hosting the triage and final judging.
# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the team papers here is the result of undergraduates working on a problem over a weekend; allowing substantial revision by the authors could give a false impression of accomplishment. So these papers are essentially au naturel. Light editing has taken place: minor errors have been corrected, wording has been altered for clarity or economy, style has been adjusted to that of The UMAP Journal, and the papers have been edited for length. Please peruse these student efforts in that context.

To the potential ICM Advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Appendix: Successful Participants

KEY:

P = Successful Participation

H = Honorable Mention

M = Meritorious

O = Outstanding (published in this special issue)

C denotes the ICM Problem
INSTITUTIONDEPT.CITYADVISORC
CALIFORNIA
Calif. State U. Monterey BayMathSeasideHongde HuM
Harvey Mudd C.MathClaremontZach DoddsM
Harvey Mudd C.MathClaremontFrancis SuH
Humboldt State U.Env'l Res. Eng.ArcataBrad FinneyM
IOWA
Simpson C.BiologyIndianolaClinton MeyerH
Simpson C.BiologyIndianolaPat SingerM
Simpson C.MathIndianolaDebra CzarneskiM
Simpson C.MathIndianolaWilliam SchellhornH
KENTUCKY
Asbury C.Math & CSWilmoreDavid L. CoullietteH
Asbury C.Math & CSWilmoreDuk LeeM
Asbury C.Math & CSWilmoreKenneth P. RietzM
MASSACHUSETTS
Frontier Regional Sch.BiologySouth DeerfieldBill CanadayH
Frontier Regional Sch.BiologySouth DeerfieldBill CanadayP
Frontier Regional Sch.MathSouth DeerfieldSteve BlinderH
Frontier Regional Sch.MathSouth DeerfieldSteve BlinderH
Frontier Regional Sch.MathSouth DeerfieldGarrett DeaneH
Frontier Regional Sch.MathSouth DeerfieldGarrett DeaneP
Frontier Regional Sch.MathSouth DeerfieldBev MacLeodH
Frontier Regional Sch.MathSouth DeerfieldDave MakoP
Frontier Regional Sch.MathSouth DeerfieldDave MakoP
Frontier Regional Sch.MathSouth DeerfieldCarol PikeH
Frontier Regional Sch.MathSouth DeerfieldCarol PikeP
Frontier Regional Sch.Sci.South DeerfieldChevy SeneyP
Frontier Regional Sch.Sci.South DeerfieldChevy SeneyP
Olin CollegeNeedhamBurt S. TilleyM
MINNESOTA
Bemidji State U.Math & CSBemidjiColleen LivingstonP
MONTANA
Carroll C.Math, Eng., & CSHelenaKelly ClineM
NEW JERSEY
Princeton U.Ops. Res. & Fin. Eng.PrincetonBirgit RudloffH
NEW YORK
U.S. Military Acad.MathWest PointKristin ArneyO
U.S. Military Acad.MathWest PointJanet BraunsteinP
WISCONSIN
Beloit C.Math & CSBeloitPaul J. CampbellH
CHINA
Anhui
Anhui U.Electron. Sci. & Tech.HefeiZhixiang HuangH
Anhui U.Appl. MathHefeiXuejun WangH
Anhui U.StatsHefeiLigang ZhouH
Anqing Teachers CollegeMath & CSAnqingBen Yue SuP
Hefei U. of Tech.MathHefeiXueqiao DuH
Hefei U. of Tech.Appl. MathHefeiHuaming SuH
Hefei U. of Tech.Appl. MathHefeiHuaming SuP
U. of Sci. & Tech. of ChinaElectron. Eng. & Info.HefeiYu HeM
Beijing
Beihang U.Advanced Eng.BeijingWei FengP
Beihang U.Instr. Sci. & Opto-electron. Eng.BeijingHaifeng DongP
Beihang U.Sci.BeijingHongying LiuP
Beijing Forestry U.Sci.BeijingLi Hong JunP
Beijing Forestry U.Sci.BeijingMengning GaoP
Beijing Inst. of Tech.MathBeijingHuafei SunP
Beijing Inst. of Tech.MathBeijingChunlei CaoP
Beijing Inst. of Tech.MathBeijingGui-Feng YanP
Beijing Inst. of Tech.MathBeijingYan DongP
Beijing Jiaotong U.ChemistryBeijingYongsheng WeiP
Beijing Jiaotong U.CSBeijingXun ChenH
Beijing Jiaotong U.CSBeijingXun ChenP
Beijing Jiaotong U.MathBeijingDan XueH
Beijing Jiaotong U.MathBeijingDan XueP
Beijing Jiaotong U.PhysicsBeijingBingli FanP
Beijing Jiaotong U.PhysicsBeijingQiao WangH
Beijing Jiaotong U.Traffic Eng.BeijingWen DengP
Beijing Jiaotong U.Traffic Eng.BeijingWen DengP
Beijing Lang. & Cult. U.CSBeijingGuilong LiuH
Beijing Lang. & Cult. U.CSBeijingGuilong LiuP
Beijing Lang. & Cult. U.CSBeijingXiaoxia ZhaoP
Beijing Lang. & Cult. U.CSBeijingXiwen ZhangP
Beijing Lang. & Cult. U.CSBeijingYanbing FengH
Beijing U. of Chemical Tech.Math & Info. Sci.BeijingGuangfeng JiangH
Beijing U. of Posts & Telecomm.Appl. Math.BeijingZuguo HeH
Beijing U. of Posts & Telecomm.Appl. Math.BeijingZuguo HeH
Beijing U. of Posts & Telecomm.Appl. Phys.BeijingJinkou DingH
Beijing U. of Posts & Telecomm.Appl. Phys.BeijingWenbo ZhangH
Beijing U. of Posts & Telecomm.Comm. Eng.BeijingLixia WangP
Beijing U. of Posts & Telecomm.CS & Tech.BeijingHongxiang SunM
Beijing U. of Posts & Telecomm.CS & Tech.BeijingLixia WangH
Beijing U. of Posts & Telecomm.CS & Tech.BeijingLixia WangH
Beijing U. of Posts & Telecomm.CS & Tech.BeijingWenbo ZhangH
Beijing U. of Posts & Telecomm.CS & Tech.BeijingXiaoxia WangH
Beijing U. of Posts & Telecomm.CS & Tech.BeijingXiaoxia WangH
Beijing U. of Posts & Telecomm.CS & Tech.BeijingXinchao ZhaoP
Beijing U. of Posts & Telecomm.CS & Tech.BeijingXinchao ZhaoP
Beijing U. of Posts & Telecomm.CS & Tech.BeijingZuguo HeP
Beijing U. of Posts & Telecomm.Electron. Eng.BeijingJianhua YuanH
Beijing U. of Posts & Telecomm.Electron. Eng.BeijingQing ZhouH
Beijing U. of Posts & Telecomm.Electron. Info. Eng.BeijingXueli WangP
Beijing U. of Posts & Telecomm.Communication Eng.BeijingZuguo HeP
Capital U. of Econ. & BusinessEcon.BeijingXue LiH
Capital U. of Econ. & BusinessEcon.BeijingXue LiH
Capital U. of Econ. & BusinessInfo. MgmtBeijingWei ShenP
Capital U. of Econ. & BusinessStatsBeijingQuan ZhangH
Central U. of Finance & Econ.Appl. MathBeijingXianjun YinP
Central U. of Finance & Econ.Appl. MathBeijingXiaoming FanP
Central U. of Finance & Econ.Appl. MathBeijingXiuguo WangH
Central U. of Finance & Econ.Appl. MathBeijingZhaoxu SunH
Central U. of Finance & Econ.Appl. MathBeijingDonghong LiP
Central U. of Finance & Econ.Appl. MathBeijingHuiqing HuangH
Central U. of Finance & Econ.Appl. MathBeijingWeihong YuP
Central U. of Finance & Econ.Appl. MathBeijingXiuguo WangH
Central U. of Finance & Econ.Appl. MathBeijingZongze ChaiH
Central U. of Finance & Econ.Appl. MathBeijingXianjun YinP
Central U. of Finance & Econ.Appl. MathBeijingXiaoming FanH
Central U. of Finance & Econ.China Econ. & Mgmt Acad.BeijingYuanzhu LuP
China Agricultural U.Sci.BeijingGuohui LiP
China U. of GeosciencesInfo. Eng.BeijingJiegen FengP
China U. of GeosciencesInfo. Eng.BeijingBaozeng ChuP
China U. of GeosciencesInfo. Tech.BeijingHaiying WangP
China U. of GeosciencesMathBeijingCuixiang WangP
China U. of GeosciencesMathBeijingLinlin ZhaoP
North China Electr. Power U.MathChangpingZhang KemingH
Peking U.Electron. Eng. &CSBeijingZhiwei TongH
Peking U.Guanghua Schl of MgmtBeijingXiao FuH
Peking U.MathBeijingYulong LiuH
Peking U.PhysicsBeijingLiqiang SunH
Peking U.PhysicsBeijingLiqiang SunH
Peking U.PhysicsBeijingXiaodong HuP
Peking U. Health Sci. CtrBeijingZhiyu TangM
Peking U. Health Sci. CtrMathBeijingDonghong GaoH
Peking U. Health Sci. CtrMathBeijingDongqi HeP
Peking U. Health Sci. CtrMathBeijingJinbing AnH
Peking U. Health Sci. CtrMathBeijingQiang WangH
Peking U. Inst. Condensed MatterPhysicsBeijingHongli WangH
Tsinghua U.MathBeijingJun YeP
Tsinghua U.MathBeijingMei LuP
Tsinghua U.MathBeijingZhiming HuH
U. of Int'l Business & Econ.Info. Tech. & Mgmt Eng.BeijingWei GuoP
U. of Int'l Business & Econ.Info. Tech. & Mgmt Eng.BeijingJunlin HaoP
U. of Int'l Business & Econ.Info. Tech. & Mgmt Eng.BeijingYanling SuH
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingBaomin DongM
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingHongyu PanH
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingJin ZhangH
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingQiang WangH
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingQiang WangP
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingYe DongyaP
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingYe DongyaP
U. of Int'l Business & Econ.Int'l Trade & Econ.BeijingYiping XuP
U. of Sci. & Tech.Math & MechanicsBeijingZhixing HuP
U. of Sci. & Tech.MathBeijingJin ZhuH
Chongqing
Chongqing U.Info. & Comp'l Sci.ChongqingRenbin HeP
Chongqing U.Info. & Comp'l Sci.ChongqingJian XiaoP
Chongqing U.Info. & Comp'l Sci.ChongqingLuosheng WenP
Chongqing U.Sftwr Eng.ChongqingLi FuP
Chongqing U.Stats & Act'l Sci.ChongqingTengzhong RongP
Southwest U.MathChongqingLin WeiM
Southwest U.StatsChongqingJianjun YuanP
Southwest U.StatsChongqingXuegao ZhengH
Fujian
Fujian Agri. & Forestry U.Food Sci.FuzhouYongxue ChenP
Guangdong
Jinan U.MathGuangzhouShizhuang LuoP
Jinan U.MathGuangzhouDaiqiang HuH
Jinan U.MathGuangzhouShizhuang LuoH
Jinan U.MathGuangzhouChuanlin ZhangP
Jinan U.MathGuangzhouDaiqiang HuM
Shenzhen Poly.Electron. & Info. Eng.ShenzhenJianlong ZhongH
Shenzhen Poly.Mech'l & Electr. Eng.ShenzhenKanzhen ChenP
South China Agri. U.MathGuangzhouShaomei FangH
South China Agri. U.MathGuangzhouQingmao ZengP
South China Normal U.MathGuangzhouHunan LiH
South China Normal U.MathGuangzhouXiuxiang LiuH
South China U. of Tech.Appl. MathGuangzhouManfa LiangP
South China U. of Tech.Appl. MathGuangzhouWeijian DingH
South China U. of Tech.Appl. MathGuangzhouYi HongH
Xiamen U.Math & Appl. MathXiamenJianguo QianH
Zhuhai C. of Jinan U.Math Modeling Innov. Pract.ZhuhaiAdvisor TeamM
Zhuhai C. of Jinan U.Math Modeling Innov. Pract.ZhuhaiAdvisor TeamP
Zhuhai C. of Jinan U.Math Modeling Innov. Pract.ZhuhaiYuanbiao ZhangM
Zhuhai C. of Jinan U.Packaging Eng. Inst.ZhuhaiZhi-wei WangH
Zhuhai C. of Jinan U.Packaging Eng. Inst.ZhuhaiZhi-wei WangH
Hebei
North China Electr. Power U.Math & Phys.BaodingPo ZhangP
North China Electr. Power U.Math & Phys.BaodingYagang ZhangP
Heilongjiang
Harbin Eng. U.Sci.HarbinLiyan XuH
Harbin Eng. U.Sci.HarbinJue WangP
Harbin Eng. U.Sci.HarbinLei ZhuP
Harbin Eng. U.Sci.HarbinLiyan XuH
Harbin Eng. U.Sci.HarbinXiaowei ZhangH
Harbin Eng. U.Sci.HarbinXuguang YangH
Harbin Inst. of Tech.Electron. Eng.HarbinLin LiH
Harbin Inst. of Tech.Electron. Eng.HarbinLin LiH
Harbin Inst. of Tech.Electron. Eng. departmentHarbinLiwei SongP
Harbin Inst. of Tech.Env'l Sci. & Eng.HarbinTong ZhengH
Harbin Inst. of Tech.Management Sci. & Eng.HarbinHong GeH
Harbin Inst. of Tech.Management Sci. & Eng.HarbinWei ShangP
Harbin Inst. of Tech.MathHarbinChiping ZhangH
Harbin Inst. of Tech.MathHarbinGuanghong JiaoP
Harbin Inst. of Tech.MathHarbinGuoqing LiuP
Harbin Inst. of Tech.MathHarbinPing JiangP
Harbin Inst. of Tech.MathHarbinQi GuoH
Harbin Inst. of Tech.MathHarbinQi GuoM
Harbin Inst. of Tech.MathHarbinXianyu MengH
Harbin Inst. of Tech.MathHarbinYong WangM
Harbin Inst. of Tech.MathHarbinZhenfeng ShiH
Harbin Inst. of Tech.Municipal Eng.HarbinJunguo HeH
Harbin Inst. of Tech.Network Proj.HarbinXiaoping JiP
Harbin Inst. of Tech.Software Eng.HarbinYan LiuP
Harbin U. of Sci. & Tech.MathHarbinShanqiang LiH
Inst. of Tech.MathHarbinGuanghong GaoP
Northeast Agri. U.CS & Tech.HarbinYazhuo ZhangP
Northeast Agri. U.Food Sci. & Eng.HarbinYueying YangP
Northeast Agri. U.Life Sci.HarbinFangge LiH
Henan
Henan Inst. of Sci. & Tech.MathXinxiangDonge BaoP
Zhengzhou Info. Eng. Inst.Dept. 5ZhengzhouJian Ping DuM
Hubei
Huazhong U. of Sci. & Tech.Math & StatsWuhanZhibin HanH
Wuhan U.Math & StatsWuhanLiuyi ZhongP
Wuhan U.Math & StatsWuhanZhuangchu LuoP
Hunan
Central South U.Info. Sci. & Eng.ChangshaHongyan ZhangH
Central South U.Metal. Sci. & Eng.ChangshaMuzhou HouH
Hunan U.Math & EconometricsChangshaChuanxiu MaH
Hunan U.Math & EconometricsChangshaHan LuoP
Hunan U.Math & EconometricsChangshaYueping JiangP
Hunan U.SftwrChangshaZhiqiang YouP
National U. of Defense Tech.Appl. MathChangshaLizhi ChengM
National U. of Defense Tech.Appl. MathChangshaMeihua XieP
National U. of Defense Tech.Appl. MathChangshaYi WuH
National U. of Defense Tech.Math & Sys. Sci.ChangshaDan WangH
National U. of Defense Tech.Math & Sys. Sci.ChangshaMengda WuH
National U. of Defense Tech.Math & Sys. Sci.ChangshaMengda WuM
National U. of Defense Tech.Math & Sys. Sci.ChangshaWenqiang YangP
Inner Mongolia
Inner Mongolia U.MathHohhotHaiTao HanH
Jiangsu
China Pharmaceutical U.Basic Sci.NanjingYan FangrongH
China U. of Mining & Tech.MathXuzhouHu ShaoP
China U. of Mining & Tech.MathXuzhouMiao HanP
China U. of Mining & Tech.MathXuzhouShengwu ZhouH
China U. of Mining & Tech.MathXuzhouXingyong ZhangO
China U. of Mining & Tech.MathXuzhouXinli SuoP
China U. of Mining & Tech.MathXuzhouZongxiang WuH
China U. of Mining & Tech.Info. & Electr. Eng.XuzhouDunwei GongP
Nanjing U.Earth Sci.NanjingHuiqun ZhouP
Nanjing U. of Info. Sci. & Tech.MathNanjingGuosheng ChengP
Nanjing U. of Posts & Telecom.Math & Phys.NanjingLiWei XuP
Nanjing U. of Posts & Telecom.Math & Phys.NanjingJin XuH
Nanjing U. of Posts & Telecom.Math & Phys.NanjingYe JunP
Nanjing U. of Sci. & Tech.Appl. MathNanjingChungen XuP
Nanjing U. of Sci. & Tech.Appl. MathNanjingJun ZhangP
Nanjing U. of Sci. & Tech.Appl. MathNanjingWei XiaoP
PLA U. of Sci. & Tech.Comm. & AutomationNanjingYing ZhaoM
PLA U. of Sci. & Tech.Comm. Eng.NanjingKui YaoH
PLA U. of Sci. & Tech.Eng. CropsNanjingZuowei TianP
Southeast U.MathNanjingDan HeH
Southeast U.MathNanjingDan HeP
Southeast U.MathNanjingDaoyuan ZhuH
Southeast U.MathNanjingDaoyuan ZhuH
Southeast U.MathNanjingFeng WangM
Southeast U.MathNanjingFeng WangP
Southeast U.MathNanjingJun HuangH
Southeast U.MathNanjingJun HuangM
Southeast U.MathNanjingRui DuP
Southeast U.MathNanjingRui DuP
Southeast U.MathNanjingZhizhong SunH
Southeast U.MathNanjingZhizhong SunM
Jilin
Jilin U.MathChangchunChunling ChaoP
Jilin U.MathChangchunMingji LiuH
Jilin U.MathChangchunPeichen FangP
Jilin U.MathChangchunWenrui ZhengP
Jilin U.MathChangchunXiuling YaoP
Liaoning
Dalian Maritime U.MathDalianG. ChenP
Dalian Maritime U.MathDalianShuqin YangP
Dalian Maritime U.Math DepartmentDalianYunjie ZhangP
Dalian Nationalities U.Dean's OfficeDalianXiaoniu LiP
Dalian Nationalities U.Innovation CollegeDalianRixia BaiP
Dalian Nationalities U.Sci.DalianJinzhi WangP
Dalian Nationalities U.Sci.DalianLiming WangP
Dalian Nationalities U.Sci.DalianRendong GeP
Dalian U.Info. & Eng.DalianJiatai GangH
Dalian U.Info. & Eng.DalianXiangyu DongH
Dalian U.Info. & Eng.DalianGuangzhi LiuP
Dalian U.Info. & Eng.DalianZixin LiuP
Dalian U.Info. & Eng.DalianZixin LiuP
Dalian U.Info. & Eng.DalianXinxin TanH
Dalian U.Info. & Eng.DalianCheng ZhangP
Dalian U. of Tech.Appl. MathDalianLin FengH
Dalian U. of Tech.Appl. MathDalianMingfeng HeP
Dalian U. of Tech.Appl. MathematicaDalianLiang ZhangH
Dalian U. of Tech.Innovation ExperimentDalianDongjuan FuH
Dalian U. Of Tech.Innovation ExperimentDalianLiang ZhangH
Dalian U. of Tech.Innovation ExperimentDalianMeng DuH
Dalian U. of Tech.Innovation ExperimentDalianMeng DuH
Dalian U. of Tech.Innovation ExperimentDalianTao SunH
Dalian U. of Tech.Innovation ExperimentDalianXiaodan ZhangH
Dalian U. of Tech.Innovation ExperimentDalianZhen WangP
Dalian U. of Tech.Innovation ExperimentDalianZhen WangP
Dalian U. of Tech.SftwrDalianE WangP
Dalian U. of Tech.SftwrDalianJiaxin ZhaoH
Dalian U. of Tech.SftwrDalianLing XieP
Dalian U. of Tech.SftwrDalianTie QiuH
Dalian U. of Tech.SftwrDalianWenjie LiuP
Shenyang Inst. of Aero. Eng.Basic Sci.ShenyangYunqing ChenP
Shenyang Inst. of Aero. Eng.CSShenyangLimei ZhuP
Shenyang Inst. of Aero. Eng.Aero. Eng.ShenyangShiyun WangP
Shenyang Inst. of Aero. Eng.North School of Sci. & Tech.ShenyangWang DanP
Shenyang Inst. of Aero. Eng.North School of Sci. & Tech.ShenyangJiang BoP
Shenyang Inst. of Aero. Eng.North School of Sci. & Tech.Shenyangli YanjieP
Shenyang Inst. of Aero. Eng.North School of Sci. & Tech.ShenyangLiu WeifangH
Shaanxi
Northwestern Poly. U.Appl. MathXi'anHuayong XiaoM
Northwestern Poly. U.Appl. MathXi'anMin ZhouM
Northwestern Poly. U.Appl. MathXi'anQuanyi LuP
Xi'an Jiaotong U.MathXi'anJiayin WangH
Xi'an Jiaotong U.MathXi'anJicheng LiP
Xi'an Jiaotong U.MathXi'anYuan YiP
Xi'an Jiaotong U.MathXi'anZhuosheng ZhangH
Xidian U.MathXi'anXiaogang QiM
Xidian U.MathXi'anXuewen MuH
Xidian U.MathXi'anYoulong YangH
Xidian U.Sci.Xi'anFeng YeP
Xidian U.Sci.Xi'anHanwen YUM
Shandong
China U. of PetroleumMath & Comp'1 Sci.DongyingHua ChenP
Harbin Inst. of Tech.Foreign Lang.WeihaiJunping WangP
Liaocheng U.MathLiaoChengXianYang ZengP
Shandong U.MathJinanJianliang ChenP
Shandong U.MathJinanYuhai ZhangP
Shandong U.Phys.JinanFuxun WangP
Shandong U. at WeihaiMath & StatsWeihaiLi JingH
Shandong U. at WeihaiMath & StatsWeihaiSun WeiP
Shandong U. at WeihaiMath & StatsWeihaiJinTao Wang & Bing YangP
Shandong U. at WeihaiMath & StatsWeihaiBing Yang & Zhulou CaoM
Shanghai
Donghua U.Glorious Sun Schl of Bus. & MgmntShanghaiXiaofeng WangP
East China U. of Sci. & Tech.MathShanghaiLu XiwenH
East China U. of Sci. & Tech.MathShanghaiLu YuanhongP
East China U. of Sci. & Tech.MathShanghaiQian XiyuanH
East China U. of Sci. & Tech.Sci.ShanghaiRende YuP
East China U. of Sci. & Tech.Sci.ShanghaiWenbin HuangP
Fudan U.Appl. MathShanghaiYongji TanH
Fudan U.MathShanghaiYuan CaoM
Fudan U.MathShanghaiZhijie CaiP
Shanghai Finance U.Appl. MathShanghaiChungen ShenP
Shanghai Finance U.Appl. MathShanghaiXiaobin LiP
Shanghai Finance U.Appl. MathShanghaiYong FangP
Shanghai Finance U.MathShanghaiKeyan WangP
Shanghai Finance U.MathShanghaiRongqiang CheP
Shanghai Jiao Tong U.MathShanghaiBaorui SongP
Shanghai U. of Finance & Econ.Int'l TradeShanghaiYuying JinP
Shanghai U.MathShanghaiBinwu HeP
Sichuan
Chengdu U. of Tech.Info. MgmtChengduYouHua WeiP
Sichuan Agri. U.MathYa'anXudong LiuH
Sichuan Agricultural U.MathYaanShiping DuP
Sichuan U.MathChengduQiong ChenH
U. of Elec. Sci. & Tech. of ChinaAppl. MathChengduHongfei DuH
U. of Elec. Sci. & Tech. of ChinaAppl. MathChengduHongfei DuP
U. of Elec. Sci. & Tech. of ChinaInfo. & Comp'nChengduZhang YongP
Univ. of Elec. Sci. & Tech. of ChinaAppl. MathChengduGuoLiang HeP
Zhejiang
Hangzhou Dianzi U.Info. & MathHangzhouChengjia LiP
Hangzhou Dianzi U.Info. & MathHangzhouHao ShenP
Hangzhou Dianzi U.Info. & MathHangzhouWei LiH
Hangzhou Dianzi U.Info. & MathHangzhouZheyong QiuP
Hangzhou Dianzi U.Info. & MathHangzhouZhifeng ZhangH
Hangzhou Dianzi U.Info. & MathHangzhouZongmao ChengP
Ningbo Inst. of Tech., Zhejiang U.Funda. CoursesNingboQi WeiP
Ningbo Inst. of Tech., Zhejiang U.Funda. CoursesNingboZhening LiH
Shaoxing U.MathShaoxingJinghui HeH
Shaoxing U.MathShaoxingJue LuP
Zhejiang Gongshang U.MathHangzhouLing ZhuP
Zhejiang Gongshang U.MathHangzhouXuesong ZhouP
Zhejiang Gongshang U.MathHangzhouYinfei LiH
Zhejiang Gongshang U.MathHangzhouZhengzhong DingP
Zhejiang Normal U.Math, Phys. & Info. Eng.JinhuaYoutian QuH
Zhejiang Normal U.Math, Phys. & Info. Eng.JinhuaYoutian QuH
Zhejiang Sci-Tech U.MathHangzhouJueliang HuP
Zhejiang U.MathHangzhouBiao WuM
Zhejiang U.MathHangzhouBiao WuP
Zhejiang U.MathHangzhouQifan YangP
Zhejiang U.MathHangzhouYong WuM
Zhejiang U.MathHangzhouZhiyi TanH
Zhejiang U.MathHangzhouZhongfei ZhangM
Zhejiang U. City C.CS & Tech.HangzhouHuizeng ZhangH
Zhejiang U. City C.CS & Tech.HangzhouXueyong YuH
Zhejiang U. City C.Info. & CSHangzhouGui WangH
Zhejiang U. City C.Info. & CSHangzhouXusheng KangH
Zhejiang U. of Finance & Econ.Math & StatsHangzhouJi LuoP
Zhejiang U. of Finance & Econ.Math & StatsHangzhouJi LuoP
Zhejiang U. of Tech.Foreign Langs.HangzhouYongqi LiH
Zhejiang U. of Tech.Jianxing C.HangzhouShiming WangH
Zhejiang U. of Tech.Jianxing C.HangzhouShiming WangP
Zhejiang U. of Tech.Jianxing C.HangzhouWenxin ZhuoP
Zhejiang U. of Tech.MathHangzhouMinghua ZhouP
HONG KONG
Chinese U. of Hong KongMathShatin, New TerritoriesLeungfu CheungP
Hong Kong Baptist U.MathHong KongMan Lai TangP
Hong Kong Baptist U.MathHong KongKwong Ip LiuP
INDONESIA
Bandung Inst. of Tech.MathBandungAgus Yodi GunawanM
UNITED ARAB EMIRATES
American U. in DubaiLiberal ArtsDubaiJerry LegeP
American U. in DubaiLiberal ArtsDubaiJerry LegeP
# Rebalancing Human-Influenced Ecosystems

YuanSi Zhang

ShuoPeng Wang

Ning Cui

Dept. of Mathematics

China University of Mining and Technology

Xuzhou, Jiangsu, China

Advisor: Xingyong Zhang

# Summary

In Task 1, we establish a Volterra predator-prey model with three biological populations, and we specify the steady-state numbers of the three populations. Then, based on the Analytic Hierarchy Process and a competition model, we obtain the ratio of different species in the second population, predict that the steady-state level of water quality is not high, and make the water quality satisfactory by adjusting the numbers of six species.

In Task 2, when milkfish farming suppresses other animal species, we set up a logistic model, and predict that the water quality at steady state is awful, the same as in the fish pens—insufficient for the continued healthy growth of coral species. When other species are not totally suppressed, with an improved predator-prey model we simulate the water quality of Bolinao (making it match current quality), obtain predicted numbers of populations, and discuss changes to the predator-prey model aimed at making the numbers of the populations agree more closely with observations.

In Task 3, we establish a polyculture model that reflects an interdependent set of species, introduce mussels and seaweed growing on the sides of the pens, and obtain the numbers of populations in steady state and the outputs of our model.

In Tasks 4 and 5, we differentiate the monetary values of different kinds of edible biomass and define the total value as the sum of the values of each species harvested, minus the cost of milkfish feed. Under circumstances of acceptable water quality, we build a nonlinear equilibrium optimization model, from which we obtain an optimal strategy and harvest.

In Task 6, we put forward a strategy to improve the water quality in Bolinao.
Taking the ratio of feed cost to net income as an index, the index value of our model is smaller than that of the current Bolinao system, which demonstrates the leverage of the strategy. We also analyze the polyculture system from an ecological perspective.

# Introduction

To improve the situation in Bolinao, we need to establish a practicable polyculture system and introduce it gradually. So our goal is clear:

- Model the original Bolinao coral reef ecosystem before fish-farm introduction.
- Model the current Bolinao milkfish monoculture.
- Model the remediation of Bolinao via polyculture.
- Discuss the outputs and economic values of species.
- Write a brief to the director of the Pacific Marine Fisheries Council summarizing the relationship between biodiversity and water quality for coral growth.

Our approach is:

- Deeply analyze the data in the problem, gradually establishing a model of the coral reef foodweb.
- With available data as evaluation criteria, confirm the water quality based on elements in the sediment.
- Establish models, and interpret the actual situation with data, with the purpose of improving water quality.
- Discuss further results based on our work.

# Solutions

# Task 1

Aiming toward a coral reef foodweb model, we assume that all the species grow in the same fish pen. We divide the species into three populations:

- one alga species (Population 1);

- one herbivorous fish, one mollusc species, one crustacean species, and one echinoderm species (Population 2); and
- the sole predator species, milkfish (Population 3).

The interrelationships among the species are presented in Figure 1.

![](images/f4beb7c04757d5e4c6cc46f98e139d6f2a8c062e988170727c3079b34552c017.jpg)
Figure 1. Interrelationships among three populations.

On this basis, we can establish a Volterra predator-prey model with three populations [Shan and Tang 2007]. Let the number of the $i$th population be $x_i(t)$.
If we do not take into consideration the restrictions of natural resources, the alga species of Population 1 growing in isolation will follow an exponential growth law with relative growth rate $r_1$, so that $\dot{x}_1(t) = r_1 x_1$. However, species of Population 2 feeding on the alga species will decrease the growth rate of the algae, so the revised model of the alga species is

$$
\dot{x}_1(t) = x_1\left(r_1 - \lambda_1 x_2\right),
$$

where the proportionality coefficient $\lambda_1$ reflects the feeding capability of the species in Population 2 on the alga species.

Assume that the death rate of the species in Population 2 is $r_2$ when existing in isolation; then $\dot{x}_2(t) = -r_2 x_2$, so based on the foodweb we conclude that

$$
\dot{x}_2(t) = x_2\left(-r_2 + \lambda_2 x_1\right),
$$

where the proportionality coefficient $\lambda_2$ reflects the support capability of the alga species for Population 2—which in turn provide food for the milkfish. The milkfish reduce the growth rate of the species in Population 2, so we must subtract their feeding effect to get

$$
\dot{x}_2(t) = x_2\left(-r_2 + \lambda_2 x_1 - \mu x_3\right).
$$

Likewise, the model for the milkfish is

$$
\dot{x}_3(t) = x_3\left(-r_3 + \lambda_3 x_2\right).
$$

Altogether, we have an interdependent and mutually-restricting mathematical model of the three populations:

$$
\dot{x}_1(t) = x_1\left(r_1 - \lambda_1 x_2\right),
$$

$$
\dot{x}_2(t) = x_2\left(-r_2 + \lambda_2 x_1 - \mu x_3\right),
$$

$$
\dot{x}_3(t) = x_3\left(-r_3 + \lambda_3 x_2\right).
$$

Since this system of differential equations has no analytic solution, we need to use Matlab to get its numerical solution.

Ecologists point out that a periodic solution cannot be observed in most balanced ecosystems; in a balanced ecosystem, there is an equilibrium.
In addition, some ecologists think that the long-existing and periodically-changing balanced ecosystems in nature tend toward a stable equilibrium; that is, if the system diverges from the former periodic cycle because of a disturbance, an internal control mechanism will restore it. However, the periodic behavior described by the Volterra model is not structurally stable: even subtle adjustments to the parameters change the periodic solution.

So we improve the model by letting the alga species follow logistic growth if in isolation:

$$
\dot{x}_1(t) = r_1 x_1\left(1 - \frac{x_1}{N_1}\right),
$$

where $N_1$ is the maximum population of the alga species allowed by the environmental resources. The alga species provides food for the species of Population 2, so the model for the alga species becomes

$$
\dot{x}_1(t) = x_1 r_1\left(1 - \frac{x_1}{N_1} - \sigma_1\frac{x_2}{N_2}\right),
$$

where $N_2$ is the maximum capacity of the species in Population 2 and $\sigma_1$ is the quantity of algae (relative to $N_1$) eaten by a unit quantity of Population 2 (relative to $N_2$).

Without the algae, the species in Population 2 will perish; let their death rate be $r_2$, so that in isolation we have

$$
\dot{x}_2(t) = -r_2 x_2.
$$

The algae provide food for Population 2, so we should add that effect; the growth of the species in Population 2 is also influenced by internal blocking action; so we get

$$
\dot{x}_2(t) = r_2 x_2\left(-1 - \frac{x_2}{N_2} + \sigma_2\frac{x_1}{N_1}\right),
$$

where $\sigma_2$ is analogous to $\sigma_1$. Analogously, we get the full model of the species in Population 2 via

$$
\dot{x}_2(t) = r_2 x_2\left(-1 - \frac{x_2}{N_2} + \sigma_2\frac{x_1}{N_1} - \sigma_3\frac{x_3}{N_3}\right).
$$

Without the species in Population 2, milkfish will disappear; we set their death rate as $r_3$. The species in Population 2 provide food for the milkfish, and the growth of milkfish is also restricted by internal blocking action. Here the model is

$$
\dot{x}_3(t) = r_3 x_3\left(-1 - \frac{x_3}{N_3} + \sigma_4\frac{x_2}{N_2}\right).
$$

Summarizing, we have simultaneous equations constituting an interdependent mathematical model for the three populations:

$$
\dot{x}_1(t) = x_1 r_1\left(1 - \frac{x_1}{N_1} - \sigma_1\frac{x_2}{N_2}\right),
$$

$$
\dot{x}_2(t) = r_2 x_2\left(-1 - \frac{x_2}{N_2} + \sigma_2\frac{x_1}{N_1} - \sigma_3\frac{x_3}{N_3}\right),
$$

$$
\dot{x}_3(t) = r_3 x_3\left(-1 - \frac{x_3}{N_3} + \sigma_4\frac{x_2}{N_2}\right).
$$

We obtain the values of some parameters in the model, and through nonlinear data fitting of the original data of the local three populations [Shan and Tang 2007; Sumagaysay-Chavoso 1998; Chen and Chou 2001], we get the parameter values

$$
\sigma_1 = 0.6, \qquad \sigma_2 = 0.5, \qquad \sigma_3 = 0.5, \qquad \sigma_4 = 2;
$$

$$
N_1 = 150 \times 10^3, \qquad N_2 = 30 \times 10^3, \qquad N_3 = 2.2 \times 10^3.
$$

According to the volume of local fish pens and relevant materials, we get the original numbers of the three populations:

$$
x_1(0) = 121.5 \times 10^3, \qquad x_2(0) = 27 \times 10^3, \qquad x_3(0) = 2 \times 10^3.
$$

Then we use Matlab to implement the model, with the results of Figure 2, where we can see that with the passage of time, the $x_i(t)$ tend to the steady-state values 69,027, 27,015, and 1,760.
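This system has no closed-form solution; the authors integrate it numerically in Matlab. A minimal pure-Python sketch of the same computation (a fourth-order Runge-Kutta integrator) is below. The $\sigma_i$, $N_i$, and initial numbers are the values just quoted, but the intrinsic rates $r_1, r_2, r_3$ are illustrative assumptions, since the paper does not list them; the run is therefore a sketch of the method, not a reproduction of Figure 2.

```python
# Numerical integration of the three-population logistic predator-prey model.
# sigma_i, N_i, and x_i(0) are from the paper; r1, r2, r3 are ASSUMED values.

def derivs(x):
    x1, x2, x3 = x
    r1, r2, r3 = 0.8, 0.4, 0.3            # assumed intrinsic rates
    s1, s2, s3, s4 = 0.6, 0.5, 0.5, 2.0   # sigma_1..sigma_4 from the paper
    N1, N2, N3 = 150e3, 30e3, 2.2e3       # capacities from the paper
    return (
        r1 * x1 * (1 - x1 / N1 - s1 * x2 / N2),
        r2 * x2 * (-1 - x2 / N2 + s2 * x1 / N1 - s3 * x3 / N3),
        r3 * x3 * (-1 - x3 / N3 + s4 * x2 / N2),
    )

def rk4_step(x, h):
    """One classical Runge-Kutta step of size h."""
    k1 = derivs(x)
    k2 = derivs([xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = derivs([xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = derivs([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

x = [121.5e3, 27e3, 2e3]   # initial numbers from the paper
for _ in range(20000):      # integrate far enough to approach steady state
    x = rk4_step(x, 0.01)
```

With the paper's actual fitted rates, the same loop would produce the trajectories of Figure 2; with other parameters the equilibrium shifts, which is why the authors' nonlinear data fitting step matters.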
The number 27,015 of the species in Population 2 is made up of herbivorous fish, molluscs, crustaceans, and echinoderms. Now we confirm the numbers of all the species in Population 2, which stay at the same trophic level, coexisting and mutually competing.

![](images/02cc18caa7052eb96d60f60531b2e50c04cab33ad231c9971975a50b95ecb9a8.jpg)
Figure 2. Numerical solutions for $x_{i}(t)$.

We apply expert-system and group-decision theory to determine the weights of the species in Population 2. We have a multi-attribute decision problem, where the aim is to select the optimal solution from many alternatives or to sort the available alternatives.

Assume that the finite solution set is $Y = \{y_{1},\ldots ,y_{n}\}$, the attribute set is $C = \{c_1,\dots ,c_q\}$, and the decision-expert set is $E = \{e_1,\dots ,e_m\}$. Let $S = \{s_1,\dots ,s_g\}$ be a predefined evaluation scale with an odd number of levels. Expert $e_k$ selects one element from $S$ as the value of solution $y_{i}$ under attribute $c_{j}$; denote it by $p_{ij}^{k}\in S$, and let

$$
p^{k} = \left(p_{ij}^{k}\right)_{n \times q}
$$

denote the judgment matrix of expert $e_k$ on all the solutions for all the attributes. The attribute weight vector in the evaluation information given by expert $e_k$ is

$$
\boldsymbol{W}^{k} = \left(w_{1}^{k}, \dots , w_{q}^{k}\right)^{T},
$$

where $w_{j}^{k} \in S$ is the weight of attribute $c_{j}$ selected by expert $e_{k}$ from the set $S$.

This theory can be put into practice through the Analytic Hierarchy Process (AHP), first put forward by the American operations researcher T.L. Saaty in the 1970s. AHP is a method for decision-making analysis that combines qualitative and quantitative methods.
Using this method, decision-makers can decompose a complex problem into levels and factors, compare and compute the weights of different solutions, and thereby provide a basis for selecting the optimal solution.

AHP first decomposes the problem into levels based on its nature and purpose, constructing a multilevel structural model that runs from the lowest level (decision alternatives, measures, etc.) up to the highest level (the overall goal). Based on AHP, we establish the stratification diagram shown in Figure 3.

![](images/61c674526ae3e702d392ad311754c19dcb138ca83da0462bfb69997a719488c4.jpg)
Figure 3. AHP stratification diagram.

Finally, we perform a consistency check on the results, finding that the consistency ratio of each expert's judgment matrix is below 0.1, so the consistency of the judgment matrices is acceptable. We then compute the weight of each species in Population 2, as shown in Table 1:

Table 1. Weight of each species in Population 2 as measured by AHP.
| Species | Weight |
|---|---|
| Herbivorous fish | .21 |
| Crustaceans | .23 |
| Molluscs | .31 |
| Echinoderms | .24 |
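The AHP computation behind Table 1 can be sketched in a few lines: each expert's pairwise comparison matrix yields a weight vector (the normalized principal eigenvector) and a consistency ratio. The matrix below is a hypothetical, perfectly consistent example, not the experts' actual judgments, which the paper does not publish.

```python
# AHP weight sketch: weights = normalized principal eigenvector of the
# pairwise comparison matrix A; consistency is checked via CR = CI / RI.

def ahp_weights(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):  # power iteration toward the principal eigenvector
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # approx. lambda_max
    CI = (lam - n) / (n - 1)                        # consistency index
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]             # Saaty's random indices
    return w, CI / RI

# Hypothetical judgments over (herbivorous fish, crustaceans, molluscs,
# echinoderms): molluscs judged twice as important as each other species.
A = [[1, 1, 1/2, 1],
     [1, 1, 1/2, 1],
     [2, 2, 1,   2],
     [1, 1, 1/2, 1]]
weights, CR = ahp_weights(A)
```

For this consistent example the weights come out as (0.2, 0.2, 0.4, 0.2) with CR essentially zero; real expert matrices are only approximately consistent, which is why the CR < 0.1 check is needed.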
Here we also adopt a population competition model to confirm the weights of the species in Population 2:

$$
\dot{N}_1 = N_1\left(\varepsilon_1 + \gamma_1 N_2\right),
$$

$$
\dot{N}_2 = N_2\left(\varepsilon_2 + \gamma_2 N_1\right),
$$

where the $\varepsilon_i$ are birth rates and the $\gamma_i$ are coefficients of species interaction.

According to these equations, we find that the ratio between different species is almost consistent with that obtained by AHP, which corroborates our method.

In this way, we find that herbivorous fish, crustaceans, molluscs, and echinoderms can coexist while also competing. So the number of each species can be figured out from the steady-state results of the previous models, as shown in Table 2.

Table 2. Number per pen of each species in steady state.
| Organism | Number |
|---|---|
| Algae | 69,027 |
| Herbivorous fish | 5,638 |
| Crustaceans | 6,305 |
| Molluscs | 8,483 |
| Echinoderms | 6,589 |
| Milkfish | 1,760 |
Now we use the model to check the water quality and determine whether it is suitable for the continued healthy growth of the coral. First, we calculate the current concentration of chlorophyll in a fish pen. With the help of relevant references, we find the regression equation between the number of algae and chlorophyll:

$$
N = 1.2785 + 0.7568C, \tag{1}
$$

where the units are $10^{4}/\mathrm{ml}$ for $N$ (algae) and $\mu\mathrm{g}/\mathrm{L}$ for $C$ (chlorophyll). For $N = 6.9027$ (from Table 2), we get $C = 7.43$, a concentration of chlorophyll far beyond $0.25~\mu\mathrm{g}/\mathrm{L}$, the highest concentration suitable for the growth of coral.

From the available data in the problem, we figure out the mass of organic particles in the fish pen, and then work out the mass of each element.

- The dry weight of echinoderms in the pen is $45.5~\mathrm{kg}$ and the dry weight of milkfish excrement is $0.4$–$0.9~\mathrm{kg}$, so the total dry weight of excrement in the pen is $1.0$–$1.4~\mathrm{kg}$.
- The pen is $10~\mathrm{m}\times 10~\mathrm{m}\times 8~\mathrm{m}$, for a volume of $800~\mathrm{m}^3 = 800\times 10^3~\mathrm{L}$.
- Finally, we get that the concentration of organic particles is $1186$–$1738~\mu\mathrm{g}/\mathrm{L}$. Based on the percentages of elements given in the problem, we figure out the concentrations of carbon C (10%), nitrogen N (0.4%), and phosphorus P (0.6%) (Table 3).

Table 3. Concentrations of elements in a pen.
| Element | Concentration (μg/L) |
|---|---|
| C (10%) | 119–174 |
| N (0.4%) | 5–7 |
| P (0.6%) | 7–10 |
Comparing with the water quality at Sites A, B, C, and D, we find that the concentration of organics is between those of A and B, which is suitable for the growth of coral (here the concentration of elements is calculated only from the excrement of milkfish and echinoderms), so the concentration of microbes meets the reproduction needs of the coral. But the concentration of chlorophyll far exceeds the limit, so we have to adjust the numbers of some species to bring the concentration of chlorophyll within the standard.

We reason backward from the desired concentration ($0.25~\mu\mathrm{g}/\mathrm{L}$) of chlorophyll suitable for the growth of the coral, using the regression equation (1). We work out the estimated steady-state number of algae, $N = 1.4677$, and then derive the numbers of the three populations: (14677, 5744, 350). With these estimated steady-state values, we assume the initial values (10000, 5500, 350). From relevant references, we get the maximum capacities for fish pens, $(N_1, N_2, N_3) = (30000, 6000, 400)$, and through re-simulation we finally find the revised steady-state values: (13732, 5432, 320).

After revision, the actual steady-state number of the algae is $N = 1.3732$. Putting this value into the regression equation, we get $C = 0.125$; that is, the concentration of chlorophyll is $0.125~\mu\mathrm{g}/\mathrm{L}$, which means that the water quality after adjustment completely meets the standard demanded. Moreover, the total number of milkfish and echinoderms is smaller than before revision, so the index of organics can certainly meet the growing demands of the coral, as shown in Figure 4.
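The inversion of regression (1) used above is a one-line computation; checking it numerically reproduces both chlorophyll values quoted in the text (7.43 before adjustment, 0.125 after):

```python
# Invert regression (1): N = 1.2785 + 0.7568*C, with N in 1e4 algae/ml and
# C in micrograms of chlorophyll per liter.

def chlorophyll_from_algae(N):
    return (N - 1.2785) / 0.7568

c_before = chlorophyll_from_algae(6.9027)  # steady state before adjustment
c_after = chlorophyll_from_algae(1.3732)   # steady state after adjustment
```

This gives roughly 7.43 and 0.125 μg/L, matching the values derived in the text.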
In this retroregulation process, which is the feedback mechanism of the model, with known water quality we reason backwards to the estimated steady-state numbers of all the species, run a forward simulation after estimating the initial introduced numbers of all the species, and get the revised steady-state values. With this mechanism, we can find the steady-state number of each species based on water quality, which greatly eases the solution of the following problems.

![](images/4ac47996a0969f48a45690a002a129f22c91cb15bece3c5d61858fa2acfa10cb.jpg)
Figure 4. The numbers of species meeting the demands after adjustment.

# Task 2a: Establishment of Logistic Model

In this task, with all the herbivorous fish, crustaceans, molluscs, and echinoderms excluded, we are required to find the changes to the species and the resulting water quality. Based on our analysis, we make clear why the growth rate decreases as the milkfish increase. Factors such as natural resources and environmental conditions restrict the growth of milkfish; as the milkfish grow in number, the blocking effect becomes greater and greater. The blocking effect is expressed through its influence on the growth rate $r$ of the milkfish, making $r$ decrease as the number $x$ of milkfish increases. If we express $r$ as a decreasing function $r(x)$, we have

$$
\dot{x} = r(x)\,x, \qquad x(0) = x_0.
$$

The simplest assumption is that $r(x)$ is a linear function:

$$
r(x) = r - sx \qquad (r > 0,\ s > 0),
$$

where $r$ is the intrinsic growth rate. To pin down the meaning of the coefficient $s$, we introduce the maximum quantity $x_m$ allowed by natural resources and environmental conditions, which we regard as the milkfish capacity. When $x = x_m$, the population stops increasing; that is, the growth rate $r(x)$ is 0.
That occurs for $s = r/x_m$, so that we have

$$
r(x) = r\left(1 - \frac{x}{x_m}\right). \tag{2}
$$

Another interpretation of (2) is that the growth rate $r(x)$ is directly proportional to the unsaturated fraction of the milkfish capacity, $(x_m - x)/x_m$, with proportionality coefficient the intrinsic growth rate $r$. Substituting (2) into the growth equation $\dot{x} = r(x)\,x$, we get

$$
\dot{x} = rx\left(1 - \frac{x}{x_m}\right), \qquad x(0) = x_0. \tag{3}
$$

The factor $rx$ on the right side expresses the internal growth tendency of the milkfish, and the factor $(1 - x/x_m)$ expresses the blocking effect of resources and environment on milkfish growth. Obviously, the bigger $x$ is, the bigger $rx$ is, and the smaller $(1 - x/x_m)$ is. The growth of the milkfish is the result of the joint action of the two factors. Equation (3) can be solved by separation of variables to yield

$$
x(t) = \frac{x_m}{1 + \left(\frac{x_m}{x_0} - 1\right)e^{-rt}}. \tag{4}
$$

We use linear least squares to estimate the parameters $r$ and $x_m$ of this model, expressing (3) as

$$
\frac{\dot{x}}{x} = r - sx, \qquad s = \frac{r}{x_m}.
$$

We consult relevant data in Sumagaysay-Chavoso [1998] (where the amount of milkfish is the amount harvested over the entire Philippines), insert these data into Matlab, and get $r = 0.5$ and $x_m = 1.9 \times 10^5$. Putting these into (4), we get the growth curve shown in Figure 5.

![](images/314fb1aad50885f6caf60a6a6da16ff80ff5f149926417200d214a99f606d89c.jpg)
Figure 5. Milkfish changes.

Further, we get the weight and number of the milkfish respectively as $172 \times 10^6~\mathrm{kg}$ and $25$–$34 \times 10^6$.

The land area of the Philippines is $300{,}000~\mathrm{km}^2$; the sea area is $27.6~\mathrm{mi}^2$. The Philippines is surrounded by the sea and has many islands; the depth of the sea between islands is mostly within $50~\mathrm{m}$.
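The least-squares estimation just described can be sketched as follows. The series below is synthetic, generated from a known logistic curve; it stands in for the Philippine harvest data the authors consulted, which we do not reproduce, so the point is only that regressing $\dot{x}/x$ on $x$ recovers $r$ and $x_m = r/s$.

```python
# Fit the logistic parameters by ordinary least squares on y = dx/dt / x,
# which the model says is linear in x: y = r - s*x, with x_m = r/s.
import math

r_true, xm_true, x0 = 0.5, 1.9e5, 1.0e4
dt = 0.01
xs = [xm_true / (1 + (xm_true / x0 - 1) * math.exp(-r_true * i * dt))
      for i in range(3001)]                 # synthetic logistic series

# Approximate (dx/dt)/x by forward differences, then regress it on x
ys = [(xs[i + 1] - xs[i]) / (dt * xs[i]) for i in range(3000)]
zs = xs[:-1]
n = len(zs)
zbar, ybar = sum(zs) / n, sum(ys) / n
slope = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys)) / \
        sum((z - zbar) ** 2 for z in zs)
s_hat = -slope                  # y = r - s*x, so the fitted slope is -s
r_hat = ybar - slope * zbar     # intercept of the fitted line
xm_hat = r_hat / s_hat          # carrying-capacity estimate
```

On this synthetic series the fit recovers $r \approx 0.5$ and $x_m \approx 1.9 \times 10^5$, up to the small discretization error of the forward difference.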
+ +Based on the sea area, we calculate the sediment per square meter to be $0.12 - 0.33\mathrm{g / m}^2$ . Since the sediment is usually not very thick, we assume that the depth is $0.1\mathrm{m}$ , so that the sediment per cubic meter is $1.2 - 3.3\mathrm{g / m}^3$ . Then based on the information given in the problem, we get the results of Table 4. + +Table 4. +Element concentrations. + +
| Element | Concentration (μg/L) |
|---|---|
| C (10%) | 117–333 |
| N (0.4%) | 47–133 |
| P (0.6%) | 70–200 |
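The unit conversion behind Table 4 is worth making explicit: a sediment load in grams per cubic meter converts to micrograms per liter via $1~\mathrm{g/m^3} = 1000~\mu\mathrm{g/L}$, and each element is then a fixed percentage of the organic load. Using the carbon row as a check:

```python
# Convert the sediment load to micrograms per liter and take the carbon
# fraction (10%). 1 m^3 = 1000 L and 1 g = 1e6 ug, so 1 g/m^3 = 1000 ug/L.

def g_per_m3_to_ug_per_L(x):
    return x * 1e6 / 1e3

lo, hi = 1.2, 3.3   # sediment load in g/m^3, from the text
c_lo = 0.10 * g_per_m3_to_ug_per_L(lo)
c_hi = 0.10 * g_per_m3_to_ug_per_L(hi)
```

This gives a carbon range of 120–330 μg/L, close to the table's 117–333 μg/L; the small difference comes from rounding in the quoted sediment range.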
From the table, we can see that eutrophication is very serious and the coral cannot grow. The water quality is very poor, almost matching the environment in the pens.

# Task 2b: Simulating Comparison of the Current Situation

In Task 2a, we discussed the independent farming of the milkfish; but actually in the pen there are more than just milkfish and algae. So here we have to introduce the removed species as the middle strata and, according to the requirements of the problem, adjust the numbers of the species in the middle strata to simulate the water quality in the Bolinao area until the simulated water quality matches the one currently observed.

The concrete procedure is as follows: Simulate the water quality (at Site D, for example) and solve the problem according to the model in Task 1. It is easier to find the water quality from the initial values of algae, milkfish, and other species than vice versa.

We adopt a brute-force random search:

- Set the initial values of algae, other species, and milkfish to $100 \times 10^{3}$, $10 \times 10^{3}$, and $1.3 \times 10^{3}$.
- According to the introduction ratio between the milkfish and the algae, and the requirements for the capacity of the pen obtained from Task 2a, we introduce the algae and the milkfish respectively as $72 \times 10^{3}$ and $1.3 \times 10^{3}$, and at the same time draw the introduced numbers of the other species from a random distribution between $8 \times 10^{3}$ and $10 \times 10^{3}$, with the aim of searching for a theoretical value matching the observed water quality.
- Simulate the model in Task 1 1,000 times, and finally output the steady-state water quality that is consistent with the actually observed value.

We set out the criteria for judging water quality:

- Chlorophyll a $\equiv (0.0001x_{1} - 1.2785)/0.7568$.
- Total concentration of organics =

$$
x_2 \times 0.2438 \times 6.9 \times [0.2, 11.5] + x_1 \times [242, 493].
$$

Percentages of the different elements in the excrement: C $10\%$, N $0.4\%$, P $0.6\%$.

- C meets $|c(1) - c1(1)| \leq 100$ and $|c(2) - c1(2)| \leq 100$.
- N meets $|n(1) - n1(1)| \leq 10$ and $|n(2) - n1(2)| \leq 10$.
- Chlorophyll meets $\left| c_{a} - 4.5\right| \leq 0.15$.

We sort out the results meeting the above requirements, that is, the numbers of the three populations for which the simulated water quality is similar to the observed one, and show the result in Table 5.

Table 5. Simulation results.
| | | Pop. 1 | Pop. 2 | Pop. 3 |
|---|---|---|---|---|
| Simulation results | Initial number ($\times 10^3$) | 70.0 | [8.01, 9.00] | 1.10 |
| | Number in steady state ($\times 10^3$) | 46.1 | 9.0 | 1.04 |
| Estimated from data | Number in steady state ($\times 10^3$) | 45.7 | 9.3 | 0.9 |
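The acceptance test inside the brute-force search can be written out directly; the chlorophyll criterion is shown here (the C and N criteria have the same shape with their own tolerances):

```python
# Keep a candidate steady-state algae count x1 only when its implied
# chlorophyll concentration is within 0.15 of the observed 4.5 ug/L.

def chlorophyll(x1):
    # invert regression (1): N = 1.2785 + 0.7568*C, with N in units of 1e4/ml
    return (0.0001 * x1 - 1.2785) / 0.7568

def accept(x1):
    return abs(chlorophyll(x1) - 4.5) <= 0.15
```

With this filter, accepted algae counts fall near $46 \times 10^3$, consistent with the simulated steady state in Table 5, while a count such as $70 \times 10^3$ is rejected.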
To make the numbers of the species close to those predicted by the model, we compare the numbers of existing species with those observed in the Bolinao area. Here we take into account the feedstuff added for the milkfish by revising the model of Task 1: we add a constant $\lambda$ to the third equation to express the influence of the feedstuff on the numbers of the species. The revised model is:

$$
\dot{x}_1(t) = x_1 r_1\left(1 - \frac{x_1}{N_1} - \sigma_1\frac{x_2}{N_2}\right),
$$

$$
\dot{x}_2(t) = r_2 x_2\left(-1 - \frac{x_2}{N_2} + \sigma_2\frac{x_1}{N_1} - \sigma_3\frac{x_3}{N_3}\right),
$$

$$
\dot{x}_3(t) = r_3 x_3\left(-1 - \frac{x_3}{N_3} + \sigma_4\frac{x_2}{N_2}\right) + \lambda.
$$

We set initial values (70000, [8008, 8995], 1100) and calculate the steady-state numbers of all the species: (46062, 8989, 1051), as shown in Figure 6.

![](images/8b18c2197ae73d0490f5814b4675e841390aaa685a53d8cbfe814ce4a4898f4d.jpg)
Figure 6. Comparison between observed values and simulated values.

![](images/940f2f92eebd2ce66c7cb23e9897db3324a7c6846f9259aad132bb2f6bec58e4.jpg)

# Task 3

# Task 3a: Develop a commercial polyculture to remediate Bolinao

We start from the model of Task 1 (the Bolinao coral reef ecosystem model before farming), introduce filter feeders, and revise the model. We renumber the species, with algae as 1, filter feeders as 2, herbivores as 3, and milkfish as 4.
Following the same modeling principles as earlier, we arrive at the system

$$
\dot{x}_1(t) = x_1 r_1\left(1 - \frac{x_1}{N_1} - \sigma_{12}\frac{x_2}{N_2} - \sigma_{13}\frac{x_3}{N_3}\right),
$$

$$
\dot{x}_2(t) = r_2 x_2\left(-1 - \frac{x_2}{N_2} + \sigma_2\frac{x_1}{N_1} - \sigma_7\frac{x_4}{N_4}\right), \tag{5}
$$

$$
\dot{x}_3(t) = r_3 x_3\left(-1 - \frac{x_3}{N_3} + \sigma_3\frac{x_1}{N_1} - \sigma_8\frac{x_4}{N_4}\right),
$$

$$
\dot{x}_4(t) = r_4 x_4\left(-1 - \frac{x_4}{N_4} + \sigma_4\frac{x_2}{N_2} + \sigma_6\frac{x_3}{N_3} + \sigma_5 k\right),
$$

where we now use $k$ for the feedstuff constant.

We solve this system in Matlab to obtain the numbers of algae, filter feeders, herbivorous fish, and milkfish: (14314, 6092, 6129, 6979). Figure 7 shows the system tending toward equilibrium.

![](images/9d4e6d1fab2dd299026186530471d3f6fd37a6505dde7b81bede1267e9a2c4ef.jpg)
Figure 7. The changes in the numbers of algae, filter feeders, herbivorous fish, and milkfish.

# Report on the outputs of the model

Based on system (5), we find:

- This model optimizes the water quality: only when the water quality reaches a certain standard can it provide an ideal growing environment for a species, and only in a viable environment is it meaningful to talk about the number of each species.
- We establish a newborn coral reef habitat without human intervention, that is, without casting feedstuff, with the least leftover nutrients and particulate (foodstuff and excrement) sediment.
- According to Task 3a, we get the steady-state numbers of algae, filter feeders, herbivorous fish, and milkfish. We regard those as the initial values and determine the concentration of chlorophyll to be $0.202~\mu\mathrm{g}/\mathrm{L}$.
Based on the information about the elements percentage given in problem, we calculate the content of different elements, as shown in Table 6. +- Assume that the total income is $K = \sum x_{i}v_{i}$ , where $v_{i}$ is the market value of a unit of species $i$ . +- Based on market investigation and relevant online data, we get the aver + +Table 6. Concentrations of elements in a pen. + +
ElementConcentration (μg/L)
C (10%)35 -72
N (0.4%)1.4- 2.9
P (0.6%)2.1- 4.3
+ +age weight and price of each species, and finally figure out the income: \( K = \\) 114 \times 10^3 / \text{pen} \). + +- To calculate the cost of improving water quality, assume that we introduce 1,000 mussels into the pen. We investigate such factors as weight and market price of mussels, and put them into the model in Task 1 to figure out all the indexes. + +Table 7. Steady-state numbers $(\times 10^{4})$ of species before adjustment. + +
| | Algae | Molluscs (mussels) | Herbivorous Fish | Milkfish |
|---|---|---|---|---|
| Before adjustment | 1.43 | 0.61 | 0.61 | 0.70 |
| After adjustment | 1.37 | 0.62 | 0.61 | 0.70 |
+ +Table 8. Concentrations of elements $(\mu \mathrm{g} / \mathrm{L})$ before and after adjustment. + +
| | Chlorophyll | C | N | P |
|---|---|---|---|---|
| Before adjustment | 0.202 | 35–71 | 1.4–2.9 | 2.1–4.3 |
| After adjustment | 0.125 | 33–69 | 1.3–2.8 | 2.0–4.1 |
From Table 8, it is easy to see that the water quality has improved. For one thing, the introduced mussels feed on the algae; for another, they decompose the organic particles.

- The 1,000 introduced mussels cost $361 or so, scarcely making a dent in the income.

# Task 4

From Task 3a, we know the numbers of algae, filter feeders, herbivorous fish, and milkfish: (14314, 6092, 6129, 6979). The algae are the most numerous, and the numbers of the other species are roughly equal. In such a steady state:

- According to the relationship between supply and demand, the price of milkfish is higher than that of seaweed. In addition, although the amount of seaweed is large, it is light, so we cannot pursue maximizing weight.
- Measuring harvest with the price of each species harvested, we have to differentiate the values of the species. Since it costs to feed the milkfish, we should take these costs into consideration when calculating the values of each species. We define the value of edible biomass as the sum of the values of each species harvested, minus the cost of milkfish feed.

# Task 5

When evaluating a commercial polyculture scheme, we usually consider not only the economic benefits of farming but also try to ensure a win-win between economy and environment, under the premise of keeping the ecological environment and water quality in good condition.

Hence, we establish the following optimization model to pursue the maximum commercial benefits, with the premise of not having water quality worsen. Combined with the previous polyculture system model, we establish the following nonlinear equilibrium optimization model to maximize the total value of the harvest.
It is a complex nonlinear single-objective optimization model, since nonlinear differential equations are embedded in the constraints:

Objective function: $\max f = ax_1 + bx_2 + cx_3 + dx_4 - \mu,$

where $a, b, c, d$ are the unit market prices of the species and $\mu$ is the feedstuff cost.

The constraints on water quality are:

- concentration of chlorophyll $\leq 0.28~\mu\mathrm{g}/\mathrm{L}$,
- concentration of C $\leq 196~\mu\mathrm{g}/\mathrm{L}$, and
- concentration of N $\leq 39~\mu\mathrm{g}/\mathrm{L}$.

We can express these conditions in inequalities involving the $x_{i}$ as follows:

$$
\frac{0.0001x_1 - 1.2785}{0.7568} \leq 0.28,
$$

$$
1.68222\,x_2\,[0.2, 11.5] + 0.1\,x_4\,[242, 493] \leq 196,
$$

$$
1.68222\,x_2\,[0.2, 11.5] + 0.004\,x_4\,[242, 493] \leq 39.
$$

In addition, we have the equality relations among the $x_{i}$ in system (5).

Such a complex optimization problem cannot be solved directly by standard software, so first we make a cyclic simulation search (still in fact a brute-force search) to find enough solutions meeting the water-quality conditions, and obtain intervals for the steady-state numbers of the species that meet the demands on water quality, as shown in Table 9.

Table 9. Steady-state numbers $(\times 10^{4})$ of species.
| | Algae | Molluscs (mussels) | Herbivorous Fish | Milkfish |
|---|---|---|---|---|
| Maximum | 1.3922 | 0.6249 | 0.6233 | 0.7061 |
| Minimum | 1.3286 | 0.6152 | 0.6174 | 0.7018 |
Therefore, we can replace the equality conditions among the $x_{i}$ by intervals for the steady-state numbers:

$$
\begin{array}{l}
1.3286 \leq x_1 \leq 1.3922, \\
0.6152 \leq x_2 \leq 0.6249, \\
0.6174 \leq x_3 \leq 0.6233, \\
0.7018 \leq x_4 \leq 0.7061.
\end{array}
$$

We can now use Lingo to solve the equivalent model, with the results of Table 10.

Table 10. Optimal steady-state numbers $(\times 10^{4})$ of species.
| | Algae | Molluscs (mussels) | Herbivorous Fish | Milkfish |
|---|---|---|---|---|
| Optimal | 1.39 | 0.62 | 0.62 | 0.71 |
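Once the differential-equation constraints are replaced by the Table 9 intervals, the problem reduces to a linear objective over box constraints, which can be solved by inspection: with positive unit prices, each $x_i$ sits at its upper bound. A sketch with the Table 9 bounds is below; the prices and feed cost are hypothetical placeholders, since the paper does not list its market prices.

```python
# Box-constrained linear objective: max f = sum(price_i * x_i) - mu, with
# each x_i confined to its Table 9 interval (units of 1e4 individuals).
# Prices and mu are ASSUMED values for illustration only.

bounds = {
    "algae":            (1.3286, 1.3922),
    "mussels":          (0.6152, 0.6249),
    "herbivorous fish": (0.6174, 0.6233),
    "milkfish":         (0.7018, 0.7061),
}
prices = {"algae": 1.0, "mussels": 8.0,
          "herbivorous fish": 10.0, "milkfish": 12.0}
mu = 0.5  # hypothetical constant feed cost

# A positive price pushes x_i to its upper bound; a negative one, to its lower.
optimal = {k: (hi if prices[k] > 0 else lo) for k, (lo, hi) in bounds.items()}
f_max = sum(prices[k] * optimal[k] for k in optimal) - mu
```

The same structure is what a solver such as Lingo exploits; the Table 10 optimum matches the upper ends of the Table 9 intervals after rounding.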
The corresponding maximum harvest value is $\$115 \times 10^3$, and the corresponding water quality is shown in Table 11.

Table 11. Concentrations of elements $(\mu\mathrm{g}/\mathrm{L})$ after optimization.
| | Chlorophyll | C | N | P |
|---|---|---|---|---|
| At optimal | 0.15 | 17–36 | 0.7–1.4 | 1.0–2.2 |
Compared to the water quality required by coral growth, the water quality obtained here is obviously satisfactory, and we reap relatively high economic benefits at the same time.

To verify that the results of our model are reasonable, we define

$$
\text{fishing/harvest index} = \frac{\text{feed cost}}{\text{net income}}.
$$

The index value from our model is $0.06\%$; the actual value for the Bolinao area is $2.8\%$.

Based on analysis of the model, at the optimal solution the feed cost per unit of net income is obviously less than the current cost, so our feeding strategy can produce a better harvest.

# Ecological Perspective on Polyculture

Adding herbivorous fish as the middle stratum

- contributes to the decomposition of solid particles,
- can suppress the over-multiplication of the algae,
- can improve water quality,
- can enable the coral to grow normally, and
- thereby can restore the ecosystem and biodiversity.

However, in our model we don't take into account the soluble POC released by the algae, the accumulation of which is likely to hinder the improvement of water quality. In view of this, some may doubt the restorative ability of our polyculture system. But bacteria in the waters can process POC, and rational measures can be taken to control the concentration of microbes, thus ensuring the improvement of water quality. So, in terms of ecology, our polyculture system has the potential to improve water quality and promote favorable development of the ecosystem.

# References

Chen, Chih-Yu, and Chong-Nong Chou. 2001. Ichthyotoxicity studies of milkfish *Chanos chanos* fingerlings exposed to a harmful dinoflagellate *Alexandrium minutum*. *Journal of Experimental Marine Biology and Ecology* 262 (2): 211–219.

Shan, Yu-lin, and Jia-de Tang. 2007.
Numerical solution of a predator-prey model with three biological populations, based on Matlab (in Chinese). *Ordnance Industry Automation* 26 (12): 94–96.

Sumagaysay-Chavoso, Neila S. 1998. Milkfish (*Chanos chanos*) production and water quality in brackishwater ponds at different feeding levels and frequencies. *Journal of Applied Ichthyology* 14 (1–2): 81–85.

# Striving for Balance: Why Reintroducing More Species to Fish Farm Ecosystem Yields Bigger Profits

Sean Clement

Timothy Newlin

Joseph Lucas

Dept. of Mathematical Sciences

U.S. Military Academy

West Point, NY

Advisor: Kristin Arney

# Summary

Demand for animal protein is the root of the problems that the people of Bolinao, Philippines, have experienced over the last 15 years. Past solutions focused on harvesting large quantities of one type of fish using large cages. Unfortunately, this approach failed to meet the demand for protein, ruined local water quality, and destroyed the coral reef.

Future technological innovations such as self-powered fish cages, alga-based biodiesel fuel, and radio-frequency identification tracking offer great potential for waste reduction and improved open-water fish harvesting. However, the people of Bolinao cannot wait; change must begin now. We must assist the transition, but ultimately the people of Bolinao are the greatest stakeholders in the future quality of life there.

Mathematics-based models show the various stages of this deterioration by demonstrating how the ecosystem in Bolinao once functioned before demand for fish grew dramatically in the early 1990s. We demonstrate the dangers to water quality of the current practice of farming only milkfish. Finally, we show how introducing other species into commercial fish pens will allow equilibrium to recur, reducing levels of waste in the water and allowing the coral reef (a catalyst for growth) to return.
Combining the balanced ecosystem with market pricing formulas demonstrates how alternative fish-harvesting practices will lead to higher income for the local population and provide the protein that they need. Fish is the most efficient source of animal protein for humans because it requires less feed than chicken, beef, or pork to obtain the same amount of protein.

A key limit of our models is scant data on prices and on the ratios of species necessary to recreate a balanced ecosystem. Still, our results demonstrate to the Bolinao people both the environmental and the economic value of transitioning from producing only milkfish to a more diverse aquaculture.

Finally, we suggest policy changes designed so that the people of Bolinao don't have to choose between getting enough food to eat now and having a healthy environment in the future.

Table 1.
Symbol key.
| Symbol | Meaning | Formula |
|--------|---------|---------|
| $a$ | algae | |
| $l$ | blue mussels | |
| $m$ | milkfish | |
| $r$ | rabbitfish | |
| $s$ | starfish | |
| $t$ | giant tiger prawn | |
| $P_x$ | current population of species $X$ | |
| $P_{x^{-1}}$ | population of species $X$ the previous month | |
| $B_x$ | birth rate of species $X$ | |
| $S_x$ | survivability rate of species $X$ | $G_x - D_x$ |
| $G_x$ | growth rate of species $X$ | |
| $D_x$ | death rate of species $X$ | |
| $E_x$ | rate at which species $X$ is eaten by a predator | |
| $P_a$ | current population of algae | $S_a P_{a^{-1}} + E_a P_{r^{-1}}$ |
| $P_l$ | current population of mussels | $S_l P_{l^{-1}} + E_l P_{s^{-1}}$ |
| $P_m$ | current population of milkfish | $P_{my} + P_{mo}$ |
| $P_{my}$ | current population of juvenile (young) milkfish | $B_m P_{mo^{-1}} + 0.066 P_{my^{-1}}$ |
| $P_{mo}$ | current population of breeding (old) milkfish | $S_m P_{mo^{-1}} + 0.066 P_{my^{-1}}$ |
| $P_r$ | current population of rabbitfish | $S_r P_{r^{-1}} - E_r P_{m^{-1}}$ |
| $P_s$ | current population of starfish | $S_s P_{s^{-1}}$ |
| $P_t$ | current population of giant tiger prawns | $S_t P_{t^{-1}} - E_t P_{m^{-1}}$ |
| $C_d$ | level of carbon dissolved | $Y_m P_m + Y_t P_t - Y_l P_l$ |
| $N_d$ | level of nitrogen dissolved | $Y_m P_m + Y_r P_r - Y_l P_l - Y_a P_a$ |
| $\mathrm{Chl}$ | level of chlorophyll | $Y_a P_a$ |
| $C_p$ | level of particulate carbon | $Y_m P_m + Y_r P_r + Y_s P_s - Y_l P_l$ |
| $N_p$ | level of particulate nitrogen | $Y_m P_m + Y_t P_t - Y_a P_a$ |
| $W_x$ | level of bacteria created by individual species $X$ | |
| $M_x$ | market price for species $X$ | |
# Problem Approach

# Task 1

To model water quality before milkfish dominated the local ecosystem, we create formulas that model the interactions among the species in the ecosystem. This model focuses on a steady-state equilibrium of water quality. We first establish how to measure the change in water quality: as the sum of the waste products of each species. Some species, such as the blue mussel, which consumes the waste of other species, contribute negative waste and thus help improve water quality. We develop functions to describe the population of each species at any given time; the population determines the waste produced by that species and thus the water quality. The formula for each species calculates the change in the population by adding the number of new individuals (based on the determined growth rate) and subtracting the number eaten by other species as well as the number that die naturally.

We determine a steady state by running the whole model for several iterations until the level of the water quality stabilizes. Adjusting the number of each species in the system, while keeping the ratios among species constant, should allow prediction of population levels before the disruption of overfishing that led to the commercial milkfish monoculture.

# Task 2

We set to zero the populations of all species except milkfish and algae and run the model to determine water quality. Based on the known current water quality, we attempt to determine the current populations of a variety of species.

# Task 3

Setting the water quality to an acceptable desired constant, we run simulations that adjust the populations of species in different combinations that would reestablish an equilibrium polyculture. This polyculture would consume the waste products of the milkfish and keep the growth of algae under control.
We expect to determine different combinations of how many individuals of various species would need to be introduced at sites in the Bolinao region to reestablish acceptable water quality and create coral growth.

# Task 4

We determine from data the dollar values for each species.

# Task 5

Based on the values from Task 4, we assess which combinations from Task 3 are likely to create the most economic value for owners.

# Task 6

We address policy changes that the Pacific Marine Fisheries Council can adopt to assist the Philippines in establishing the long-term viability of a self-sustaining ecosystem. These policies center on harvesting all species at rates that keep the milkfish population under control and thus maintain the polyculture.

# Assumptions

- The growth rates of species are constant.
- The number of eggs laid by each species is normally distributed.
- Humans are the only predator of milkfish.
- The channel is not a closed system; excess population can emigrate to other reef locations.
- The algae are a mix of cyanobacteria and red varieties (this assumption provides more-realistic results).
- Milkfish stop being omnivores when they mature, after which they eat only other animals.
- It takes five years for milkfish to become sexually mature [Luna 2009].
- An adult milkfish is capable of eating an adult rabbitfish.
- The fish pens currently hold approximately 58.5 million fish.
- Milkfish weigh $500 - 600 \mathrm{~g}$ [Hambrey 1999].
- None of the other five species in the ecosystem model eats starfish.
- Rabbitfish waste has the same composition as milkfish waste.
- The prices found in Task 4 are estimates drawn from single sources.
- Giant tiger prawns spawn nightly at a rate of $7.6\%$ to $9\%$, but only half of the spawn hatch [Bray and Lawrence 1998].
- Giant tiger prawns have a mortality rate of $10\%$ to $40\%$ and an average weight of $106\mathrm{g}$ [Bray and Lawrence 1998].
- Rabbitfish double in population every 1.4 to 4.4 years.
- Prawns excrete $0.028\mathrm{mg}$ of ammonia per gram of body weight per hour [Burbord and Williams 2001].
- Molluscs urinate up to $45\%$ of their body weight per day.
- Each year, $55\%$ of blue mussels die.
- Female mussels release 1 million eggs semi-annually, of which $30\%$ hatch.
- Japanese starfish release 10 million to 25 million eggs per year.
- A starfish has an average lifespan of 3 years.
- A starfish eats $36 \mathrm{~g}$ of mussels each month.

# Task 1: Water Quality before Disruption

For a long time, the amount of fish in the area was more than adequate to meet the needs of the population. However, as people sought better nutrition by eating more fish protein, they fished more intensively, using dynamite and sodium cyanide, until the local population of wild fish was no longer large enough to sustain itself. These techniques killed off not only milkfish but also the other species that kept the ecosystem in balance. The resulting uncontrollable growth of algae, in combination with the destruction caused by explosives, destroyed parts of the coral reef by depriving it of the nutrients and sunlight needed for it to grow.

The people built the milkfish population back up by introducing milkfish in large numbers and keeping them in large cages where they could be fed until they were large enough to harvest. Using better-quality fish feed allowed the milkfish population to grow more quickly but also increased pollution in the local waters as a result of the fish waste. Previously, other species, such as the blue mussel mollusc (which feeds on the waste of milkfish), kept water pollution in check. Other herbivorous fish, such as the rabbitfish, and echinoderms, such as the starfish, helped contain algae growth. The starfish also ate the blue mussels.
As seen in Figure 1, the food web of this ecosystem allowed different species to coexist in certain ratios to one another, which kept the water clean and allowed the coral reef to grow.

By allowing special feed to replace the natural diet of the milkfish, the people unknowingly depleted the quality of the local water supply while simultaneously destroying the coral reef. This coral reef had served as a catalyst for the growth of the overall system by providing certain species shelter from their predators.

By modeling the earlier stability, it is possible to show what levels of the different populations were previously required to maintain a balanced ecosystem. These ratios can then serve as a helpful starting point for re-establishing a new balance within commercial milkfish farms.

![](images/7685843b822d0fd05ba82ff8a9ae6667d68d70ecfc99f41bfeba56d5feb1f1e0.jpg)
Figure 1. Food web.

To produce this model, we researched the relationships among the various species and determined appropriate rates of population growth. We use a general formula to calculate the current population $P_x$ of species $X$, given the population $P_{x^{-1}}$ of $X$ in the previous month, the growth rate $G_x$, the death rate $D_x$, and the amount $E_y P_y$ of $X$ eaten by each other species $y$ in the system:

$$
P_x = P_{x^{-1}} + P_{x^{-1}} G_x - P_{x^{-1}} D_x - \sum_y E_y P_y.
$$

We obtain the overall bacterial level in the water as the sum, over all species, of each population $P_x$ times its rate $W_x$ of bacterial waste production. The same calculation applies to the levels of all waste products $(C_d, N_d, \mathrm{Chl}, C_p, N_p)$.

Our model, executed for enough iterations, should have converged to an equilibrium for water quality; but it did not.
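The update rule and the waste sum above can be sketched in a few lines of code. This is a minimal illustration only, not the authors' implementation; all species, rates, and starting populations below are hypothetical placeholders rather than fitted values.

```python
# Sketch of the paper's update rule,
#   P_x <- P_x + P_x*G_x - P_x*D_x - sum over predators y of E_y*P_y,
# and of the bacteria level as a population-weighted sum of waste rates.
# All numbers here are hypothetical placeholders, not fitted values.

pop = {"milkfish": 1e6, "rabbitfish": 5e5, "mussel": 2e6, "algae": 1e7}
growth = {"milkfish": 0.10, "rabbitfish": 0.08, "mussel": 0.12, "algae": 0.30}
death = {"milkfish": 0.05, "rabbitfish": 0.04, "mussel": 0.06, "algae": 0.10}
# eaten[x]: list of (predator y, per-capita rate E_y) pairs acting on x.
# Humans are the milkfish's only predator, so its list is empty here.
eaten = {
    "algae": [("rabbitfish", 0.2)],
    "rabbitfish": [("milkfish", 0.05)],
    "mussel": [],
    "milkfish": [],
}
# W_x: bacteria produced per individual; mussels consume waste (negative).
waste_rate = {"milkfish": 3.0, "rabbitfish": 1.5, "mussel": -4.0, "algae": 0.1}

def step(pop):
    """Advance every population by one month (clamped at zero)."""
    new = {}
    for x, p in pop.items():
        predation = sum(e * pop[y] for y, e in eaten[x])
        new[x] = max(0.0, p + p * growth[x] - p * death[x] - predation)
    return new

def bacteria_level(pop):
    """Overall waste level: sum of P_x * W_x over all species."""
    return sum(pop[x] * waste_rate[x] for x in pop)

for month in range(12):
    pop = step(pop)
print(bacteria_level(pop))
```

Iterated with constant rates like these, such a system generally grows without bound rather than settling to an equilibrium.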
The main reason was that our model held the growth rates and death rates constant, which does not occur in nature, because of the conservation of mass. The more natural trend of this relationship is depicted in Figure 2: as the fish population increases, the rate at which the fish are eaten increases, so the rate at which they survive decreases.

In any closed system, the overall mass of the system must stay the same. Thus, the addition of any new member to the system precludes the growth of something else, either immediately or in the future. For example, when the fish population is larger, the death rate should at some point be greater, because fish are more easily caught by their predators.

![](images/522a82bac200788a4feaf8a7c9665b2c8287f51946ce7c27c0344cf7a5848fd1.jpg)
Figure 2. Change in rates due to population change.

Our model did not include any upper limit on the population of any of the species within the ecosystem. So over time, the populations of all organisms continued to grow at similar rates, and water quality never reached an equilibrium value. In reality, there has to be a natural limit, if for no other reason than that if the fish waste grows uncontrollably, it will eventually occupy all of the space, choking off access to nutrients.

One possibility would be to introduce an assumed limit to the ecosystem by confining the space to the Bolinao region. The water area of Bolinao covers 1170 ha. Based on the statement in the problem that farmers currently use 50,000 milkfish to a pen and operate 10 pens per hectare, a natural limit is 585 million milkfish (500,000 milkfish/ha × 1170 ha). Assuming this upper bound, we can scale the growth rate by the difference between the current milkfish population and the upper limit of 585,000,000, via the term $G_{x}(585000000 - P_{m})$.
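A quick numerical sketch shows the effect of this capacity-scaled growth term. The base rate `g` and death rate `d` below are hypothetical placeholders (not values from the paper); only the 585-million cap and the 58.5-million starting population come from the text.

```python
# Capacity-scaled milkfish growth: the effective growth rate shrinks as the
# population approaches the 585-million-fish limit of the Bolinao pens.
# g and d are hypothetical placeholders, not values from the paper.

CAP = 585_000_000  # 50,000 fish/pen x 10 pens/ha x 1170 ha

def step_capped(p_m, g=2e-10, d=0.02):
    """One month of growth with rate G_x(585000000 - P_m), minus deaths."""
    return p_m + p_m * g * (CAP - p_m) - p_m * d

p = 58_500_000  # approximate current pen population cited in the text
for month in range(600):
    p = step_capped(p)
# the population levels off where growth balances death, below the cap,
# instead of growing without bound as in the unconstrained model
print(f"{p:,.0f}")
```

With these placeholder rates, the iteration converges to the fixed point where $g(\mathrm{CAP} - P_m) = d$, i.e. well below the 585-million ceiling.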
Despite the difficulty in achieving a steady-state equilibrium of water quality, we still produce a model that demonstrates the general trend that should have been present in the ecosystem before the mass-farming of milkfish.

# Task 2: Current Water Quality

Poor water quality and the destruction of coral don't really seem like problems to people who are trying to meet basic needs and keep their children healthy. It is difficult to show people how their actions now are ultimately leading to greater problems for them and their children in the future. The current thought process is that growing just one type of fish (milkfish) and feeding them specially formulated fishmeal creates the larger amounts of fish necessary to meet growing demand, without requiring the sustenance of a variety of different creatures. Why not simply apply modern agricultural methods to aquaculture? Why shouldn't Filipinos continue to increase the yield of milkfish with specially-designed fishmeal, just as a farmer in America's Midwest increases the yield of a soybean or corn harvest by using specially-formulated seed and fertilizer?

Initial observations may lead to the conclusion that such an approach is both viable and desirable. After all, why not simply remove the excess fish waste and sell it as fertilizer to local farmers? That might be possible. However, just as land farmers eventually realized that growing certain crops year after year leads to decreased yields because of nutrient depletion in the soil, fish farmers face the threat of decreased overall yield, because growing only milkfish depletes water quality by causing algae and waste to grow uncontrollably. The excess algae reduce coral growth in the same way that lack of crop rotation depletes the soil of nitrogen. Both conditions appear to offer better results in the short term but destroy the longer-term viability of the system.
Still, for people to change their practices, it is important to demonstrate the limiting effects of the current system. For our model, this requires showing that farming only milkfish causes water quality and the amount of harvestable fish to decline.

To model the current system, we took our model from Task 1 and set the populations of everything but milkfish and algae to zero. Figure 3 shows the decline in water quality over time. The rise in the algae population chokes off the viability of the milkfish, because of the increased oxygen demanded by the algae and consequently the decreased quantity available to the fish.

However, it is unrealistic to assume that the current system consists only of milkfish and algae. We know that the current system has a water quality of $10^{10}$ bacteria/ml and $15\mu \mathrm{g} / \mathrm{l}$ of chlorophyll, both much greater than the $0.5 - 1.0\times 10^{6}$ bacteria/ml and $0.25\mu \mathrm{g} / \mathrm{l}$ of chlorophyll suggested as acceptable for adequate coral growth.

Coral growth acts like a skyscraper in that it allows more fish to grow in a given space through vertical partitioning. Therefore, we gradually adjust the populations of the various species in our model to achieve the current level of water pollution in Bolinao. Again, our model is unable to produce a steady-state equilibrium of water quality when the ecosystem consists of only milkfish and algae, because the algae do not entirely dispose of the waste from the milkfish; without another species such as blue mussels to reduce the waste of the milkfish, the milkfish grow uncontrollably, even if the $20\%$ that mature each year are removed by humans after reproducing. If humans also harvest immature milkfish, the level of milkfish will drop below sustainability.
This human harvesting can reduce the level of waste in the water somewhat, but it is insufficient to achieve a steady state, because there is still nothing to reduce the waste except the algae, which will grow uncontrollably to consume the milkfish waste, thus raising the level of chlorophyll to the point where it chokes off the sunlight and nutrients needed for the coral reef to grow [Environmental Protection Agency 2004]. While it is possible to reduce the levels of waste through harvesting, doing so will only reduce the rate at which the bacteria level grows (a more gradual slope), not cause it to decline.

![](images/db7de49a182228901bc425ec96257fb969fb7eff24d49c92e13900fa3d276842.jpg)
Figure 3. Water quality when only milkfish are present.

# Task 3: Water Quality of a Polyculture

Before the farming of massive quantities of milkfish in pens, a balanced ecosystem contained a variety of species coexisting in ratios that allowed the waste of certain animals to serve as food for others. However, the demand for milkfish disrupted this balance. The ecosystem is not as ideal as it once was, as we modeled in Task 1; but the situation is not as bleak as the milkfish monoculture that we modeled in Task 2. The second model in that task shows that the quantities of other species in the current system are insufficient to reach target levels of water quality, levels that would maximize the value of biomass available for harvest by restoring the natural catalyst of coral growth. The coral serves as protective shelter for all of these species. Coral grows very slowly, on average only $80\mathrm{mm}$ /yr [Roth 1979].

By determining the quantities of species required to reach the desired water quality of $0.5 - 1.0 \times 10^{6}$ bacteria/ml and $0.25\mu \mathrm{g} / \mathrm{l}$ of chlorophyll, it is possible to increase the overall yield of fish available for harvest while recreating a polyculture that is sustainable.
Through modeling this process, we determine how to recreate the stable ecosystem present before commercial milkfish farming. This process will also reduce the overall cost of feed for the milkfish, since they can eat some of the other species.

By fixing the goals of acceptable water quality as the output of this model, we determine which combinations of populations of the species could be self-sustaining. Still, this practice requires guidelines for harvesting only a portion of any species, so as to avoid recreating the overfishing problem that caused the rise of commercial fish-farming, which created the problems with water quality and coral-reef destruction in the first place.

Re-establishing the balance that occurred in the region under the conditions of the Task 1 model is difficult. It requires introducing into the commercial fish pens other species that help to keep the various populations under control. However, our model demonstrates the pattern of what would happen to waste levels over time if such a combination were attempted. We took data from Internet sources to determine sustainability rates for each of the species and then adjusted the populations of each species to achieve the desired water-quality levels. The results of this model rely heavily on increasing the population of blue mussels to control the bacteria waste levels from the growing milkfish population. The downward trend in the level of bacteria present in the water is depicted in Figure 4.

In a few years, the population of blue mussels almost entirely eliminates the bacteria waste. Similarly, rabbitfish reduce the level of chlorophyll through consumption of algae, a process that provides more sunlight and nutrients for coral to grow again [Capuli and Kesner-Reyes 2008].
The milkfish keep the rabbitfish under control, and tiger prawns provide the milkfish an alternative food source so that the milkfish don't wipe out the rabbitfish population. Moreover, starfish consume the mussels to keep them from growing uncontrollably.

The reproductive rate of starfish can vary widely. If an overpopulation of starfish occurs before the blue mussels can grow sufficiently, the bacteria waste levels can grow exponentially, because the blue mussels are not yet able to sustain their own survival. Thus, the process requires a reduced presence of starfish early in the biodiversity effort and a greater number of blue mussels. After about six to eight months, the mussels have grown enough that more starfish can gradually be introduced. If the starfish reproduce too quickly early on, it may be necessary to add more blue mussels periodically, because there is no effective control on the starfish population.

Our model requires the introduction of certain quantities of starfish, rabbitfish, blue mussels, and giant tiger prawns to re-establish a sustainable polyculture that would support the milkfish while improving water quality and coral growth. It also requires harvesting guidelines so that the system can maintain itself naturally. The goal is to keep the harvesting guidelines above the demand for milkfish, so that overfishing becomes economically undesirable by creating excess supply above the level of demand.

![](images/7dbda3d691a309aa93fab8100a9687361c8a4453ce4afead31f143027020b7da.jpg)
Figure 4. Water quality with mussels.

The specific equations used to determine the water-quality levels are given in Table 1 on p. 142.

# Task 4: Valuing Polyculture for Human Consumption

Showing that a milkfish monoculture is undesirable for the long term is an insufficient argument to change a population's practices. We must also demonstrate how changing those practices now benefits the population economically.
As part of Task 3, we modeled and demonstrated what input quantities of other species would establish a self-sustaining polyculture that yields more harvestable biomass over the long term. However, those inputs come with short-term up-front monetary costs, in addition to longer-term costs in the form of restrained harvesting guidelines.

To demonstrate the benefits of these changed practices, it is important to clarify the time required for water quality to improve and coral to grow again. It is also necessary to demonstrate how this growth will lead to more money for the population than continuing to farm only milkfish. This process requires setting a value on coral growth as well as on the harvestable fish in the system. We therefore sought to establish values for the different species and to explain why these species as a whole could produce a greater overall income for the population than simply growing milkfish.

On the simplest level, besides being unsustainable over the long term because of depletion of the natural resources in an area, growing only milkfish is undesirable because an excessive supply of milkfish makes each additional fish worth less. By harvesting a polyculture of species with economic value to both the local and the global population, the people of the Bolinao region have the potential to make more money and raise their standard of living over both the short term and the long term. Through diversification of risk, this policy also reduces the likelihood of a farmer losing an entire stock to disease.

We established that coral-reef growth creates a value of $52,000/km² and that each square kilometer of coral reef could produce 20 tons of fish biomass overall [White et al. 2007]. Giant tiger prawns are worth $6,400 per ton [Bray and Lawrence 1998]. Blue mussels yield much less, at $1,000/ton.
Starfish yield $2,200 and rabbitfish $4,600 a ton, although it is difficult to believe that such an herbivorous fish would be more valuable than the $1,280 for milkfish. Unfortunately, the pricing of most of these products was very difficult to obtain, and estimates vary greatly.

Our model from Task 3 was able to yield only general combinations of the ratios required in a biologically-diverse polyculture ecosystem. A pie chart of a combination that worked well to achieve acceptable water quality is depicted in Figure 5.

It is difficult to produce the exact optimal market value of the new system and thus conclusively show the desirability of transitioning from the current system. However, the high price of giant tiger prawns over milkfish makes them an attractive alternative. Growing additional blue mussels, though they may not be worth as much as milkfish, is desirable because the reduction in waste levels that they create allows more milkfish to be grown in the same area. Algae can be sold in smaller quantities to produce biodiesel, which now sells for $18 to $30 a gallon [Morton 1998].

We hope that the global production of a wider variety of seafood would create pressure for a more transparent and standardized market for seafood commodities, similar to the markets that already exist for cattle and grain. Such a market would allow better research on the desirability of making certain adjustments to various species.

![](images/e1eb5f39b769a0ed5225a41d3b019dd32d618ba0f42e22f0ce0e84b8b02b7465.jpg)
Figure 5. Optimal polyculture proportions (without algae).

# Task 5: Maximizing Bioproduce

One of the great difficulties in getting the population of Bolinao and commercial fish farmers to change their milkfish monoculture is to show tangibly how a polyculture would not only improve water quality and coral growth but also give them greater income.
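One way to make this comparison tangible is to value each candidate polyculture's harvest at the Task 4 prices and subtract its input costs. The sketch below illustrates the idea; the harvest tonnages and input costs are hypothetical placeholders, and only the per-ton prices come from the text.

```python
# Compare candidate harvest mixes that (by assumption) meet the same
# water-quality target: revenue = sum of tons harvested x price per ton,
# profit = revenue - input cost.  Tonnages and costs are hypothetical;
# the per-ton prices are the estimates quoted in Task 4.

PRICE = {  # US$ per ton
    "milkfish": 1280, "prawn": 6400, "mussel": 1000,
    "starfish": 2200, "rabbitfish": 4600,
}

def profit(harvest_tons, input_cost):
    """Profit of one candidate mix: priced harvest minus setup costs."""
    revenue = sum(PRICE[sp] * tons for sp, tons in harvest_tons.items())
    return revenue - input_cost

candidates = {
    "monoculture": ({"milkfish": 100}, 5_000),
    "polyculture": ({"milkfish": 70, "prawn": 15, "rabbitfish": 10,
                     "mussel": 20, "starfish": 2}, 20_000),
}
best = max(candidates, key=lambda name: profit(*candidates[name]))
for name, args in candidates.items():
    print(name, profit(*args))
print("best:", best)  # with these placeholder tonnages, the polyculture wins
```

With these illustrative numbers, the higher-priced prawns and rabbitfish more than offset the polyculture's larger input cost, which is the qualitative point of the Task 5 comparison.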
Different proportions of the populations of species in a polyculture may produce the same water quality, but they are not all equal in economic value. In Task 4 we established price estimates for each species in the polyculture. By taking the proportions of these populations from the model in Task 3 and multiplying the harvestable population of each species by its price from Task 4, we can estimate the revenue from that polyculture. By subtracting the associated input costs for establishing that polyculture, we can compare the various polyculture combinations that meet the desired water-quality levels and choose the one that ultimately maximizes profit for the fish farmer.

Unfortunately, although we were able to develop such a model, its accuracy is questionable because of uncertainties in current and future market prices.

Ultimately, with greater understanding of price changes and levels of pollution at different sites, it would be possible to use our models from Task 3 to determine an optimal harvest strategy for achieving a given level of water quality. Because water quality is a product of both the level of bacteria and the level of chlorophyll, certain sites may produce more value when the combination of different species is tailored to reduce more of one of those two contaminants.

# Task 6: Changes to Reestablish a Balance

Our policy recommendations result from understanding the interactions of the various inputs in the polyculture ecosystem. Principles such as the inclusion of additional blue mussels are helpful in improving initial conditions and reducing waste. Even if the Bolinao people ultimately reject a transition to a polyculture, at the very least they should attempt to improve the water quality of the milkfish pens by using scoops to remove fish waste, which can be recycled and sold to local land farmers.
Simple ideas and principles, such as the value of local community-based education and policing efforts, can help improve the local quality of life for all Filipinos, regardless of the approach they choose. We have highlighted many of these pragmatic practices in a letter addressed to the Pacific Marine Fisheries Council, which we believe is the best avenue for suggesting these ideas to the people in the region.

# Conclusion

At first glance, Bolinao appears to have a problem of coral-reef destruction and water-quality deterioration caused by the overproduction of a single type of fish. However, examined more closely, the real issues are much more personal.

Filipinos are not farming large amounts of milkfish because they seek to destroy their living environment. Their milkfish-monoculture practices stem from a growing need for animal protein, of which fish is the most economical and accessible source for the people of their country to produce. Fish offers the highest yield of food for raw weight, at $65\%$, while simultaneously requiring the lowest amount of feed input to achieve a kilogram of animal protein. What Bolinao is really struggling with is the very human problem of meeting a growing need for better nutrition and quality of life for their children.

To break this cycle of short-term economic gain at the expense of gradual environmental destruction of both accessible water quality and coral growth, it must be demonstrated to local farmers that their current milkfish monoculture does more harm than good and that an alternative polyculture offers not only better long-term stability for the environment of the Bolinao region (through improved water quality and coral growth) but also a better economic situation for the local population.

Our solution involves a series of models to explain the past system, the current system, and what a transition in aquaculture practices could create if a future system is adopted.
It then focuses on explaining the economic value of the current system and comparing it to the better potential economic value of a polyculture system based on harvesting a variety of species, as opposed to the current monoculture focused on harvesting only milkfish.

Our solution offers the double benefit of a more sustainable ecosystem, one that reduces bacteria and chlorophyll to better levels while allowing greater coral growth, and of economic benefit to the local population through greater revenue from the variety of species harvestable from a polyculture.

At first glance, a monoculture of milkfish seems to be the type of specialization that offers the highest economic profit for fish farmers, by reducing the unit cost of each fish. However, this view does not take into consideration the long-term sustainability cost of the byproduct waste of a milkfish monoculture. Neither does it consider the greater profit that can be obtained through the introduction and harvesting of other species, which naturally reduce the economic cost of raising milkfish by reducing the effect of milkfish waste and creating more space to grow additional milkfish. These other species eat the byproducts of the milkfish, which provides more harvestable biomass per unit of effort. Finally, a variety of combinations of proportions of different populations in a polyculture produce the same level of water quality.

However, not all combinations yield the same economic profit for the farmer, because certain fish offer a better profit than others and can be raised in larger quantities than in other scenarios. While our model was unable to demonstrate multiple scenarios that provide greater economic profit than the current system, such scenarios do exist and could be demonstrated by our model given a larger data set.
We developed simulations to determine the varying water quality under different quantities of the various species, and we accounted for the harvesting rates required to make these polycultures attainable. By applying the population quantities for polyculture combinations that achieved the appropriate water-quality levels to a formula that produces a profit value for each combination, we could have determined which polyculture would provide the most profit to the fish farmer at the desired level of water quality, one that allows successful coral growth and long-term sustainability of the polyculture. Furthermore, this increased profit could then be used in global trade to create a wider variety of diet than would otherwise be available.

Our model could be improved through the use of a more complete data set, to bring the population ratios among the species closer to what is observable in nature. A more-developed data set would not fundamentally change any of the relationships among the variables in the models that we developed. Additionally, accounting more accurately for the human population in the model, and adjusting the harvesting rate of the milkfish accordingly, would provide more accurate results than trying to extrapolate what the human population should harvest periodically to bring the ecosystem back into balance. Including the human population in the growth model for the milkfish is necessary because in our ecosystem humans are the only predator of the milkfish, making them a requirement for equilibrium to be achieved.

Finally, our model of the economic benefits for the people of Bolinao would be more accurate if we had been able to obtain more complete and recent pricing data for the market value and cost of introducing the other species into the commercial fish farms alongside the milkfish.
Despite the shortcomings of our models, we were still able to adequately show the economic and environmental benefit to the region by transitioning from a monoculture of milkfish to a polyculture of biological diversity. One of the biggest contributors to changing this system and reducing bacterial waste was the growth of the blue mussel molluscs. Through this process, our models should convince the people of the Bolinao region of the Philippines to transition from the current monoculture of raising and harvesting only milkfish to a polyculture where they raise and harvest a wider variety of species to obtain the maximum sustainable yield from the ecosystem. With this optimal combination, implemented through the introduction of better farming practices and other species of aquatic life, it is possible to achieve a better result for both the environmental and economic quality of life for the Bolinao people over both the short and long term.

# References

Bray, William A., and Addison L. Lawrence. 1998. Successful reproduction of Penaeus monodon following hypersaline culture. Aquaculture 159 (3-4): 275-282. http://www.ingentaconnect.com/content/els/00448486/1998/00000159/00000003/art00236.
Burford, Michele, and Kevin C. Williams. 2001. The fate of nitrogenous waste from shrimp feeding. Aquaculture 198 (1-2): 79-93. http://cat.inist.fr/?aModele=afficheN&cpsidt=1061076.
Capuli, Estelita Emily, and Kathleen Kesner-Reyes. 2008. FishBase: Siganus javus (Linnaeus, 1766) Streaked spinefoot [rabbitfish]. http://filaman.ifm-geomar.de/Summary/SpeciesSummary.php?id=4618.
Environmental Protection Agency. 2004. Total maximum daily load for nutrients and suspended sediment Lake Ontelaunee Berks and Lehigh County, Pennsylvania. Appendix B—Procedure for calculating daily dissolved oxygen (DO) swing from BATHTUB output. http://www.epa.gov/reg3wapd/tmdl/pa_tmdl/LakeOntelauneeTMDL/ontelaunee_TMDL_report_AppendixB.pdf.
Food and Agriculture Organization of the United Nations, Fisheries and Aquaculture Department. n.d. Cultured aquatic species information programme: Chanos chanos (Forsskål, 1775). http://www.fao.org/fishery/culturedspecies/Chanos_chanos.
Hambrey, John. 1999. Milkfish, Chanos chanos. Chapter 3 in Tropical Coastal Aquaculture Student Handbook. Bangkok, Thailand: Aquaculture and Aquatic Resources Management, Asian Institute of Technology. http://www.aqua-information.ait.ac.th/aarmpage/pdf/hambrey_2000-coastal_aquaculture/03-milkfish.pdf.
Luna, Susan M. 2009. FishBase: Chanos chanos (Forsskål, 1775) Milkfish. http://www.fishbase.org/Summary/SpeciesSummary.php?id=80.
Morton, Steve L. 1998. Ethnobotanical leaflets: Modern uses of cultivated algae. http://www.siu.edu/~ebl/leaflets/algae.htm.
Roth, Ariel A. 1979. Coral reef growth. Origins 6 (2): 88-95. http://www.grisda.org/origins/06088.htm.
White, Alan T., Edgardo Gomez, Angel C. Alcala, and Garry Russ. 2007. Evolution and lessons from fisheries and coastal management in the Philippines. Chapter 5 in Fisheries Management: Progress towards Sustainability, edited by Tim R. McClanahan and Juan Carlos Castilla, 88-111. New York: Wiley-Blackwell. http://oneocean.org/download/db_files/Chap5.Phil.06.Book.pdf.

![](images/3e43f6871b1d650f088705373e6fb5a92c612ed6a4a84eb13661fbcf471920df.jpg)
Dr. Edward Swim (MCM/ICM coordinator at West Point) with team members Sean Clement, Timothy Newlin, and Joseph Lucas receiving their ICM certificates.

# Authors' Commentary: The Outstanding Coral Reef Papers

Melissa Garren

Center for Marine Biodiversity and Conservation

Scripps Institution of Oceanography

University of California-San Diego

La Jolla, CA

Joseph Myers

Dept. of Mathematical Sciences

U.S.
Military Academy

West Point, NY

# Introduction

According to the Food and Agriculture Organization of the United Nations, aquaculture is the fastest-growing sector of animal-based food production for human consumption. As the global population increases, pressure on coastal ecosystems and the need to produce food also grow. More than half of the world's population lives within $200\mathrm{km}$ (120 mi) of a coast, and many natural fisheries are already fished at or over capacity. Within this context, the influence of aquaculture on coastal ecosystems is a topic of social, environmental, and scientific concern and the subject for this year's problem in the Interdisciplinary Contest in Modeling (ICM)$^{\text{®}}$.

Coral reefs are delicate and valuable ecosystems that thrive only in shallow, tropical, nutrient-poor waters. They cover less than $1\%$ of the ocean's floor but harbor $25\%$ of marine biodiversity. Many people depend on these ecosystems for food, trade, tourism, shoreline protection, and new sources of medicinal compounds. The majority of coral reefs on this planet grow along inhabited tropical coastlines of developing countries. Thus, as an ever-growing number of aquaculture facilities are installed in coastal waters, the interactions between coral reef ecosystems and fish farms are of particular interest.

There are many forms of aquaculture practices, but the more environmentally compatible versions tend to be more costly to set up and operate than their less compatible counterparts. Developing methods that are cost-effective and have a low impact on the surrounding ecosystem is an important, complex, and timely challenge. A common method is simply to raise one species of carnivorous fish in pens set directly in coastal waters.
Unfortunately, this method causes several environmental problems:

- There is no real barrier between the captive and wild populations, so any disease that occurs in the densely packed pens will flow directly into contact with wild populations.
- No filtration of effluent exists—all excess feed, fish feces, and microbial populations mix directly with natural waters.
- Living organisms can only use $10 - 20\%$ of the energy they consume, so the other $80 - 90\%$ goes to waste—raising an organism higher up the food chain (a carnivore) means that several rounds of $80 - 90\%$ loss occurred to simply make the food that the target species will eat.

These practices are currently happening on and adjacent to many coral reefs. A growing body of scientific literature is demonstrating that these fish farms have a significant negative impact on the corals, and thus major improvements are needed to attain a viable industry and a sustainable coral reef ecosystem.

# Formulation and Intent of the Problem

The goal of this year's ICM problem was for student teams to tackle the ecological and technological challenges of improving such practices within the tractable confines of one specific case study of milkfish (Chanos chanos) aquaculture directly next to coral reefs in Bolinao, Philippines. There are many possible approaches to improving the current situation, but we asked teams specifically to come up with a polyculture scenario that would improve water quality sufficiently for corals to recolonize the areas close to the fish pens where they currently cannot survive. By adding more than one species to the industry, energy inputs can be reduced by growing food for the milkfish locally and water quality can be improved by filter feeders and algae that absorb excess nutrients without requiring major gear or technology shifts.
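The compounding energy loss described in the bullets above can be made concrete with a small sketch; the 1,000-unit primary-production figure and the two-step food chain are illustrative assumptions, not values from the problem.

```python
# Illustrative sketch of compounding trophic losses: if each step up the
# food chain retains only 10-20% of the energy consumed, a carnivore two
# steps above primary production receives only 1-4% of that energy.
def energy_retained(primary_energy, efficiency, steps):
    """Energy reaching an organism `steps` transfers up the food chain,
    assuming the same transfer efficiency at every step."""
    return primary_energy * efficiency ** steps

low = energy_retained(1000.0, 0.10, 2)   # pessimistic: 10% per step
high = energy_retained(1000.0, 0.20, 2)  # optimistic: 20% per step
```

Under these assumptions, only 10 to 40 of the original 1,000 units reach the carnivore, which is part of why adding species that recapture the wasted energy and nutrients improves both the ecology and the economics.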
This particular method of more environmentally responsible aquaculture also emphasizes the ecological links between different species and trophic levels. There are a number of potentially negative impacts associated with introducing new species into an ecosystem, so teams were also asked to evaluate the potential risks associated with their polyculture solution.

Teams were first asked to model the original, healthy coral reef ecosystem before the introduction of fish farms. For the purpose of modeling, the complex ecosystem was simplified to one member from each major trophic and phylogenetic guild. The purpose was to identify how the natural system's organisms interact to control water quality in the area.

The second task was to model the current system with the monoculture of milkfish present. Since the natural milkfish food supply was removed by placing the animals in pens, feed must be purchased and added to the system. The idea was to see the effect of exogenous feeding on water quality. Teams compared the results of their model to actual observed water quality data from Bolinao. Next, teams were asked to model a remediation scenario. They chose the species they wanted to include in their polyculture system and modeled the effects on water quality, harvest, and economic value. They were asked to discuss the harvesting of each species and what parameters they would use to determine the value of the harvest. The last modeling challenge was to maximize the value of the total harvest while maintaining sufficient water quality levels for corals to grow.

The end result of modeling was to write recommendations to the Pacific Marine Fisheries Council regarding the management of the Bolinao milkfish aquaculture industry.
This is where the teams evaluated the ecological pros and cons of the species chosen for their particular polyculture system, the economic trade-offs of improving water quality, and how long the remediation of Bolinao coral reefs can be expected to take.

A major goal of this contest problem was for teams to relate the modeling choices they made to realistic ecological and biological processes. Teams were asked to use realistic parameters for their models based on actual ecological and physiological data and to justify any assumptions made. Fundamental understandings of primary production, trophic interactions, and energy transfer were essential for building and critiquing their own models.

This year's ICM problem is based on research being done by the World Bank and Global Environment Facility's Coral Disease Working Group. This international group of scientists has been working to understand the ecological consequences of this fish farm industry on coral health. The first phase of the project was to identify some of the mechanisms by which fish pens are negatively impacting corals. This is the final year of phase one, and much progress has been made. As we enter phase two of the project, we move forward with a goal of testing and implementing alternative methods of farming in this area. Polyculture is one of the alternatives currently being discussed.

# References

Hinrichsen, D. 1998. Coastal Waters of the World: Trends, Threats, and Strategies. Washington, DC: Island Press.
United Nations Food and Agriculture Organization (FAO). 2006. State of World Fisheries and Aquaculture 2006. http://www.fao.org/docrep/009/a0699e/A0699E00.htm.

# About the Authors

![](images/7f70619cca44367dcad64747cafd9e92589236bec9f0bce3a2823c59e397bcb4.jpg)

Melissa Garren is currently a Ph.D. candidate in marine biology at Scripps Institution of Oceanography in La Jolla, CA. She earned her B.S. in molecular biology from Yale University and her M.S.
in marine biology from Scripps Institution of Oceanography. Her research focuses on the ecological response of microbes to organic coastal pollution, particularly in coral reef environments, with the aim of understanding the effects on coral health and disease. + +Joe Myers has served for two decades in the Dept. of Mathematical Sciences at the United States Military Academy. He holds degrees in Applied Mathematics and other disciplines and is a licensed Professional Engineer. He currently serves as a Professor, having directed freshman calculus, sophomore multivariable calculus, the electives program, and the research program. He has been involved in several major initiatives to improve teaching and learning, including building interdisciplinary activities and programs under the NSF-sponsored Project Intermath; integrating technology and student laptop computers into the classroom; and weaving modeling, history, and writing threads into the mathematics curriculum. He enjoys modeling and problem solving, has posed and guided the research of dozens of math majors, and has been involved in several research projects with the Army Research Laboratory. + +# Judges' Commentary: The Outstanding Coral Reef Papers + +Sheila Miller + +Dept. of Mathematical Sciences + +U.S. Military Academy + +West Point, NY + +Melissa Garren + +Center for Marine Biodiversity and Conservation + +Scripps Institution of Oceanography + +University of California-San Diego + +La Jolla, CA + +Rodney Sturdivant + +Dept. of Mathematical Sciences + +U.S. Military Academy + +West Point, NY + +Rodney.Sturdivant@usma.edu + +# Introduction + +The Interdisciplinary Contest in Modeling (ICM) $^{\text{®}}$ is an opportunity for teams of students to tackle challenging real-world problems that require a wide breadth of understanding in multiple academic subjects. This year's problem required a particularly deep understanding of ecology to model a solution effectively. 
Due to the rapid growth of aquaculture facilities currently being installed in or adjacent to many sensitive coastal ecosystems, research into sustainable culturing methods is an active area of investigation. Seven judges gathered in late March to select the most successful entries of this challenging competition out of an impressive set of submissions. + +# The Problem + +The primary goal of this year's ICM was to develop an aquaculture scenario that incorporated species from multiple trophic levels to reduce the level of effluent leaving the fish pens for a specific case study in the Philippines. These fish farms are adjacent to coral reefs, thus the target was to improve water quality such that corals could thrive in the area while an economically viable aquaculture industry could also be maintained. The main tasks expected of the teams were as follows: + +1. Model the original Bolinao coral reef ecosystem before the introduction of fish farms. +2. Model the current Bolinao milkfish monoculture. +3. Observe the remediation of Bolinao via aquaculture. +4. Maximize the value of the total harvest. +5. Call to action. + +Overall, the judges were impressed by both the strength of many of the submissions of individual teams and the variety of approaches that teams used to address the questions posed by the ICM problem. + +# Judges' Criteria + +In order to ensure that the individual judges assessed submissions on the same criteria, we developed a rubric. The framework used to evaluate submissions is described below. + +- Executive Summary: It was important that teams succinctly and clearly explained the highlights of their submissions. These executive summaries needed to include modeling approach(es) used for both the current monoculture and in remediation using polyculture. Further, the summary needed to answer the most pressing questions posed in the problem statement, namely recommendations for remediation and the impact on water quality and optimizing the harvest. 
Truly Outstanding papers were those that communicated their approach and recommendations in well-connected and concise prose.
- Domain Knowledge and Science: The problem this year was particularly challenging for teams in terms of the science.

- To address the requirements effectively, teams needed first to establish an ecological frame of reference. Many teams were able to do this reasonably well; teams that excelled clearly did a great deal of research. Often, what distinguished the top teams was the ability not just to describe the ecosystem in a single section of the paper, but to integrate this domain knowledge throughout the modeling process.

- A second important facet of the problem was the ability to understand issues that impact water quality. Many teams created reasonable models of the species and their interactions but very few effectively modeled the water quality.

- Modeling and Assumptions: The most popular models used were differential equations—usually linear for the simple cases and then expanding to include nonlinear terms. Simulation was also a popular approach to the problem. Often the models appeared appropriate but neglected any discussion of important assumptions. Additionally, many papers lacked a reasonable discussion of model development, instead presenting a series of equations and parameter values without support. Finally, the very best papers not only formulated the models well, but were able to use the models to produce meaningful results to address the problem and to make recommendations.
- Solution (Optimization): Perhaps the most distinct difference between the best papers and others was the ability to utilize their models to develop an actual solution to the problems. Many teams failed to address the most important portions of the problem in any substantive way—what should be done to remediate Bolinao and how to balance the water quality while maximizing harvesting.
As a result, the judges put additional emphasis on the actual solution presented, in addition to the modeling approach.
- Analysis/Reflection: Successful papers utilized the models developed in early sections of the paper to draw conclusions about the important issues in addressing problems with the Bolinao ecosystem. For example, the important parameters were identified in terms of their impact on the water quality and the harvest available. In the best papers, trade-offs were discussed and, in truly exceptional cases, some sensitivity analysis conducted to identify potential issues with the solutions presented.
- Communication: The challenges of the modeling in this problem may have contributed to the difficulty many teams had in clearly explaining their solutions. Papers that were clearly exposited distinguished themselves significantly, emphasizing that it is not only good science that is important, but also the presentation of the ideas.

# Discussion of the Outstanding Papers

The two Outstanding papers each had features that distinguished them from the other submissions. Working under the time constraint, both teams were impressive in their ability to research the ecological issues, propose reasonable models, and to present their work in a clear and readable manner. This year, in particular, the judges felt that the Outstanding papers each demonstrated particular strengths in one of the important dimensions discussed in the previous section. No submission was able to dominate every area, but these two teams were clearly superior in different ways.

# China University of Mining and Technology

The China University of Mining and Technology submission was notable for the impressive array of modeling techniques utilized in attacking the problems. There were other papers with a similar level of modeling, but this group not only described the modeling process clearly but connected the models coherently to the problem at hand.
As with many of the teams, the principal models used were differential equations (Volterra models). The team also used the Analytic Hierarchy Process (AHP), as well as nonlinear optimization, to improve their models and to address the later requirements of the problem. They propose strengthening the "middle strata of the foodweb" by introducing herbivorous species to the polyculture. They also propose a strategy for harvesting various species while still satisfying the constraint to maintain good water quality in Bolinao. While extremely strong on the modeling, the paper could have been further improved with more depth on the ecological issues and better overall quality of the writing.

# U.S. Military Academy

The paper from the U.S. Military Academy included perhaps the clearest understanding and presentation of the ecological problem and issues among all submissions. The paper was extremely well written and researched. Unlike many teams, the group chose discrete models (difference equations) as the primary tool for their analysis and then employed simulations to help with the optimization tasks. The team did an exceptional job of showing how their model output supports the move from a monoculture to a polyculture—they added blue mussel mollusks to the system to show the positive effects of such a change. They also proposed an optimal harvesting strategy involving multiple species. This paper could have been strengthened by adding detail about the models and modeling process.

# Why Some Other Teams Weren't Outstanding

In addition to the two Outstanding papers, the judges noted several other papers of comparable merit in terms of the modeling effort that were excluded from awards due to issues with proper documentation. The issue was not the fact that material from Websites or books was included—within reason, quotations properly cited are appropriate.
Rather, some teams used material taken directly from such sources (sometimes as much as one or more pages of text) in place of their own ideas, failed to document a quoted passage as a quotation, or both.

# Conclusion

The judges extend their congratulations to all who participated in the contest. It is a pleasure to see the variety of approaches taken by the different teams; some of these were novel and interesting. The number of excellent papers made the judging both enjoyable and difficult. The problem this year was extremely challenging, and the ability to both research and then model in a short period of time was impressive.

Two facets of this year's ICM are worth noting:

- The importance of understanding the underlying science in formulating mathematical models. In the practice of modeling, assumptions should be carefully thought out and checked and, whenever possible, experts should be consulted.
- How critical communication skills are to the analyst. A great mathematical model is not likely to be used if not clearly and concisely explained.

# Recommendations for Future Participants

- Not even ingenious solutions are a substitute for clear exposition.
- Ensure that the assumptions you make are clear to the reader, and address them in your conclusions and recommendations.
- Address all aspects of the problem that are asked.
- Between two equally clear explanations, the shorter one is better.
- Properly citing sources is critical. Judges notice plagiarized material and disqualify papers that contain it; cite as you go, not at the end.
- The recommendations and sensitivity analysis are often as important as the model itself. Frequently, it is better to have a well-analyzed model that accounts for slightly less than a comprehensive but untested one.
- Team members should work to integrate their final submissions. Your paper should read as though it has only one author.
# About the Authors

Sheila Miller is an Assistant Professor and Davies Fellow at the U.S. Military Academy in West Point, NY. She holds a Ph.D. in Mathematics from the University of Colorado, Boulder and does research in set theory, sea turtle population modeling, value of information, and change detection in social networks.

![](images/95d54021ad894f87c4cfb11f28a5640f04c6c8f63b7e9004e0a5a9e04b54a283.jpg)

![](images/1b814ae6526242b751d00c9da8a6ea77ab17bdedfc74171b02346cbf0cb0a245.jpg)

Melissa Garren is currently a Ph.D. candidate in marine biology at Scripps Institution of Oceanography in La Jolla, CA. She earned her B.S. in molecular biology from Yale University and her M.S. in marine biology from Scripps Institution of Oceanography. Her research focuses on the ecological response of microbes to organic coastal pollution, particularly in coral reef environments, with the aim of understanding the effects on coral health and disease.

Rod Sturdivant is an Associate Professor at the U.S. Military Academy in West Point, NY. He earned a Ph.D. in biostatistics at the University of Massachusetts-Amherst and is currently program director for the probability and statistics course at West Point. He is also founder and director of the Center for Data Analysis and Statistics within the Dept. of Mathematical Sciences. His research interests are largely in applied statistics with an emphasis on hierarchical logistic regression models.

![](images/8a6a05b75149220eab7d2e41d0f6677f4a138b41b7bc783a0f56d8ddf689f09d.jpg)

# On Jargon

# Ptolemy to Fourier: Epicycles

Fawaz Hjouj

Dept. of Mathematics

East Carolina University

Greenville, NC 27858

hjoujf@ecu.edu

# Introduction

This note highlights the remarkable resemblance of Fourier series to Ptolemy's representation of the motion of a planet. We may say that Ptolemy anticipated Fourier analysis some sixteen centuries ago [Zebrowski 2000].
Perhaps Lagrange was the first to recognize this connection [Goldstine 1977, 171].

A Fourier series is the representation

$$
f (x) = \sum_ {- \infty} ^ {\infty} F [ k ] e ^ {2 \pi i k x / p}, \quad - \infty < x < \infty , \tag {1}
$$

for a $p$-periodic function $f$, with $F[k]$ being the amount of the exponential $e^{2\pi ikx / p}$ that we must use in (1) for $f$. With Euler's identity, we can also obtain the alternative representation

$$
f (x) = \frac {a _ {0}}{2} + \sum_ {k = 1} ^ {\infty} \left\{a _ {k} \cos \left(\frac {2 \pi k x}{p}\right) + b _ {k} \sin \left(\frac {2 \pi k x}{p}\right) \right\},
$$

with

$$
a _ {k} = \frac {2}{p} \int_ {x = 0} ^ {p} f (x) \cos \left(\frac {2 \pi k x}{p}\right) d x, \qquad b _ {k} = \frac {2}{p} \int_ {x = 0} ^ {p} f (x) \sin \left(\frac {2 \pi k x}{p}\right) d x.
$$

# Old Ideas: Hipparchus—Ptolemy Model

Claudius Ptolemy created a mathematical model of the Aristotelian universe in which a planet moved on a small circle called an epicycle that in turn moved on a larger circle called the deferent (Figure 1).

![](images/b7788e32f942485247bc5544cfb09a34cc5fab578d7941370351027582231bf4.jpg)
Figure 1. The basic epicycle-deferent system.

The word "planet" comes from the Greek word for "wanderer," referring to the eastward motion of the planets against the background of the fixed stars [Arny 2006]. The planets did not, however, move at a constant rate; and they could occasionally stop and move westward for a few months before resuming their eastward motion. This backward motion is called retrograde motion.

# Modern Notation

Consider the uniform circular motion of a point $X$ around the Earth $E$ at the origin, as shown in Figure 1. We represent the position of $X$ by the equation

$$
z _ {1} (t) = r _ {1} e ^ {2 \pi i t / p _ {1}}, \qquad - \infty < t < \infty ,
$$

with period $p_1$ and radius $|r_1|$ for the orbit about the Earth.
Let $P$ be a planet that moves about the point $X$ with radius $|r_2|$ and period $p_2$, and let $z_2(t)$ be the position of the planet at time $t$ with respect to the origin (Earth $E$). Then

$$
\begin{array}{l} z _ {2} (t) = z _ {1} (t) + r _ {2} e ^ {2 \pi i t / p _ {2}}, - \infty < t < \infty \\ = r _ {1} e ^ {2 \pi i t / p _ {1}} + r _ {2} e ^ {2 \pi i t / p _ {2}}. \\ \end{array}
$$

Thus, the planet is moving in a circular path around a point that undergoes uniform circular motion around the Earth $E$ at the origin (Figure 1).

This two-circle model can produce the observed retrograde motion, but it cannot fit the motion of the planets to observational accuracy.

We build a more sophisticated model using $n - 1$ moving epicycles:

$$
z _ {n} (t) = \sum_ {k = 1} ^ {n} r _ {k} e ^ {2 \pi i t / p _ {k}}. \tag {2}
$$

In (2), if the $p_k$ are integer multiples of some $p$, then the motion is periodic and (2) is a finite Fourier series. Kammler [2000] summarizes this history:

Hipparchus and Ptolemy used a shifted four-circle construction of this type (with the Earth near but not at the origin) to fit the motion of each planet. These models were used for predicting the positions of the five planets of antiquity until Kepler and Newton discovered the laws of planetary motion some 1,300 years later.

Example: Earth and Mars orbit the Sun with periods $T_{E} = 1$ year and $T_{M} = 1.88$ years at mean distances $1 \, \text{au} \approx 150 \times 10^{6} \, \text{km}$ and $1.52 \, \text{au} \approx 228 \times 10^{6} \, \text{km}$. We may use the simple approximations

$$
z _ {E} (t) \approx e ^ {2 \pi i t / 1 \mathrm {y r}}, \qquad z _ {M} (t) \approx 1.52 \, e ^ {2 \pi i t / 1.88 \mathrm {y r}}
$$

to study the motion of Mars as seen from Earth [Kammler 2000].
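Under the circular, coplanar approximations above, this two-circle picture can be checked numerically: the apparent angle of Mars, seen from Earth, should decrease during a retrograde episode and increase otherwise. The sampling step below is an arbitrary choice for illustration.

```python
# Sketch of the two-circle model above: Mars as seen from Earth,
# z(t) = 1.52*exp(2*pi*i*t/1.88) - exp(2*pi*i*t), with t in years.
# Retrograde motion shows up as intervals where the apparent angle
# of Mars decreases.
import cmath
import math

def mars_from_earth(t):
    """Geocentric position of Mars under circular coplanar orbits (au)."""
    z_mars = 1.52 * cmath.exp(2j * math.pi * t / 1.88)  # heliocentric Mars
    z_earth = cmath.exp(2j * math.pi * t)               # heliocentric Earth
    return z_mars - z_earth

ts = [k * 0.005 for k in range(401)]  # two years in small steps
angles = [cmath.phase(mars_from_earth(t)) for t in ts]
# step-to-step change in apparent angle, unwrapped past the +/- pi cut
rates = [a2 - a1 for a1, a2 in zip(angles, angles[1:])]
rates = [r - 2 * math.pi * round(r / (2 * math.pi)) for r in rates]

has_retrograde = any(r < 0 for r in rates)  # backward (westward) motion
has_prograde = any(r > 0 for r in rates)    # normal eastward motion
```

Because $t = 0$ places both planets on the positive real axis (an opposition), the samples near $t = 0$ show the retrograde episode, while most of the remaining two years show ordinary prograde motion.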
Figure 2 displays

$$
z (t) = z _ {M} (t) - z _ {E} (t), \quad 0 \leq t \leq 2 \mathrm {y r}, \tag {3}
$$

![](images/7a4ff18f4fedef460a91e7b79af479f8f96e86bc2123c2a939f3e881e5d7f87b.jpg)
Figure 2. $z(t) = z_{M}(t) - z_{E}(t), 0 \leq t \leq 2$ yr, from (3).

which shows the position of Mars as seen from Earth. This orbit corresponds to one of the two-circle approximations of Hipparchus and Ptolemy that we described.

# References

Arny, Thomas T. 2006. Explorations. 4th ed. New York: McGraw-Hill.
Goldstine, Herman H. 1977. A History of Numerical Analysis from the 16th through the 19th Century. New York: Springer-Verlag.
Kammler, D.W. 2000. A First Course in Fourier Analysis. Upper Saddle River, NJ: Prentice-Hall.
Zebrowski, Ernest. 2000. A History of the Circle: Mathematical Reasoning and the Physical Universe. New Brunswick, NJ: Rutgers University Press.

# About the Author

![](images/dd38abc49698596e235ffe3e7128dee47ea17af27da8ebe5df2852cea47ecb87.jpg)

Dr. Hjouj has a bachelor's degree from Yarmouk University in Jordan, a master's degree from An-Najah National University in Palestine, and a Ph.D. from Southern Illinois University, all in mathematics. Dr. Hjouj teaches at East Carolina University and at Craven College in North Carolina.

# Reviews

Zill, Dennis G., and Patrick D. Shanahan. 2008. A First Course in Complex Analysis With Applications. 2nd ed. Sudbury, MA: Jones and Bartlett; 480 pp, $129.95. ISBN 978-0-7637-5772-4.

It seems to be a foregone conclusion that a seasoned complex analyst will want to make love publicly to his/her subject by writing a book thereon. This could be an elementary text, or an advanced monograph, or something in between. I myself have fallen prey to this vagary of the spirit at least six times, and I may exhibit the weakness on additional occasions in the future.

And what better subject in which to exhibit the frailties of the flesh?
For complex analysis is so elegant, so compelling, so full early on of exciting new results, that one can hardly resist the temptation to explain the subject to the world at large.

Now we have a new contribution to the fray, by Zill and Shanahan. This is a text for undergraduates. One of the distinguishing features of an undergraduate complex variables textbook—separating it markedly from a graduate text—is that the main audience is engineers (not mathematics majors). So both the content and the focus are special: There is certainly a de-emphasis of proofs and of rigor in general, and a special focus is given to differential equations and applications. There are lots of examples, and there are precise and compelling graphics. The cover of the book is liable to have purples and oranges and a dynamic design (because this is what the engineering market wants). The exposition is, likely as not, brisk and lively.

Certainly, the book under review conforms to the paradigm just described. And what makes it special or distinctive? First, there is an extraordinary number of well-selected exercises, covering both drill and solid thought problems. There are lots of examples in the text proper. The graphics, mostly in the vein of Mathematica figures, are well-drawn and accurate (though my gut feeling is that there could be many more of them). The book is full of textual explanation and comforting patter. Words are not wasted—they are used well—and points are made succinctly and clearly.

As already noted, a book of this type must have applications. And in fact each chapter of the present book has a substantive section called Applications. These applications are in no way surprising or innovative—there is the usual material on vector fields and the argument principle and approximation theory and boundary-value problems—but it is well-presented and coherent.
coherent. The student will be kept gainfully employed in working through these sections, and he/she will get a good sense of "What is all this stuff good for?"

The book has several labor-intensive but certainly worthwhile features, such as a thorough glossary and a table of conformal mappings. It is clear that these authors are sensitive to, and indeed are plugged in to, a vast terrain of information about the points and mappings under consideration.

There are already a good many books out there on undergraduate complex variable theory, including the classic texts of Brown and Churchill [2009], Saff and Snider [2003], Derrick [1984], and many others. The present book fits comfortably into the lower end of this spectrum. It makes a conscious effort to downplay rigor and emphasize practicalities. Even so, it includes proofs of all the key results, and it presents those proofs in a clean and palatable fashion. It includes even tricky topics like the Schwarz-Christoffel formulas, and it shows real finesse in presenting these topics in an accessible and friendly manner.

As one might expect, a book like this does not present a complete proof of the Riemann mapping theorem. Fair enough. But it does give fairly substantive coverage (with proofs) to the argument principle, to Rouché's theorem, and to the calculation of improper integrals using the calculus of residues.

If I had to say something critical—and I guess it is part of my job to find fault with something—I would note that the book does not have any sort of bibliography. This is really a shame. The lore of complex variables is vast and multitextured. The authors would be doing both students and teachers a great favor to provide a lexicon and concordance to the literature. Include books of applications, books of theory, books of history. This is part of teaching the subject.

In sum, I would call this a respectable and thorough introduction for undergraduates to the lore of complex variable theory.
Although the orientation for engineers is evident, these authors do not give short shrift to the venerable mathematics of the discipline. A mathematics major would do well to study from this book and will be well-equipped to go on in the study of complex analysis. The instructor will find this text to be reliable, accurate, and comforting. It contains no surprises (pleasant or unpleasant), but it consistently pleases. It is a useful contribution to the didactics of complex analysis. + +# References + +Brown, J.W., and R.V. Churchill. 2009. Complex Variables and Applications. 8th ed. Boston, MA: McGraw-Hill Higher Education. +Derrick, W.R. 1984. Complex Analysis and Applications. 2nd ed. Belmont, CA: Wadsworth International Group. + +Saff, E.B., and A.D. Snider. 2003. Fundamentals of Complex Analysis with Applications to Engineering and Science. 3rd ed. Upper Saddle River, NJ: Prentice Hall. + +Steven G. Krantz, Professor of Mathematics, Washington University in St. Louis, One Brookings Drive, St. Louis, MO 63130; sk@math.wustl.edu. + +Krantz, Steven G. 2008. A Guide to Complex Variables. Washington, DC: Mathematical Association of America; xviii+182 pp, $49.95 ($39.95 for MAA members). ISBN 978-0-88385-338-2. + +Frequent reviewer Steven Krantz has written over 50 books, and complex analysis is one of his core areas. This, his most recent book, can be described as a concise course on complex analysis or, more accurately, a summary of such a course. However, though it is not unique in being a concise overview, I find it surprisingly easy to read. + +The cover suggests that the book is useful to undergraduates. As a rule, I find what is written on the covers of math books is often fiction of the fantasy category; but in this case, it is absolutely correct. The book is also a worthy aid for the graduate student preparing for qualifying exams. + +Moreover, the cover says that this is MAA Guides #1. 
If later Guides live up to this precedent, I am keenly interested in them. I predict this slim volume will find its way into many institutional and personal libraries.

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com.

Strang, Gilbert. 2007. Computational Science and Engineering. Wellesley, MA: Wellesley-Cambridge Press; xi + 716 pp, $90. ISBN 978-0-9614088-1-7.

My second review for this journal [1986] was of Gilbert Strang's Introduction to Applied Mathematics (hereafter IAM). I have never been too happy with that review, where I said that it is a "wonderful book." True enough; but more appropriately, it is an important book, as is the book reviewed here, Computational Science and Engineering (hereafter CSE).

CSE is—and is not—a second edition of IAM. Apparently, it is the result of more than 20 years of Strang teaching his favorite course at MIT, presumably out of IAM. Since CSE does not contain everything in IAM and also contains topics not in IAM, it is a different text. CSE contains Strang's further ruminations on the nature of applied mathematics, and I view it as the superior text, but some individuals might prefer IAM. To some extent, either book represents Strang's philosophy of teaching applied mathematics—that we need a new approach—but this conviction is much more explicit in CSE.

In particular, Strang believes that we should focus on both modeling and computation. Many books are about one or the other, and he feels that applied mathematics is both. Furthermore, Strang believes that applied problems tend to have a common structure, and Chapter 2 is devoted to illustrating this principle through a wide variety of problems.

In my review of IAM, I tried to give an idea of the range of topics without enumerating the contents.
CSE has the same difficulty: Enumerating the topics is tedious, but the titles of the chapters are informative (though listing them does not do justice to the sheer range of content):

1. Applied Linear Algebra
2. A Framework for Applied Mathematics
3. Boundary Value Problems
4. Fourier Series and Integrals
5. Analytic Functions
6. Initial Value Problems
7. Solving Large Systems
8. Optimization and Minimum Principles

Strang suggests that a course designed out of this text might follow the structure that he uses (p. v):

Applied linear algebra
Applied differential equations
Fourier series

I have long been a champion of Strang's books. I have reviewed different editions of two texts on linear algebra, making clear that I think he is the most influential author in linear algebra in the last 50 years. I have heaped high praise on his calculus text in my recent editorial on calculus [Cargal 2008]. I have done this for the same reason that I have championed John Stillwell's books on geometry and algebra. These two authors, as well as a handful of others, write with authority leavened with the great enthusiasm of the born teacher. They are superb pedagogues.

What makes IAM and CSE so important is that they cover a great deal of applied mathematics, and there is nothing in the literature that compares to them. Pedagogical works, as opposed to dry tomes, are simply rarer in applied mathematics than they are in, say, calculus, linear algebra, geometry, and number theory. There are pedagogical works in differential equations and probability. But nothing covers so much applied mathematics with comparable pedagogical skill and acumen.

Like IAM, CSE has a long first chapter that is a summary of applied linear algebra (86 pp in IAM, 97 pp in CSE). Linear algebra is a key to applied mathematics; it is the most important tool after calculus (this apparently is Strang's view). However, the first chapter is definitely a review.
The reader needs to have had a course in linear algebra as well as the usual course in differential equations. These prerequisites are minimal. Courses in probability, numerical analysis, and so on certainly help. Knowledge of physics is a definite plus. These days, there are students of applied mathematics (computer science, statistics, operations research) who are physics-phobic. They would have problems with parts of the book. This necessity of a modicum of prior knowledge of applied mathematics means that the level of the book is for seniors and graduate students. The online comments about IAM are striking in their simplicity: Students who are not prepared despise the book; the others are enamored with it. There is no middle ground. The reader who is prepared should love this book. In particular, engineers and physicists should love this book.

People in industry, too, should love this book. Mathematicians and engineers in industry benefit particularly from a book such as this for a very simple reason. Mathematicians in academia tend to specialize because of the need to publish. However, mathematicians in industry are motivated to generalize. They don't have tenure; often they depend on contracts, so that specializing can limit opportunities to get work. If a book like CSE (or IAM) had been available when I went into industry more than 30 years ago, it would have changed my life; it certainly would have made those first years easier. In fact, one topic that Strang covers very nicely in both books is the Kalman filter, a topic that is very big in industry and that occupied me in my first job.

The most important thing I tell my students is the need to study if they go into industry. This is particularly true if the student has stopped at the bachelor's degree, since a bachelor's degree is essentially a learner's permit.
Few students go to work for national labs (those who do, do not need my advice—I need theirs), which means that on-the-job training is unlikely or superficial. Of people who have technical degrees, only a small portion maintain their technical skills; most simply travel along and forget much of what they learned. People tend to learn or they forget; nobody remains in stasis. In industry, you should take some of your time on the job to study.

Is spending work time studying material that is not clearly related to one's work unethical? Typically, doing so does not create a problem (as long as one gets one's tasks done). However, if your supervisor sees you reading a newspaper, that could create a problem. On the other hand, if you are studying number theory, there is no problem; that number theory has nothing to do with your current job tasks will almost certainly not register. Moreover, the worker who studies number theory will tend to retain competence in differential equations far better than a worker who just lets technical skills dissipate. In fact, those few workers who develop good technical reputations almost always study widely while on the job. Their ability to respond quickly to new problems on the job is a result of having used work time for study rather than only for company tasks. I view this behavior as a survival skill. The fact is, if one "steals" company time to study mathematics and engineering—even topics that have nothing to do with the job—one is far more likely to be promoted because of it than to be reprimanded.

However, the young worker almost always would benefit not only from learning more number theory but—more urgently—from learning a lot more applied mathematics. The undergraduate curriculum can't cover it all. Key core areas are not just physics and differential equations, but probability, numerical analysis, and programming.
For a worker in industry, CSE would be invaluable, and even experienced engineers and mathematicians will be impressed by this book.

Computational Science and Engineering should be in the library of every applied mathematician, not to mention engineers. As a textbook, it is well-suited for a senior or graduate course in applied mathematics.

# References

Cargal, J.M. 1986. Review of Strang [1986]. The UMAP Journal 7 (4): 364-365.
Cargal, J.M. 2008. Calculus: Textbooks, aids, and infinitesimals. The UMAP Journal 29 (4): 399-416.
Strang, Gilbert. 1986. Introduction to Applied Mathematics. Wellesley, MA: Wellesley-Cambridge Press.

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com.

Daniel, James W., and Leslie Jane Federer Vaaler. 2009. Mathematical Interest Theory. 2nd ed. Washington, DC: Mathematical Association of America. xviii + 475 pp, $89.95 ($71.95 for MAA members). ISBN 978-0-88385-754-0.

This is an excellent book on interest theory, one of the four books currently recognized by the Society of Actuaries (SOA) [2009] and Casualty Actuarial Society (CAS) [2009] as a basis of study for the interest theory component of their joint Financial Mathematics (FM) exam.

Additionally, the book has unique features that make it stand out among these four books and led me to use it for the Theory of Interest course at my institution prior to official recognition by the SOA and the CAS.
Before discussing the unique features, I point out that the book has many common features of good textbooks:

- abundant worked-out examples,
- ample expository discussions,
- numerous exercises,
- important formulae highlighted with boxes,
- a writing emphasis, consisting of several dozen end-of-chapter exercises encouraging students to practice writing, as well as
- a flexible presentation allowing students without a calculus background to "skip" the few calculus-based sections.

To appreciate properly the unique features of the book, let us first introduce some (light) notation. A deposit of 1 at time $t = 0$ in a bank with a constant interest rate of $i$ compounded annually will accumulate to $1 + i$ at time $t = 1$, to $(1 + i)^2$ at time $t = 2$, and more generally to $(1 + i)^n$ at time $t = n$. Defining $v = 1/(1 + i)$, we immediately see that a deposit of $v^n$ at time $t = 0$ accumulates to an amount of 1 at time $t = n$. The cool way of saying this in actuarial lingo is that "a deposit of 1 at time $t = n$ has present value $v^n$."

Using this notation, we might naively attempt to define the course content of the Theory of Interest as the study of finding the equivalent present value of a stream of payments of amounts $a_{i}$ at time $t = i$, $i \in I$, with $I$ some possibly parametrized index set.

The problem with such a formulation is that it does not necessitate a new course. Indeed, using the definition of $v$, we immediately see that the present value of amounts $a_i$ at time $t = i$, $i \in I$, equals $\sum_{i \in I} a_i v^i$. Furthermore, the formulae for sums of geometric series admit simplified closed forms when appropriate collections of $a_i$ are constant.

So we need a more precise definition of the course content of the Theory of Interest.
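Indeed, the naive computation fits in a few lines of code. Here is a minimal sketch of my own (the payment amounts and rate are illustrative, not from the book), checking the brute-force sum $\sum_i a_i v^i$ against the geometric closed form $a(1 - v^n)/i$ for a level annuity:

```python
# Present value of a payment stream: PV = sum of a_i * v^i, with v = 1/(1+i).
def present_value(payments, rate):
    """payments: dict mapping time t (in years) -> amount paid at t."""
    v = 1.0 / (1.0 + rate)
    return sum(amount * v**t for t, amount in payments.items())

# Illustrative values (my own, not the book's): 10 level payments of 100 at 5%.
rate = 0.05
level = {t: 100.0 for t in range(1, 11)}   # 100 paid at t = 1, ..., 10

brute = present_value(level, rate)

# Closed form for a level annuity-immediate: 100 * (1 - v^n) / i.
v = 1.0 / (1.0 + rate)
closed = 100.0 * (1.0 - v**10) / rate

print(round(brute, 2), round(closed, 2))   # both ≈ 772.17
```

That one-line closed form is exactly why the naive definition, by itself, does not justify a separate course.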
I would suggest the following:

The Theory of Interest is the study of finding the equivalent present value of a stream of payments of $a_{i}$ at time $t = i$, $i \in I$, by skillfully using a core set of actuarial functions that facilitate quick and efficient computation of present values.

This revised definition of the Theory of Interest explains why a separate course is needed for it:

- The Theory of Interest studies the interaction between a core set of actuarial functions and real-world financial problems.
- The Theory of Interest is not concerned with just meeting some pedantic standard of conformity with actuarial notation but rather seeks to use existing core actuarial functions skillfully to facilitate quick computation.

These two concerns—interaction and computation—define the subject and also enable us to evaluate the book.

- Computation: Mathematical Interest Theory has a short eight-page Chapter 0 guiding the student in the use of the specially designed Texas Instruments BA-II calculator. Furthermore, many of the several hundred worked-out problems in the book are accompanied by detailed calculator keystroke sequences illustrating computational technique.

Special emphasis is placed on five worksheets available in the BA-II:

- the time-value-of-money (TVM) worksheet,
- the cash-flow worksheet,
- the interest conversion worksheet,
- the amortization worksheet, and
- the bond worksheet.

By emphasizing the computational utility of these worksheets, the authors ensure that students are proficient not only in ordinary calculator functions but also in the special worksheets built into the calculator. Such an emphasis is consistent with the definition of the course content of the Theory of Interest presented above.

A comparison of Mathematical Interest Theory with an approach used by Broverman [2004; 2008] will further enhance appreciation of the Daniel-Vaaler approach.
Broverman wrote an extensive calculator manual [2005] as part of the book. Broverman's idea is to separate the theory and computation. Students can read the book, digest the theory, and then on the side learn any calculator functions that they need. In fact, I always list the Broverman calculator manual on my syllabus, since it is well-written and free. But my students rarely use it. Students want a one-stop textbook; something on the side frequently remains on the side. The Daniel-Vaaler approach is best: Students see inside the text what is needed.

- Interaction: What I particularly like about Mathematical Interest Theory is that many problems are intrinsically multi-stepped, requiring use of several core functions. This is consistent with the definition of the course content of the Theory of Interest as the study of the interaction between core actuarial functions and real-world problems.

- The simplest level of textbook problems is the plug-in.
- Many FM books also give comparison problems, which require comparing two basic subproblems, each of which is similar to a plug-in. For example:

Xiang will pay Dmitry $\$ 800$ immediately and another $\$ 200$ at the end of three years. In return, Dmitry will pay Xiang $\$ K$ in exactly one year and again at the end of exactly two years. Find $K$ if the transaction is based on compound interest at a nominal discount rate of $6\%$ convertible monthly.

—[Daniel and Vaaler 2009, Chapter 2, 107]

- A typical superior Daniel-Vaaler real-world, multi-step problem might be the following (paraphrased):

The $6\%$ annual coupons of a $\$ 3,000$ 10-year par-value bond are reinvested in an account earning $4\%$ annually. Given that the yield on the combined bond-savings-account investment is $5\%$, compute the price and yield of the bond.

—[Daniel and Vaaler 2009, Chapter 6, 297]
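The first of the two quoted problems reduces to a single equation of value at time 0. A quick sketch of my own working (not the authors' solution), assuming the standard convention that a nominal discount rate of 6% convertible monthly means a monthly discount factor of $1 - 0.06/12$:

```python
# Equation of value at t = 0 for the comparison problem (my own working).
# Nominal annual discount rate of 6% convertible monthly gives a monthly
# present-value factor of 1 - 0.06/12 = 0.995.
d12 = 0.06
v = (1 - d12 / 12) ** 12          # one-year present-value factor

# Xiang pays 800 now and 200 at t = 3; Dmitry pays K at t = 1 and t = 2:
#   800 + 200*v^3 = K*(v + v^2)
K = (800 + 200 * v**3) / (v + v**2)
print(round(K, 2))                # ≈ 528.90
```

The bond problem is different in kind: it chains several such steps (accumulating the reinvested coupons, discounting at the combined yield, then solving numerically for the bond's own yield), which is exactly the multi-step structure the authors cultivate.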
We invite the skeptical reader who thinks that the course content of the Theory of Interest can be reduced to the study of geometric sums to attempt to solve this problem.

By providing a multitude of superior problems, Daniel and Vaaler not only familiarize the student with core actuarial functions but also develop the student's skill in studying the interaction between these functions and real-world problems.

Although officially both the SOA and CAS require a second textbook for the derivatives component of the Financial Mathematics exam, many topics—including puts, calls, options, swaps, strips, stocks, and dividends—are also covered in the Daniel and Vaaler book.

Finally, a student solutions manual [Vaaler 2009] contains solutions to all of the odd-numbered exercises.

# References

Broverman, Samuel A. 2004. Mathematics of Investment and Credit. 3rd ed. Winsted, CT: ACTEX Publications. 2008. 4th ed.
______. 2005. Review of calculator functions for the Texas Instruments BA II Plus. http://www.soa.org/files/pdf/FM-23-05.pdf. Reprinted from Broverman [2004].
Casualty Actuarial Society. 2009. Exam 2—August 2009: Financial Mathematics. http://www.casact.org/admissions/syllabus/Exam2Aug09.pdf.
Society of Actuaries. 2009. Exam FM—August 2009: Financial Mathematics. http://soa.org/files/pdf/edu-2009-fall-exam-fm.pdf.
Vaaler, Leslie Jane Federer. 2009. Student Solutions Manual for Mathematical Interest Theory. 2nd ed. Washington, DC: Mathematical Association of America.

Russell Jay Hendel, Mathematics Department, Towson University, Towson, MD 21252; rhendel@towson.edu.

MacKay, David J.C. 2009. Sustainable Energy—Without the Hot Air. Cambridge, UK: UIT Cambridge. xi + 370 pp, $79.95, $49.95 (P). ISBN 978-1-90686001-1, 978-0-9544529-3. Free download of the book at http://www.withouthotair.com/download.html and free 10-page synopsis at http://www.withouthotair.com/synopsis10.pdf.
In his review above, Reviews Editor Cargal laments "physics-phobic" students of applied mathematics. In their favor: They are the mathematically capable among the physics-avoiders in high school and college.

This Journal focuses on mathematical modeling and applications of mathematics. A prime arena for modeling and applications is, of course, physics.

This Journal often publishes sophisticated modeling efforts. Sometimes, though, vast insight can be obtained from basic physics concepts—length, area, volume, mass, density, force, energy, power, scaling, temperature—and fundamental modeling techniques—"back-of-the-envelope" calculation (estimation, rounding, bounding), good data, and sensitivity analysis.

This remarkable book uses just those simple tools to generate a "balance sheet" of energy consumption vs. energy production by sustainable means, for the UK in particular, but with remarks about other countries, too. Author MacKay assesses quantitatively all uses of energy and every conceivable sustainable energy source (you name it—it's definitely here).

The key to comparisons is measuring all energy usage and production in a single standard unit, the kilowatt-hour per day (kWh/d) per person. So, for example, a typical UK car driver uses 40 kWh/d, while "plausible production from on-shore windmills" in the UK is 20 kWh/d per person. MacKay even considers the energy used to produce imported goods.

The first 110 pp are devoted to the current balance sheet (final version on p. 109), the next 140 pp to options for sustainability, including construction costs for renewables and the potential for energy storage. (MacKay's recipe: electrify transport; heat via heat pumps; get electricity from renewables plus perhaps "clean coal" and nuclear.) Then follow 70 pp of "technical" supplements (at the level of high school physics) and 20 pp of data.
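The style of bookkeeping is easy to imitate. A back-of-the-envelope sketch in the book's single unit, using round numbers of my own choosing (50 km of driving per day, 12 km per litre, 10 kWh of chemical energy per litre of petrol, a 0.5 W phone charger) rather than MacKay's exact figures, lands near the quoted 40 kWh/d:

```python
# Everything in a single unit: kWh per day per person.
# All round numbers below are my own assumptions, not MacKay's exact figures.
km_per_day = 50.0       # assumed typical daily driving distance
km_per_litre = 12.0     # assumed fuel economy
kwh_per_litre = 10.0    # approximate chemical energy in a litre of petrol

car_kwh_per_day = km_per_day / km_per_litre * kwh_per_litre
print(round(car_kwh_per_day, 1))   # ≈ 41.7, close to the quoted 40 kWh/d

# The common unit makes tiny savings visible: a 0.5 W charger left plugged
# in all day uses 0.5 W * 24 h = 12 Wh = 0.012 kWh/d, three orders of
# magnitude below the driver's 40 kWh/d.
charger_kwh_per_day = 0.5 * 24 / 1000
print(charger_kwh_per_day)         # 0.012
```

Putting every activity on one scale like this is what lets the balance sheet expose which interventions matter and which are noise.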
Pertinent to the 2009 MCM Cellphone Charger Problem, MacKay pooh-poohs "every little bit helps": "All the energy saved in switching off your charger for one day is used up in one second of car-driving" (p. 68), and

If everyone does a little, we'll achieve only a little. We must do a lot. What's required are big changes in demand and in supply. (p. 114)

MacKay keeps the presentation lively by displaying cumulative results in simple bar graphs, together with numerous photographs and figures.

This book is thorough, authoritative, passionate, exciting, and revolutionary.

Paul J. Campbell, Mathematics and Computer Science, Beloit College, Beloit, WI 53511; campbell@beloit.edu.

Makridakis, Spyros, Robin Hogarth, and Anil Gaba. 2009. Dance with Chance: Making Luck Work for You. Oxford, UK: Oneworld. x + 287 pp, $22.95. ISBN 978-1-85168-697-7.
Devlin, Keith. 2008. The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern. New York: Basic Books. x + 191 pp, $24.95. ISBN 978-0-465-00910-7.

In 2001, Nassim Nicholas Taleb's Fooled by Randomness: The Hidden Role of Chance in Markets and Real Life became a best-seller among business books (the section of bookstores where it was usually placed). I discussed the second edition [Taleb 2004] in my omnibus review of books on investing [Cargal 2005]. Taleb has been a successful professional in finance and is mathematically sophisticated. Oddly enough, he doesn't mention Burton G. Malkiel's A Random Walk Down Wall Street [2003], which I also discussed in my review; it is a great book, which introduced to a general audience (in its original edition in 1973) the concept of a randomly unpredictable market.

Taleb's 2007 success, The Black Swan: The Impact of the Highly Improbable, has as its theme the importance and inevitability of unexpected events. Black swans (or their discovery) are a metaphor for such events, and the term "black swan" has entered common usage.
Another book in the randomness vein, Leonard Mlodinow's The Drunkard's Walk: How Randomness Rules our Lives [2008], has also been a success; the book was awarded the 2008 Robert P. Balles Annual Prize in Critical Thinking, given by the Committee for Skeptical Inquiry.

Now we have Dance with Chance: Making Luck Work for You by Spyros Makridakis et al. Like Taleb's Fooled by Randomness, it is strongly oriented toward financial markets. Its contribution to the doctrine that chance is a huge part of our lives is the claim that often our belief that we control our circumstances is pure illusion and that sometimes we make decisions simply to maintain that illusion. The book relies so much on anecdotes that it reminded me of the best-selling books by Malcolm Gladwell, such as Blink: The Power of Thinking Without Thinking [Gladwell 2005]. In fact, Makridakis et al. recapitulate the theme of Blink and credit Gladwell; similarly, a large section covers the random market as championed by Malkiel and the ideas of Taleb's Black Swan, and those authors are likewise credited.

The lead author, Makridakis, is a statistician, and some statistical topics show up in the book. At one point, the book refers to significant work of his in the early 1980s that shows surprising efficacy for exponential averages. Only at this point did it dawn on me that in the early 1980s I had spent a great deal of time reading works of Makridakis! In my review of books on investing, in fact, I discussed exponential averages.

The books mentioned so far are all very readable—and somewhat redundant. I think they would be of greater interest to the student than to the professor. Mlodinow's The Drunkard's Walk might be something of an exception; it covers a great deal of the history of probability and statistics.

Nonetheless, it must be emphasized to students that books such as these are no substitute for actually studying probability and statistics.
Keith Devlin's The Unfinished Game: Pascal, Fermat, and the Seventeenth-Century Letter that Made the World Modern is not of the same cloth as the others and is not redundant. Devlin is a noted mathematician (logic and set theory) and a top popularizer of mathematics. His book is about a seminal event in the history of probability: the working out of the basic concepts of probability in the correspondence of Fermat and Pascal. He emphasizes that the invention (discovery?) of probability brought about a revolution in thinking; the concept of probability itself was revolutionary. Moreover, he demonstrates how difficult it was to develop these ideas, especially for Pascal, who struggled to keep up with Fermat.

I appreciated the book for filling in details about earlier work by Cardano on probability. Cardano, who may have been the single most important character in the development of algebra in the 16th century, wrote a book on probability that wasn't published until after the work of Pascal and Fermat. However, the manuscript—and Cardano's ideas—may have floated around. Similarly, Devlin fills in the details of Galileo's brief flirtation with probability. There is also a tour of some of the development of probability after its start with Fermat and Pascal. I consider the book a must-have for those interested in the history of probability (and statistics).

# References

Cargal, J.M. 2005. Review of books on investing and finance. The UMAP Journal 27 (1): 81-90.
Gladwell, Malcolm. 2005. Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company.
Malkiel, Burton G. 1973. A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing. New York: Norton. 2003. 8th ed.
Mlodinow, Leonard. 2008. The Drunkard's Walk: How Randomness Rules our Lives. New York: Pantheon.
Taleb, Nassim Nicholas. 2001. Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. New York: Random House. 2004. 2nd ed.
______. 2007.
The Black Swan: The Impact of the Highly Improbable. New York: Random House.

James M. Cargal, Mathematics Department, Troy University—Montgomery Campus, 231 Montgomery St., Montgomery, AL 36104; jmcargal@sprintmail.com.

# The UMAP Journal

# Publisher

COMAP, Inc.

# Executive Publisher

Solomon A. Garfunkel

# ILAP Editor

Chris Arney

Associate Director, Mathematics Division

Program Manager, Cooperative Systems

Army Research Office P.O. Box 12211

Research Triangle Park, NC 27709-2211

david.arney1@arl.army.mil

# On Jargon Editor

Yves Nievergelt

Dept. of Mathematics Eastern Washington Univ.

Cheney, WA 99004

ynievergelt@ewu.edu

# Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@sprintmail.com

# Chief Operating Officer

Laurie W. Aragón

# Production Manager

George W. Ward

# Production Editor

Joyce Barnes

# Distribution

John Tomicek

# Graphic Designer

Daiva Chauhan

# Vol. 30, No. 3

# Editor

Paul J. Campbell

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Chris Arney

Aaron Archer

Ron Barnes

Arthur Benjamin

Robert Bosch

James M. Cargal

Murray K. Clayton

Lisette De Pillis

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Jon Jacobsen

Walter Meyer

Yves Nievergelt

Michael O'Leary

Catherine A. Roberts

John S. Robertson

Philip D. Straffin

J.T. Sutcliffe

Brigham Young Univ.

Army Research Office

AT&T Shannon Res. Lab.

U.
of Houston-Downtn + +Harvey Mudd College + +Oberlin College + +Troy U.-Montgomery + +U. of Wisc.—Madison + +Harvey Mudd College + +Gettysburg College + +COMAP, Inc. + +Calif. State U., Fullerton + +Brigham Young Univ. + +Southern Methodist U. + +Harvey Mudd College + +Adelphi University + +Eastern Washington U. + +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Subscription Rates for 2009 Calendar Year: Volume 30 + +# Institutional Web Membership (Web Only) + +Institutional Web Memberships do not provide print materials. Web memberships allow members to search our online catalog, download COMAP print materials, and reproduce them for classroom use. + +(Domestic) #2930 $467 (Outside U.S.) #2930 $467 + +# Institutional Membership (Print Only) + +Institutional Memberships receive print copies of The UMAP Journal quarterly, our annual CD collection of UMAP Modules, Tools for Teaching, and our organizational newsletter Consortium. + +(Domestic) #2940 $312 (Outside U.S.) #2941 $351 + +# Institutional Plus Membership (Print Plus Web) + +Institutional Plus Memberships receive print copies of the quarterly issues of The UMAP Journal, our annual CD collection of UMAP Modules, Tools for Teaching, our organizational newsletter Consortium, and online membership that allows members to search our online catalog, download COMAP print materials, and reproduce them for classroom use. + +(Domestic) #2970 $615 (Outside U.S.) #2971 $659 + +# For individual membership options visit www.comap.com for more information. + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). 
+ +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Boston, MA and at additional mailing offices. + +Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +Copyright 2009 by COMAP, Inc. All rights reserved. + +Mathematical Contest in Modeling (MCM) $^{\text{®}}$ , High School Mathematical Contest in + +Modeling (HiMCM), and Interdisciplinary Contest in Modeling (ICM) + +are registered trade marks of COMAP, Inc. + +# Vol. 30, No. 3 2009 + +# Table of Contents + +# Publisher's Editorial + +The Faffufnik-Chaim Yankel Effect + +Solomon Garfunkel. 
185 + +About This Issue 188 + +# Special Section on the MCM + +Results of the 2009 Mathematical Contest in Modeling + +Frank Giordano 189 + +Abstracts of the Outstanding Papers 207 + +A Simulation-Based Assessment of Traffic Circle Control + +Christopher Chang, Zhou Fan, and Yi Sun 227 + +One Ring to Rule Them All: The Optimization of Traffic Circles + +Aaron Abromowitz, Andrea Levy, and Russell Melick 247 + +Three Steps to Make the Traffic Circle Go Round + +Zeyuan Allen Zhu, Tianyi Mao, and Yichen Huang 261 + +Pseudo-Finite Jackson Networks and Simulation: A Roundabout Approach to Traffic Control + +Anna Lieb, Anil Damle, and Geoffrey Peterson 281 + +Judges' Commentary: The Outstanding Traffic Circle Papers + +Kelly Black 305 + +Mobile to Mobil: The Primary Energy Costs for Cellular and Landline Telephones +Nevin Brackett-Rozinsky, Katelynn Wilton, and Jason Altieri 313 + +Energy Implications of Cellular Proliferation in the U.S. +Benjamin Coate, Zachary Kopplin, and Nate Landis 333 + +Modeling Telephony Energy Consumption +Amrish Deshmukh, Rudolf Nikolaus Stahl, and Matthew Guay 353 + +America's New Calling +Stephen R. Foster, J. Thomas Rogers, and Robert S. Potter 367 + +Wireless Networks: An Easy Cell +Jeff Bosco, Zachary Ulissi, and Bob Liu 385 + +Judges' Commentary: The Outstanding Cellphone Energy Papers +Marie Vanisko 403 + +Judges' Commentary: The Fusaro Award for the Cellphone Energy Problem +Marie Vanisko and Peter Anspach 409 + +# Publisher's Editorial + +# The Faffufnik-Chaim Yankel Effect + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +175 Middlesex Turnpike, Suite 3B + +Bedford, MA 01730-1459 + +s.garfunkel@mail.comap.com + +In the United States, as in many countries and more global entities, in order to receive funding for a major project, one has to submit a proposal. As anyone who has ever written one knows, writing a proposal is an unnatural act.
Normally literate persons are reduced to using words such as "input" as a verb, as well as "facilitating" and "orientating," and talking about "stakeholders" and "meta-cognition." But large projects often require large budgets, and as painful as the process may be, we write the proposals and fill out the myriad forms required. + +Most of the money that comes from federal sources in the United States for math education projects is given in grants from the National Science Foundation (NSF). NSF uses a peer-review process to determine which projects are funded. Panels of approximately six people are formed to review a set of proposals. The proposals in each panel are graded and compared with the grades from several other panels that are convened at the same time. The programs are ordered by grade, and funding proceeds on that basis. What actually happens is that on a first pass, a number of projects are graded highly enough to be assured of funding; a number are graded so low that they are immediately declined; and there is a group in the middle (said to be "on the bubble") whose fate is decided some time later when the final yearly budget for these programs is negotiated. The criteria for reviewing proposals, as specifically cited in NSF guidelines, are "intellectual merit" and "broader impacts." + +The Consortium for Mathematics and Its Applications (COMAP) has been submitting proposals and administering projects for over 29 years. In the "good old days," if one had a good idea and a good staff of people to carry out that idea, then funding usually depended upon impressing one of + +the program officers who worked at the Foundation. Outside reviews were mostly handled by mail and were considered advisory. The bottom line was that if the NSF program officer thought that a project should be funded, it was. Admittedly, this created something of an old-boy network. 
People and institutions with a good track record of success tended to continue to receive funding, while those who were not yet members of the club had a hard time joining. This has given way to the more overtly democratic process described above where the reviewers' opinions rule. + +It should also be said that if one goes back 20 years or so, most of the principal investigators (PIs) on mathematics education projects were Ph.D. mathematicians who had, so to speak, "given their youth to the devil and were giving their old age to the Lord." In other words, they had taken an interest in mathematics education later in their careers. And, to be honest, many other mathematics educators were persons who originally pursued careers as research mathematicians but were unable to complete their degrees. In any event, the PIs on these projects had extremely strong mathematics backgrounds. + +In the United States at least, this has changed significantly. Mathematics education is now a well-established field unto itself and, in many cases, people highly successful in the field have relatively weak mathematical training. Increasingly, they are the principal investigators on new projects in mathematics education and they are the reviewers. They help decide what projects get funded and what projects don't. And, increasingly, they are responsible for the Faffufnik-Chaim Yankel Effect (FCE). What exactly is the FCE? + +Years ago, a typical review of a COMAP proposal would read, "This is an excellent idea with an excellent staff with an excellent track record; we recommend this project for funding." The FCE refers to more typical current reviews that read, "This is an excellent idea with an excellent staff with an excellent track record. However, we have to recommend against funding because they don't make any reference to the seminal research papers of Faffufnik, nor do they plan to use the statistical protocols of Chaim Yankel." 
The reviewers may very well be students of Faffufnik and/or Chaim Yankel. + +Of course, there are some sour grapes here. I am not a member of the Faffufnik and Chaim Yankel club. And now, as opposed to the good old days, it is the members of this club who get funded. But there is much more to be discussed. There appears to be an underlying assumption that mathematics education projects must proceed in the following way. + +- First, they must be based upon research. Therefore, we heavily quote the results of prior research (see the papers of Faffufnik). +- Then, based upon that research, we make a new research hypothesis and test it with a small number of students. If at all possible, we make this experiment as close to a "gold standard" double-blind medical approach as possible. + +- Then, using certain statistical protocols (see the work of Chaim Yankel), we conclude that there is some measurable effect and write a new proposal to test this effect on a larger population. +- This process is then iterated. + +This is now a necessary condition for funding—independent of content and the strength of the ideas being considered. + +The problem is that while this may very well help to make mathematics education research be seen as more of an established discipline, it is a criterion divorced from classroom practice. And we forget that we separate our efforts in education from the classroom at our peril. There has to be a way for good ideas that hold the promise of increasing student learning to be funded and for good people to work on them. Mathematics education is an art as well as a science, and it cannot be reduced to a set of research protocols and statistical tests and procedures. It is simply not possible to prove that an approach to teaching and learning will be effective before the fact. 
+ +Education, as a scientific discipline, is a young field with an active community focused on R&D—research on learning coupled with the development of new and better curriculum materials. In truth, however, much of the work is better described as D&R—informed and thoughtful development followed by careful analysis of results. It is in the nature of the enterprise that we cannot discover what works before we create the what. Curriculum development, in particular, is best related to an engineering paradigm. To test the efficacy of an approach, we must analyze needs, examine existing programs, build an improved model program, and then test it—in the same way that we build scale models to design a better bridge or building. This kind of iterative D&R leads to new and more effective materials and new pedagogical approaches that better incorporate the growing body of knowledge of cognitive science. + +I wish to be clear. I recognize that Faffufnik has done important research. I recognize that Chaim Yankel's protocols can help quantify our results. We must learn from the past, and theoretical frameworks are important for future work. But we also must recognize that quoting Faffufnik and Chaim Yankel is not a substitute for imagination, creativity, and the application of common sense. The problems of mathematics education are difficult and will require the work of many people over a long period of time. We cannot afford to lose sight of this, even as mathematics education becomes a more-established research discipline. + +# Acknowledgment + +This editorial is adapted from the author's talk at the International Commission on Mathematical Instruction (ICMI) meeting in Rome, Italy, 2008. + +# About the Author + +Solomon Garfunkel, previously of Cornell University and the University of Connecticut at Storrs, has dedicated the last 30 years to research and development efforts in mathematics education. 
He has served as project director for the Undergraduate Mathematics and Its Applications (UMAP) and the High School Mathematics and Its Applications (HiMAP) Projects funded by NSF, and directed three telecourse projects including Against All Odds: Inside Statistics, and In Simplest Terms: College Algebra, for the Annenberg/CPB Project. He has been the Executive Director of COMAP, Inc. since its inception in 1980. Dr. Garfunkel was the project director and host for the series, For All Practical Purposes: Introduction to Contemporary Mathematics. He was the Co-Principal Investigator on the ARISE Project, and is currently the Co-Principal Investigator of the CourseMap, ResourceMap, and WorkMap projects. In 2003, Dr. Garfunkel was Chair of the National Academy of Sciences and Mathematical Sciences Education Board Committee on the Preparation of High School Teachers. + +# About This Issue + +Paul J. Campbell + +Editor + +This issue runs longer than a regular 92-page issue, to more than 200 pages. However, not all of the articles appear in the paper version. Some appear only on the Tools for Teaching 2009 CD-ROM (and at http://www.comap.com for COMAP members), which will reach members and subscribers later and will also contain the entire 2009 year of Journal issues. + +All articles listed in the table of contents are regarded as published in the Journal. The abstract of each appears in the paper version. Paging of the issue runs continuously, including in sequence articles that do not appear in the paper version. So if, say, p. 250 in the paper version is followed by p. 303, your copy is not necessarily defective! The articles on the intervening pages are on the CD-ROM. + +We hope that you find this arrangement agreeable. It means that we do not have to procrusteanize the content to fit a fixed number of paper pages. We might otherwise be forced to select only two or three Outstanding MCM papers to publish. Instead, we continue to bring you the full content. 
+ +# Modeling Forum + +# Results of the 2009 Mathematical Contest in Modeling + +Frank Giordano, MCM Director + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +fgiordano@nps.edu + +# Introduction + +A total of 1,675 teams of undergraduates from hundreds of institutions and departments in 14 countries spent the first weekend in February working on applied mathematics problems in the 25th Mathematical Contest in Modeling. + +The 2009 Mathematical Contest in Modeling (MCM) began at 8:00 P.M. EST on Thursday, February 5 and ended at 8:00 P.M. EST on Monday, February 9. During that time, teams of up to three undergraduates researched, modeled, and submitted a solution to one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problem and data, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. The top papers appear in this issue of The UMAP Journal, together with commentaries. + +In addition to this special issue of The UMAP Journal, this year—for the first time—COMAP has made available a special supplementary "2009 MCM-ICM CD-ROM" containing the press releases for the two contests, the results, the problems, and original versions of the Outstanding papers that appear here in edited form. Information about ordering the CD-ROM is at http://www.comap.com/product/?idx=1025 or from (800) 772-6627. + +Results and winning papers from the first 24 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2008). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains the 20 problems used in the first 10 years of the contest and a winning paper for each year. That volume and the special MCM issues of the Journal for the last few years are available from COMAP.
The 1994 volume is also available on COMAP's special Modeling Resource CD-ROM. Also available is The MCM at 21 CD-ROM, which contains the 20 problems from the second 10 years of the contest, a winning paper from each year, and advice from advisors of Outstanding teams. These CD-ROMs can be ordered from COMAP at http://www.comap.com/product/cdrom/index.html. + +This year, the two MCM problems represented significant challenges. The author of Problem A, Daniel Solow of Case Western Reserve University, Cleveland, OH, was also one of the final judges. His problem, "Designing a Traffic Circle," asked teams to use a model to determine how best to control traffic flow in, around, and out of a circle, clearly stating the objective(s) and summarizing the conditions for use of various types of traffic-control methods. Problem B, "Energy and the Cellphone," was written by Joe Malkevitch of York College in Jamaica, NY. What is the long-term consequence of large-scale usage of cellphones in terms of electricity use by the battery and the charger? Teams were asked to take into account the fact that cellphones last much less time (they get lost and break) than phones for landlines and to suggest an optimal way (in terms of an energy perspective) to provide phone service to a "Pseudo U.S.," a country of 300 million people with about the same economic status as the current U.S. but with no landlines or cellphones. + +In addition to the MCM, COMAP also sponsors the Interdisciplinary Contest in Modeling (ICM) and the High School Mathematical Contest in Modeling (HiMCM). The ICM runs concurrently with MCM and for the next several years will offer a modeling problem involving an environmental topic. Results of this year's ICM are on the COMAP Website at http://www.comap.com/undergraduate/contest; results and Outstanding papers appeared in Vol. 30 (2009), No. 2. The HiMCM offers high school students a modeling opportunity similar to the MCM. 
Further details about the HiMCM are at http://www.comap.com/highschool/contest. + +# 2009 MCM Statistics + +1,675 teams participated: +- 7 high school teams (<1%) +- 350 U.S. teams (21%) +- 1,325 foreign teams (79%), from Australia, Canada, China, Finland, Germany, Hong Kong, Hungary, Indonesia, Ireland, Mexico, Singapore, South Africa, United Kingdom +- 9 Outstanding Winners (<1%) +- 294 Meritorious Winners (18%) +- 298 Honorable Mentions (18%) +- 1,074 Successful Participants (63%) + +# Problem A: Designing a Traffic Circle + +Many cities and communities have traffic circles—from large ones with many lanes in the circle (such as at the Arc de Triomphe in Paris and the Victory Monument in Bangkok) to small ones with one or two lanes in the circle. Some of these traffic circles position a stop sign or a yield sign on every incoming road, which gives priority to traffic already in the circle; some position a yield sign in the circle at each incoming road to give priority to incoming traffic; and some position a traffic light on each incoming road (with no right turn allowed on a red light). Other designs may also be possible. + +The goal of this problem is to use a model to determine how best to control traffic flow in, around, and out of a circle. State clearly the objective(s) you use in your model for making the optimal choice as well as the factors that affect this choice. Include a Technical Summary of not more than two double-spaced pages that explains to a traffic engineer how to use your model to help choose the appropriate flow-control method for any specific traffic circle. That is, summarize the conditions under which each type of traffic-control method should be used. When traffic lights are recommended, explain a method for determining how many seconds each light should remain green (which may vary according to the time of day and other factors). Illustrate how your model works with specific examples.
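The comparison of control methods that Problem A asks for is commonly attacked by simulation. Purely as an illustrative sketch, and not any contest team's actual model, the toy script below simulates a single yield-controlled entry to a circle under an assumed probabilistic gap model; the function name `simulate_yield_entry` and both rate parameters are invented for this example.

```python
import random

def simulate_yield_entry(arrival_rate, circulating_rate, sim_time=10_000, seed=1):
    """Toy time-step model of one yield-controlled entry to a traffic circle.

    Each second a car arrives at the entry with probability arrival_rate,
    and the conflict point in the circle is occupied with probability
    circulating_rate (a crude stand-in for circulating flow). The car at
    the head of the queue enters only when the conflict point is free.
    Returns the average delay, in seconds, of the cars that entered.
    """
    random.seed(seed)
    queue = []            # arrival times of cars still waiting to enter
    total_delay, served = 0.0, 0
    for t in range(sim_time):
        if random.random() < arrival_rate:
            queue.append(t)
        if queue and random.random() >= circulating_rate:
            total_delay += t - queue.pop(0)   # front car finds a gap and enters
            served += 1
    return total_delay / served if served else float("inf")

# Delay at a yield-controlled entry grows sharply with circulating traffic;
# locating that threshold is the kind of question a team's model must answer.
light = simulate_yield_entry(arrival_rate=0.2, circulating_rate=0.3)
heavy = simulate_yield_entry(arrival_rate=0.2, circulating_rate=0.7)
print(f"average delay: {light:.2f} s (light circle), {heavy:.2f} s (heavy circle)")
```

Rerunning the same experiment with a fixed-cycle traffic-light service pattern in place of random gaps, and comparing the resulting delay curves, is one way to summarize "the conditions under which each type of traffic-control method should be used."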
+ +# Problem B: Energy and the Cellphone + +This question involves the "energy" consequences of the cellphone revolution. Cellphone usage is mushrooming, and many people are using cellphones and giving up their landline telephones. What is the consequence of this in terms of electricity use? Every cellphone comes with a battery and a recharger. + +# Requirement 1 + +Consider the current U.S., a country of about 300 million people. Estimate from available data the number $H$ of households, with $m$ members each, that in the past were serviced by landlines. Now, suppose that all the landlines are replaced by cellphones; that is, each of the $m$ members of the household has a cellphone. Model the consequences of this change for electricity utilization in the current U.S., both during the transition and during the steady state. The analysis should take into account the need for charging the batteries of the cellphones, as well as the fact that cellphones do not last as long as landline phones (for example, the cellphones get lost and break). + +# Requirement 2 + +Consider a second "Pseudo U.S."—a country of about 300 million people with about the same economic status as the current U.S. However, this emerging country has neither landlines nor cellphones. What is the optimal way of providing phone service to this country from an energy perspective? Of course, cellphones have many social consequences and uses that landline phones do not allow. A discussion of the broad and hidden consequences of having only landlines, only cellphones, or a mixture of the two is welcomed. + +# Requirement 3 + +Cellphones periodically need to be recharged. However, many people always keep their recharger plugged in. Additionally, many people charge their phones every night, whether they need to be recharged or not. Model the energy costs of this wasteful practice for a Pseudo U.S. based on your answer to Requirement 2. Assume that the Pseudo U.S. supplies electricity from oil. 
Interpret your results in terms of barrels of oil. + +# Requirement 4 + +Estimates vary on the amount of energy that is used by various recharger types (TV, DVR, computer peripherals, and so forth) when left plugged in but not charging the device. Use accurate data to model the energy wasted by the current U.S. in terms of barrels of oil per day. + +# Requirement 5 + +Now consider population and economic growth over the next 50 years. How might a typical Pseudo U.S. grow? For each 10 years for the next 50 years, predict the energy needs for providing phone service based upon your analysis in the first three requirements. Again, assume electricity is provided from oil. Interpret your predictions in terms of barrels of oil. + +# The Results + +The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at either Appalachian State University (Traffic Circle Problem) or at the National Security Agency (Cellphone Energy Problem). At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree, a third judge evaluated the paper. + +Additional Regional Judging sites were created at the U.S. Military Academy and at the Naval Postgraduate School to support the growing number of contest submissions. + +Final judging took place at the Naval Postgraduate School, Monterey, CA. The judges classified the papers as follows: + +
| Problem | Outstanding | Meritorious | Honorable Mention | Successful Participation | Total |
|---|---:|---:|---:|---:|---:|
| Traffic Circle Problem | 4 | 192 | 165 | 763 | 1,124 |
| Cellphone Energy Problem | 5 | 102 | 133 | 311 | 551 |
| Total | 9 | 294 | 298 | 1,074 | 1,675 |
+ +The 9 papers that the judges designated as Outstanding appear in this special issue of The UMAP Journal, together with commentaries. We list those teams here and the Meritorious teams (and advisors) at the end of this report; the list of all participating schools, advisors, and results is in the Appendix. + +# Outstanding Teams + +Institution, Location, Advisor, and Team Members + +# Traffic Circle Papers + +"A Simulation-Based Assessment of Traffic Circle Control" + +Harvard University + +Cambridge, MA + +Clifford H. Taubes + +Christopher Chang + +Zhou Fan + +Yi Sun + +"One Ring to Rule Them All: The Optimization of Traffic Circles" + +Harvey Mudd College + +Claremont, CA + +Susan E. Martonosi + +Aaron Abromowitz + +Andrea Levy + +Russell Melick + +"Three Steps to Make the Traffic Circle Go Round" + +Tsinghua University + +Beijing, China + +Jun Ye + +Zeyuan Zhu + +Tianyi Mao + +Yichen Huang + +"Pseudo-Finite Jackson Networks and Simulation: A Roundabout Approach to Traffic Control" + +University of Colorado + +Boulder, CO + +Anne Dougherty + +Anna Lieb + +Anil Damle + +Geoffrey Peterson + +# Cellphone Energy Papers + +"Mobile to Mobil: The Primary Energy Costs for Cellular and Landline Telephones" + +Clarkson University + +Potsdam, NY + +Joseph Skufca + +Nevin Brackett-Rozinsky + +Katelynn Wilton + +Jason Altieri + +"Energy Implications of Cellular Proliferation in the U.S." + +College of Idaho + +Caldwell, ID + +Michael P. Hitchman + +Benjamin Coate + +Zachary Kopplin + +Nate Landis + +"Modeling Telephony Energy Consumption" + +Cornell University + +Ithaca, NY + +Alexander Vladimirsky + +Amrish Deshmukh + +Rudolf Nikolaus Stahl + +Matthew Guay + +"America's New Calling" + +Southwestern University + +Georgetown, TX + +Richard Denman + +Stephen R. Foster + +J. Thomas Rogers + +Robert S.
Potter + +"Wireless Networks: An Easy Cell" + +University of Delaware + +Newark, DE + +Louis Rossi + +Jeff Bosco + +Zachary Ulissi + +Bob Liu + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, recognized the teams from the University of Colorado-Boulder (Traffic Circle Problem) and Cornell University (Cellphone Energy Problem) as INFORMS Outstanding teams and provided the following recognition: + +- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor; +- a check in the amount of $300 to each team member; +- a bronze plaque for display at the team's institution, commemorating their achievement; +- individual certificates for team members and faculty advisor as a personal commemoration of this achievement; +- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS society newsletter. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. The teams were from Harvard University (Traffic Circle Problem) and Southwestern University (Cellphone Energy Problem). Each of the team members was awarded a $300 cash prize, and the teams received partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Denver, CO in July. Their schools were given a framed hand-lettered certificate in gold leaf. + +The Mathematical Association of America (MAA) designated one Outstanding North American team from each problem as an MAA Winner. The teams were from Harvey Mudd College (Traffic Circle Problem) and Clarkson University (Cellphone Energy Problem).
With partial travel support from the MAA, the teams presented their solution at a special session of the MAA Mathfest in Portland, OR in August. Each team member was presented a certificate by an official of the MAA Committee on Undergraduate Student Activities and Chapters. + +# Ben Fusaro Award + +One Meritorious or Outstanding paper was selected for each problem for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded for the sixth time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in Vol. 25 (3) (2004): 195-196. The Ben Fusaro Award winners were the + +University of Iowa (Traffic Circle Problem) and Lawrence Technological University (Cellphone Energy Problem). + +# Judging + +Director + +Frank R. Giordano, Naval Postgraduate School, Monterey, CA + +Associate Director + +William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA + +# Traffic Circle Problem + +Head Judge + +Marvin S. Keener, Executive Vice-President, Oklahoma State University, Stillwater, OK + +Associate Judges + +William C. Bauldry, Chair, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC (Head Triage Judge) + +Kelly Black, Mathematics Dept., Union College, Schenectady, NY + +Karen D. Bolinger, Mathematics Dept., Clarion University of Pennsylvania, Clarion, PA (SIAM Judge) + +Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY + +J. Douglas Faires, Youngstown State University, Youngstown, OH + +Ben Fusaro, Dept. of Mathematics, Florida State University, Tallahassee, FL + +Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC (Problem Author) + +Steve Horton, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY (MAA Judge) + +Mario Juncosa, RAND Corporation, Santa Monica, CA (retired) + +Michael Moody, Olin College of Engineering, Needham, MA (SIAM Judge) + +John L. 
Scharf, Mathematics Dept., Carroll College, Helena, MT (Ben Fusaro Award Judge) + +Dan Solow, Mathematics Dept., Case Western Reserve University, Cleveland, OH (INFORMS Judge) + +Michael Tortorella, Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ + +Richard Douglas West, Francis Marion University, Florence, SC + +Dan Zwillinger, Raytheon Company, Sudbury, MA + +# Cellphone Energy Problem + +Head Judge + +Maynard Thompson, Mathematics Dept., University of Indiana, Bloomington, IN + +Associate Judges + +Peter Anspach, National Security Agency, Ft. Meade, MD (Head Triage Judge) + +Jim Case (SIAM Judge) + +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Peter Olsen, Johns Hopkins Applied Physics Laboratory, Baltimore, MD + +David H. Olwell, Naval Postgraduate School, Monterey, CA (INFORMS Judge) + +Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD (SIAM Judge) + +Marie Vanisko, Dept. of Mathematics, Carroll College, Helena, MT (Ben Fusaro Award Judge) + +# Regional Judging Session at U.S. Military Academy + +Head Judges + +Patrick J. Driscoll, Dept. of Systems Engineering, and + +Steve Horton, Dept. of Mathematical Sciences, + +United States Military Academy (USMA), West Point, NY + +Associate Judges + +Tim Elkins, Dept. of Systems Engineering, USMA + +Michael Jaye, Dept. of Mathematical Sciences, USMA + +Darrall Henderson, Sphere Consulting, LLC + +Steve Horton, Dept. of Mathematical Sciences, USMA + +Tom Meyer, Dept. of Mathematical Sciences, USMA + +Scott Nestler, Dept. of Mathematical Sciences, USMA + +# Regional Judging Session at Naval Postgraduate School + +Head Judge + +William P. Fox, Dept. 
of Defense Analysis, and Frank Giordano, Naval Postgraduate School (NPS), Monterey, CA + +Associate Judges + +Greg Mislik, Matt Boensel, and Pete Gustitis + +—all from the Naval Postgraduate School, Monterey, CA + +# Triage Session for Traffic Circle Problem + +Head Triage Judge + +William C. Bauldry, Chair, Dept. of Mathematical Sciences, + +Appalachian State University, Boone, NC + +Associate Judges +Jeffry Hirst, Rick Klima, Mark Ginn, and Tracie McLemore Salinas + +—all from Dept. of Mathematical Sciences, Appalachian State University, Boone, NC + +# Triage Session for Cellphone Energy Problem + +Head Triage Judges + +Peter Anspach, National Security Agency (NSA), Ft. Meade, MD +Jim Case + +Associate Judges + +Other judges from inside and outside NSA, who wish not to be named. + +# Sources of the Problems + +The Traffic Circle Problem was contributed by Daniel Solow (Case Western Reserve University), who was also one of the final judges, and the Telephone Energy Problem by Joe Malkevitch (York College of CUNY). + +# Acknowledgments + +Major funding for the MCM is provided by the National Security Agency (NSA) and by COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes. + +We also thank for their involvement and support the MCM judges and MCM Board members for their valuable and unflagging efforts, as well as + +- Two Sigma Investments. (This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly automated trading technologies. For more information about Two Sigma, please visit http://www.twosigma.com.) 
+ +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each paper here is the result of undergraduates working on a problem over a weekend. Editing (and usually substantial cutting) has taken place; minor errors have been corrected, wording altered for clarity or economy, and style adjusted to that of The UMAP Journal. The student authors have proofed the results. Please peruse their efforts in that context. + +To the potential MCM Advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP uses mathematical tools to explore real-world problems. It serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens. + +# Meritorious Teams + +Designations of departments named Mathematics, Mathematical Sciences, Mathematics and Computer Science, or the like are omitted. 

# Traffic Circle Problem (192 teams)

Beihang University, Beijing, China (Peng Lin Ping)
Beihang University, Advanced Engineering, Beijing, China (Wei Feng)
Beijing Institute of Technology, Beijing, China (Xue-Wen Li)
Beijing Institute of Technology, Beijing, China (Gui-Feng Yan)
Beijing Institute of Technology, Beijing, China (Chun-guang Xiong)
Beijing Institute of Technology, Beijing, China (Bing-Zhao Li)
Beijing Jiaotong University, Beijing, China (Bingtuan Wang)
Beijing Jiaotong University, Beijing, China (Peng Cao)
Beijing Language and Culture University, Computer Science, Beijing, Q
Beijing Language and Culture University, Computer Science, Beijing, Q
Beijing Normal University, Beijing, China (Qing He)
Beijing Normal University, Beijing, China (Xianghui Shi)
Beijing University of Posts and Telecommunications, Computer Science, Beijing, China (Wenbo Zhang)

Beijing University of Posts and Telecommunications, Electronic Engineering, Beijing, China (Qing Zhou)

Beijing University of Posts and Telecommunications, Electronic Engineering, Beijing, China (Jianhua Yuan)

Bethel University, Arden Hills, MN (Nathan M. Gossett) (two teams)

Bucknell University, Lewisburg, PA (Nathan C. Ryan)

Carnegie Mellon University, Pittsburgh, PA (Dale J. Winter)

Carroll College, Natural Sciences, Helena, MT (Anthony Szpilka)

Carroll College, Mathematics, Engineering, and Computer Science, Helena, MT (Jack Oberweiser)

Central China Normal University, Mathematics and Statistics Department, Wuhan, Hubei, China (Bo Li)

Central South University, Metallurgical Science and Engineering, Changsha, Hunan, China (Muzhou Hou)

Central South University, Mathematics and Applied Mathematics, Changsha, Hunan, China (Zheng Zhoushun)

Central University of Finance and Economics, Applied Mathematics, Beijing, China (Huiqing Huang)

Central Washington University, Ellensburg, WA (James W.
Bisgard)

Chengdu University of Technology, Information Management, Chengdu, Sichuan, China (YouHua Wei)

China University of Petroleum, Dongying, Shandong, China (Xinhai Liu)

China University of Petroleum-Beijing, Mathematics and Physics, Beijing, China (Ling Zhao)

Chinese University of Hong Kong, Physics, Hong Kong, China (Ming Chung Chu)

City University of Hong Kong, Hong Kong, China (Jonathan J. Wylie)

Clarion University, Clarion, PA (David M. Hipfel)

Clarkson University, Computer Science, Potsdam, NY (Kathleen R. Fowler) (two teams)

Coe College, Cedar Rapids, IA (Jonathan White)

College of Charleston, Charleston, SC (William G. Mitchener) (two teams)

Colorado College, Colorado Springs, CO (Amelia Taylor)

Cornell University, Ithaca, NY (Alexander Vladimirsky)

Dalian Maritime University, Dalian, Liaoning, China (Naxin Chen)

Dalian Maritime University, Dalian, Liaoning, China (Xiangpeng Kong) (two teams)

Dalian University of Technology, Software, Dalian, Liaoning, China (Ning Ding)

Dalian University of Technology, Software, Dalian, Liaoning, China (Tie Qiu)

Dalian University of Technology, Applied Mathematics, Dalian, Liaoning, China (Mingfeng He)

Dalian University of Technology, Applied Mathematics, Dalian, Liaoning, China (Qiuhui Pan)

Dalian University of Technology, Innovation Experiment, Dalian, Liaoning, China (Lin Feng)

Davidson College, Davidson, NC (Tim Chartier)

East China Normal University, Statistics and Actuarial Science, Shanghai, China (Shujin Wu)

Eastern Michigan University, Ypsilanti, MI (Andrew M.
Ross) + +Fudan University, Shanghai, China (Zhijie Cai) + +Fujian Normal University, Fuzhou, Fujian, China (Qinghua Chen) + +Fujian Normal University, Fuzhou, Fujian, China (Changfeng Ma) + +Guangdong University of Business Studies, Guangzhou, Guangdong, China (Zigui Xiang) + +Hangzhou Dianzi University, Information and Mathematical Science, Hangzhou, Zhejiang, China (Hao Shen) + +Harbin Engineering University, Science, Harbin, Heilongjiang, China (Zhenbin Gao) + +Harbin Engineering University, Science, Harbin, Heilongjiang, China (Dongqi Sun) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Guofeng Fan) (two teams) + +Harbin Institute of Technology, Computer Science, Harbin, Heilongjiang, China (Zheng Kuang) + +Harbin Institute of Technology, Automatic Measurement and Control, Harbin, Heilongjiang, China (Limin Zou) + +Harbin Institute of Technology, Weihai, Shandong, China (Rongning Qu) + +Harbin University of Science and Technology, Harbin, Heilongjiang, China (Fengqiu Liu) + +Harvard University, Engineering and Applied Sciences, Cambridge, MA (Michael Brenner) + +Harvey Mudd College, Claremont, CA (Rachel Levy) + +Harvey Mudd College, Claremont, CA (Francis Su) + +Huazhong University of Science and Technology, Mathematics and Statistics, Wuhan, Hubei, China (Zhihong Lu) + +Jiangsu University, Zhenjiang, Jiangsu, China (Yang Hong Lin) + +Jilin University, Changchun, Jilin, China (Yongkui Zou) + +Lawrence Technological University, Southfield, MI (Ruth G. Favro) + +Lawrence University, Appleton, WI (Bruce H. Pourciau) + +Linfield College, McMinnville, OR (Martha E. VanCleave) + +Luther College, Computer Science, Decorah, IA (Steve Hubbard) + +Macquarie University, North Ryde, NSW, Australia (Xuan Duong) + +McGill University, Mathematics and Statistics, Montreal, QC, Canada (Bruce Shepherd) + +MIT, Cambridge, MA (Martin Zdenek Bazant) + +Mount St. 
Mary's University, Emmitsburg, MD (Brian Heinold)

Nanjing University, Computer Science and Technology, Nanjing, Jiangsu, China (Zhengxing Sun)

Nanjing University, Nanjing, Jiangsu, China (Huang Wei Hua)

Nanjing University of Posts and Telecommunications, Mathematics and Physics, Nanjing, Jiangsu, China (Jun Ye)

National University of Defense Technology, Mathematics and System Science, Changsha, Hunan, China (Ziyang Mao)

National University of Defense Technology, Department of Mathematics and System Science, Changsha, Hunan, China (Mengda Wu)

National University of Singapore, Singapore (Hwee Huat Tan)

National University of Singapore, Statistics and Applied Probability, Singapore (Wei-Liem Loh)

North Carolina School of Science and Mathematics, Durham, NC (Daniel J. Teague) (two teams)

Northeastern University, Information Science and Engineering, Shenyang, Liaoning, China (Dali Chen)

Northeastern University, System Science, Shenyang, Liaoning, China (Xuefeng Zhang)

Northwest University, Xi'an, Shaanxi, China (Liantang Wang)

Oklahoma State University, Stillwater, OK (Benny Evans)

Pacific Lutheran University, Tacoma, WA (Mei Zhu)

Pacific University, Forest Grove, OR (Christine Guenther)

Peking University, Computer Science, Beijing, China (Bo Peng)

Peking University, Beijing, China (Shanjun Lin)

Peking University, Beijing, China (Xiang Ma) (two teams)

Peking University, Beijing, China (Xufeng Liu)

Peking University, Probability and Statistics, Beijing, China (Minghua Deng)

People's Liberation Army University of Science and Technology, Command Automation, Nanjing, Jiangsu, China (Jinren Shen)

Princeton University, Princeton, NJ (Ingrid Daubechies)

Quanzhou Normal University, Quanzhou, Fujian, China (Xiyang Yang)

Renmin University of China, Information, Beijing, China (Qingcai Zhang)

Renmin University of China, Information, Beijing, China (Yong Lin)

Rensselaer Polytechnic Institute, Troy, NY (Donald A. Drew)

Xi'an Communication Institute, Xi'an, Shaanxi, China (Xinshe Qi)

Seattle Pacific University, Seattle, WA (Wai Lau)

Shandong University, Jinan, Shandong, China (Hualin Huang)

Shandong University, Jinan, Shandong, China (Tongchao Lu)

Shandong University, Jinan, Shandong, China (Xiaoxia Rong)

Shandong University, Mathematics and System Sciences, Jinan, Shandong, China (Lu Lin)

Shandong University, Computer Science, Jinan, Shandong, China (Jun Ma)

Shandong University, Jinan, Shandong, China (Jingtao Shi)

Shandong University, Computer Science and Technology, Jinan, Shandong, China (Xing Dong)

Shandong University of Science and Technology, Information Science and Engineering, Qingdao, Shandong, China (Xinzeng Wang) (two teams)

Shandong University, Jinan, Shandong, China (Tiande Zhang)

Shanghai Jiaotong University, Shanghai, China (Baorui Song)

Shanghai Jiaotong University, Shanghai, China (Ershun Pan)

Shanghai University, Shanghai, China (Wei Huang)

Shanghai University of Finance and Economics, Shanghai, China (Yibo Zhu)

Shijiazhuang Railway Institute, Mechanical Engineering, Shijiazhuang, Hebei, China (Yongliang Wang)

Shippensburg University, Shippensburg, PA (Paul T. Taylor)

Sichuan University, Chengdu, Sichuan, China (HuiLei Han)

Sichuan University, Chengdu, Sichuan, China (Yingyi Tan) (two teams)

Sichuan University, Chengdu, Sichuan, China (Hai Niu)

Sichuan University, Chengdu, Sichuan, China (Jie Zhou)

Siena Heights University, Adrian, MI (Jeffrey C. Kallenbach)

Simon Fraser University, Burnaby, BC, Canada (Nilima Nigam) (two teams)

Slippery Rock University, Slippery Rock, PA (Richard J.
Marchand) (two teams) + +South China Normal University, Guangzhou, Guangdong, China (Shaohui Zhang) + +South Fort Myers High School, Fort Myers, FL (Johnny Lee Jones) + +Southeast University, Nanjing, Jiangsu, China (Enshui Chen) + +Southeast University, Nanjing, Jiangsu, China (Zhiqiang Zhang) + +Southeast University, Nanjing, Jiangsu, China (Xiang Yin) + +Southeast University, Nanjing, Jiangsu, China (Liyan Wang) + +Southwestern University of Finance and Economics, International Business School, Chengdu, Sichuan, China (Dai Dai) + +Southwestern University of Finance and Economics, Mathematical Economics, Chengdu, Sichuan, China (Feng Xu) + +Stanford University, Institute for Computational and Mathematical Engineering (iCME), Stanford, CA (Walter Murray) + +Tsinghua University, Industrial Engineering, Beijing, China (Lei Zhao) + +Tsinghua University, Beijing, China (Heng Liang) + +Tsinghua University, Beijing, China (Chunxiong Zheng) + +Tufts University, Medford, MA (Scott MacLachlan) + +University of North Carolina-Chapel Hill, Chapel Hill, NC (Sarah A. Williams) + +University of Electronic Science and Technology of China, Information and Computation Science, Chengdu, Sichuan, China (Qin Siyi) (two teams) + +University of Adelaide, Adelaide, SA, Australia (Tony J. Roberts) + +University of Arizona, Tucson, AZ (Paul F. Dostert) + +University of California-Merced, Merced, CA (Arnold D. Kim) + +University of Colorado-Boulder, Applied Mathematics, Boulder, CO (Bengt Fornberg) + +University of Iowa, Iowa City, IA (Joe Eichholz) + +University of Iowa, Iowa City, IA (Stephen Welch) + +University of Michigan-Dearborn, Mathematics and Statistics, Dearborn, MI (Joan C. Remski) + +University of Minnesota-Duluth, Mathematics and Statistics, Duluth, MN (Bruce B. Peckham) + +University of Pittsburgh, Pittsburgh, PA (Jonathan Rubin) + +University of Puget Sound, Tacoma, WA (Michael Z. 
Spivey) (two teams) + +University of Science and Technology of China, Electronic Engineering and Information Science, Hefei, Anhui, China (Bo Pang) + +University of Science and Technology of China, Hefei, Anhui, China (Yige Ding) + +University of Science and Technology of China, Special Class for the Gifted Young, Hefei, Anhui, China (Yangyang Cheng) + +University of Toronto at Scarborough, Scarborough, ON, Canada (Paul S. Selick) + +University of Washington, Seattle, WA (James Allen Morrow) + +University of Washington, ACMS, Seattle, WA (Anne Greenbaum) (two teams) + +University of Western Ontario, London, ON, Canada (Allan B. MacIsaac) + +Virginia Tech, Blacksburg, VA (Henning S. Mortveit) + +Wake Forest University, Winston Salem, NC (Jennifer B. Erway) + +Wesleyan College, Macon, GA (Joseph A. Iskra) + +Western Carolina University, Cullowhee, NC (Erin K. McNelis) + +Western Washington University, Bellingham, WA (Tjalling Ypma) + +Western Washington University, Bellingham, WA (Saim Ural) + +Willamette University, Salem, OR (Cameron W. McLeman) + +Wuhan University of Technology, Statistics, Wuhan, Hubei, China (Mao Shuhua) + +Xavier University, Cincinnati, OH (Bernd E. Rossa) + +Xi'an Jiaotong University, Xi'an, Shaanxi, China (Huanqin Li) + +Xi'an Jiaotong University, Xi'an, Shaanxi, China (Lizhou Wang) + +Xi'an Jiaotong University, Xi'an, Shaanxi, China (Zhuosheng Zhang) + +Xi'an Jiaotong-Liverpool University, Financial Mathematics, Suzhou, Jiangsu, China (Miaoxin Yao) + +Xi'an Jiaotong-Liverpool University, Computer Science, Suzhou, Jiangsu, China (Jingming Guo) + +Xidian University, Xi'an, Shaanxi, China (Yue Song) + +Xidian University, Xi'an, China (Jimin Ye) + +Xuzhou Institute of Architectural Technology, Foundation, Xuzhou, Jiangsu, China (Feng Xinyong) + +Youngstown State University, Mathematics and Statistics, Youngstown, OH (George T. 
Yates)

Yunnan University, Electronic Engineering, Kunming, Yunnan, China (Haiyan Li)

Yunnan University, Communication Engineering, Kunming, Yunnan, China (Guanghui Cai)

Zhejiang Gongshang University, Hangzhou, Zhejiang, China (Ding Zhengzhong)

Zhejiang University, Hangzhou, Zhejiang, China (Ling Lin)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Yuanbiao Zhang)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Advisor Team)

# Cellphone Energy Problem (102 teams)

Academy of Armored Force Engineering, Mechanical Engineering, Beijing, China (Han De)

Albion College, Albion, MI (Darren E. Mason)

Bandung Institute of Technology, Mathematics, Bandung, West Java, Indonesia (Nuning Nuraini)

Beijing Institute of Technology, Beijing, China (Hongying Man)

Beijing Normal University, Beijing, China (Chun Yang)

Beijing Normal University, Beijing, China (Su Xiao Le)

Beijing University of Posts and Telecommunications, Beijing, China (Xinchao Zhao)

Beijing University of Posts and Telecommunications, Computer Science and Technology, Beijing, China (Hongxiang Sun)

Beijing University of Posts and Telecommunications, Applied Mathematics, Beijing, China (Hongxiang Sun)

Beijing University of Posts and Telecommunications, Beijing, China (Tianping Shuai)

Bemidji State University, Bemidji, MN (Colleen Livingston)

Carroll College, Mathematics, Engineering, and Computer Science, Helena, MT (Lahna VonEpps)

China University of Petroleum-Beijing, Mathematics and Physics, Beijing, China (Xiaoguang Lu)

Chongqing University, Information and Computational Science, Chongqing, Sichuan, China (Jian Xiao)

Civil Aviation University of China, Air Traffic Control, Tianjin, Tianjin, China (Zhaoning Zhang)

Clarkson University, Potsdam, NY (Joseph D. Skufca)

Dalian Maritime University, Dalian, Liaoning, China (Y.
Zhang) + +Dalian Neosoft Institute of Information, Information Technology and Business Administration, Dalian, Liaoning, China (Ping Song) + +Dalian University, Dalian, Liaoning, China (Jiatai Gang) + +Dalian University of Technology, Dalian, China (Zhen Wang) + +Davidson College, Davidson, NC (Laurie Heyer) + +Davidson College, Davidson, NC (Donna Molinek) + +Duke University, Durham, NC (David Kraines) + +Eastern Mennonite University, Harrisonburg, VA (Leah Shao Boyer) + +Electronic Engineering Institute, Electronic Engineering, Hefei, Anhui, China (Quancai Gan) + +Father Gabriel Richard High School, Ann Arbor, MI (William B. Dannemiller) + +Grand View University, Des Moines, IA (Sergio Loch) + +Hangzhou Dianzi University, Information and Mathematical Science, Hangzhou, Zhejiang, China (Zongmao Cheng) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Xilian Wang) + +Harbin Institute of Technology, Harbin, Heilongjiang, China (Xiaofeng Shi) + +Hebei Polytechnic University, College of Light Industry, Fundamental Teaching, Tangshan, Hebei, China (Shaohong Yan) + +Hohai University, Nanjing, Jiangsu, China (Genhong Ding) (two teams) + +Hohai University, Nanjing, Jiangsu, China (Jifeng Chu) + +Hong Kong University of Science and Technology, Kowloon, Hong Kong, China (Jimmy Chi-Hung Fung) + +Huazhong University of Science and Technology, Mathematics and Statistics, Wuhan, Hubei, China (Nanzhong He) + +Huazhong University of Science and Technology, Mathematics and Statistics, Wuhan, Hubei, China (Feng Pan) + +Jacksonville University, Physics, Jacksonville, FL (W. Brian Lane) + +John Tyler Community College, Midlothian, VA (Peter R. Peterson) + +Lanzhou University, Mathematics and Statistics, Lanzhou, Gansu, China (Yuewei Liu) + +Lawrence Technological University, Southfield, MI (Ruth G. 
Favro)

Lawrence Technological University, Natural Sciences, Southfield, MI (Valentina Tobos)

MIT, Cambridge, MA (Martin Zdenek Bazant)

Nanjing University, Nanjing, Jiangsu, China (Weihua Huang)

Nanjing University, Control and System Engineering, Nanjing, Jiangsu, China (Zhao Jiabao)

Nanjing University of Science and Technology, Nanjing, Jiangsu, China (Chungen Xu)

Nankai University, Tianjin, Tianjin, China (Kui Wang)

National University of Defense Technology, Mathematics and System Science, Changsha, Hunan, China (Ziyang Mao)

National University of Ireland Galway, Galway, Ireland (Niall Madden)

North China University of Technology, Beijing, China (Quan Zheng)

Northwestern Polytechnical University, Xi'an, Shaanxi, China (Genjiu Xu)

Oregon State University, Corvallis, OR (Nathan L. Gibson)

Oxford University, Merton College, Oxford, United Kingdom (Jeffrey H. Giansiracusa)

Pacific University, Forest Grove, OR (Christine Guenther)

Pacific University, Forest Grove, OR (Michael Rowell)

Peking University, Economics, Beijing, China (Jingyi Ye)

Peking University, Software Engineering, Beijing, China (Wei Zhang)

People's Liberation Army University of Science and Technology, Meteorology, Nanjing, Jiangsu, China (Liu Shousheng)

PLA University of Science and Technology, Engineering Corps, Nanjing, Jiangsu, China (Hansheng Shi)

Regis University, Denver, CO (James A. Seibert)

Renmin University of China, Beijing, China (Yonghong Long)

Rensselaer Polytechnic Institute, Troy, NY (Donald A. Drew)

Seattle University, Seattle, WA (Jeffery C. DiFranco)

Shanghai Foreign Language School, Educational Research, Shanghai, China (Jia Wang)

Shanghai University of Finance and Economics, Statistics, Shanghai, China (Ke Dai)

Simpson College, Indianola, IA (Rick Spellerberg)

Southeast University, Nanjing, Jiangsu, China (Jinguan Lin)

Southern Connecticut State University, New Haven, CT (Ross B.
Gingrich)

Sun Yat Sen University, Physics, Guangzhou, China (Jian Liang Huang)

Trinity University, San Antonio, TX (Peter Olofsson)

Union College, Schenectady, NY (Jue Wang)

United States Military Academy, West Point, NY (Jonathan Roginski)

United States Military Academy, West Point, NY (Amanda Beecher)

United States Military Academy, West Point, NY (Suzanne DeLong)

United States Military Academy, West Point, NY (Randy Boucher)

University of Arizona, Tucson, AZ (Paul F. Dostert)

University of Colorado Denver, Denver, CO (Gary A. Olson)

University of Electronic Science and Technology of China, Chengdu, Sichuan, China (Mingqi Li)

University of Pittsburgh, Pittsburgh, PA (Jonathan Rubin)

University of North Carolina-Chapel Hill, Chapel Hill, NC (Amanda L. Traud)

University of North Carolina-Chapel Hill, Chapel Hill, NC (Brian Pike)

University of North Carolina-Chapel Hill, Chapel Hill, NC (Sarah A. Williams)

University of Science and Technology of Beijing, Beijing, China (Zhixin Hu)

University of Toronto at Scarborough, Scarborough, ON, Canada (Paul S. Selick)

University of Virginia, Charlottesville, VA (Tai Melcher)

University of Wisconsin-La Crosse, La Crosse, WI (Theodore Wendt)

University of Wisconsin-La Crosse, La Crosse, WI (Barbara Bennie)

University of Wisconsin-River Falls, River Falls, WI (Kathy A. Tomlinson)

Virginia Tech, Blacksburg, VA (John F. Rossi)

Wake Forest University, Winston Salem, NC (Jennifer B. Erway)

Wuhan University, Mathematics and Statistics, Wuhan, Hubei, China (Yizhong Liu)

Xi'an Jiaotong University, Xi'an, Shaanxi, China (Lizhou Wang)

Xi'an Jiaotong-Liverpool University, Financial Mathematics, Suzhou, Jiangsu, China (Dongen Zhang)

Xi'an Jiaotong-Liverpool University, Computer Science, Suzhou, Jiangsu, China (Jingming Guo)

Youngstown State University, Mathematics and Statistics, Youngstown, OH (George T.
Yates)

Yunnan University, Information Science and Engineering, Kunming, Yunnan, China (Zong Rong)

Zhejiang University, Hangzhou, Zhejiang, China (Junjie Li)

Zhejiang University, Hangzhou, Zhejiang, China (Qifan Yang)

Zhengzhou Information Engineering Institute, Zhengzhou, Henan, China (Chang Yong Peng)

Zhuhai College of Jinan University, Packaging Engineering, Zhuhai, Guangdong, China (Yuan-biao Zhang)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Yuanbiao Zhang)

Zhuhai College of Jinan University, Mathematical Modeling Innovative Practice Base, Zhuhai, Guangdong, China (Jianwen Xie)

# Editor's Note

The complete roster of participating teams and results has become too long to reproduce in the printed copy of the Journal. It can now be found at the COMAP Website, in separate files for each problem:

http://www.comap.com/undergraduate/contests/mcm/contests/2009/results/MCM-A-Results-2009.pdf and

http://www.comap.com/undergraduate/contests/mcm/contests/2009/results/MCM-B-Results-2009.pdf

The listings will also appear on the annual end-of-year CD-ROM.

# A Simulation-Based Assessment of Traffic Circle Control

Christopher Chang

Zhou Fan

Yi Sun

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

The difficulty of evaluating the performance of a control system for a traffic circle lies largely in its crucial dependence on the local interactions among individual drivers. Traffic circles are relatively small compared to highways and are therefore susceptible to blockages caused by lane changes, entrances, and exits. A complete model must account for effects of such individual car behavior. Existing models, however, do not track performance at the level of individual cars.

We propose a novel simulator-based approach to evaluating and selecting such control systems.
We create a multi-agent discrete-time simulation of behavior under different control systems. The behavior of individual cars in our simulator is determined autonomously and locally, allowing us to capture the effects of local interactions. In addition, by modeling each car separately, we track the time spent in the traffic circle for each individual car, giving us a more specific measure of performance than the more commonly-used aggregate rate of car passage. + +Measuring the performance of several control strategies using both metrics, we find that the rate of incoming traffic and the number of lanes in the traffic circle are the major factors for optimal choice of a control system. Based on the simulated performance of traffic circles with varying values of these parameters, we have two different recommendations for traffic control systems based upon the rate of incoming traffic: + +- When the rate of incoming traffic is low, entering cars should yield to cars already in the circle. +- When the rate of incoming traffic increases beyond a certain threshold (which should be determined empirically), traffic lights should control entering traffic and the outermost lane of the traffic circle. These lights should be synchronized so that the time between successive lights turning green is the average time needed for a car to travel between them. + +For a low rate of incoming traffic, the circle is relatively clear of cars, so entering cars can merge in without blocking the road or slowing the flow. By making entering cars yield to cars in the circle, we maximize the total throughput of cars while maintaining average speed. + +When incoming traffic saturates the circle, allowing cars to merge freely into the circle impedes the flow of others. While throughput is still quite high, our simulation predicts that each car will spend an extremely long time in the circle. + +Instead, we recommend that traffic lights attenuate the incoming flow of cars. 
While cars must wait slightly longer to enter, the number of cars in the circle is limited, allowing those cars a reasonable speed. Our simulator predicts that this policy will allow fewer cars to travel through the circle at a much higher speed.

By viewing the performance of the control system at the level of the individual cars, our simulator distinguishes between the performance of these two systems in this case and selects the correct system to use.

We therefore recommend as follows: For times with high occupancies and rates of incoming traffic, implement synchronized traffic lights; for other times, require entering cars to yield to cars in the circle. Under this system, the total throughput is maximized while still maintaining an acceptable level of individual performance.

![](images/ff88e7193a0b3279ff6b4d75b9f62c3546577696a053f956343c29e7e3a511c6.jpg)
Zhou Fan, Christopher Chang, and Yi Sun.

The text of this paper appears on pp. 227-245.

# One Ring to Rule Them All: The Optimization of Traffic Circles

Aaron Abromowitz

Andrea Levy

Russell Melick

Harvey Mudd College

Claremont, CA

Advisor: Susan E. Martonosi

# Summary

Our goal is a model that can account for the dynamics of vehicles in a traffic circle. We mainly focus on the rate of entry into the circle to determine the best way to regulate traffic. We assume that vehicles circulate in a single lane and that only incoming traffic can be regulated (that is, incoming traffic never has the right-of-way).

For our model, the adjustable parameters are the rate of entry into the queue, the rate of entry into the circle (service rate), the maximum capacity of the circle, and the rate of departure from the circle (departure rate). We use a compartment model with the queue and the traffic circle as compartments. Vehicles first enter the queue from the outside world, then enter the traffic circle from the queue, and lastly exit the traffic circle to the outside world.
We model both the service rate and the departure rate as dependent on the number of vehicles inside the traffic circle.

In addition, we run computer simulations to have a visual representation of what happens in a traffic circle during different situations. These allow us to examine different cases, such as unequal traffic flow coming from the different queues or some intersections having a higher probability of being a vehicle destination than others. The simulation also implements several life-like effects, such as how vehicles accelerate on an empty road but decelerate when another vehicle is in front of them.

In many cases, we find that a high service rate is the optimal way to maintain traffic flow, signifying that a yield sign for incoming traffic is most effective. However, when the circle becomes more heavily trafficked, a lower service rate better accommodates traffic, indicating that a traffic light should be used. Thus, a light should be installed in most circle implementations, with variable timing depending on the expected amount of traffic.

The main advantage of our approach is that the model is simple and allows us to see clearly the dynamics of the system. Also, the computer simulations provide more in-depth information about traffic flow under conditions that the model could not easily show, as well as enabling visual observation of the traffic. Some disadvantages to our approach are that we do not analyze the effects of multiple lanes or of stop lights to control the flow of traffic within the circle. In addition, we have no way of analyzing singular situations, such as vehicles that drive faster or slower than the rest of the traffic circle, or pedestrians.

![](images/136c65256ff414b4c954d5aea757a4dea5207c72ab180f807b787396216dfcb7.jpg)
Aaron Abromowitz, Andrea Levy, and Russell Melick.

The text of this paper appears on pp. 247-260.
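The two-compartment dynamics described in this summary can be sketched numerically. The following toy Python version is an illustration only: the linear service-rate form, the fixed per-car exit rate, and every parameter value are assumptions chosen for demonstration, not the authors' actual fitted choices.

```python
# Hedged sketch of a queue/circle compartment model: cars enter a queue,
# are served into the circle at a rate that falls as the circle fills,
# and leave the circle at a fixed per-car rate. All rates are assumed.

def simulate(hours=1.0, dt=0.001,
             arrival=400.0,      # cars/hour entering the queue (assumed)
             max_service=900.0,  # service rate with an empty circle (assumed)
             capacity=40.0,      # cars the circle can hold (assumed)
             exit_rate=30.0):    # per-car departure rate, 1/hour (assumed)
    q, c = 0.0, 0.0             # cars waiting in the queue, cars in the circle
    for _ in range(int(hours / dt)):
        # service rate falls linearly with circle occupancy (assumption)
        service = max_service * max(0.0, 1.0 - c / capacity)
        service = min(service, q / dt)   # cannot serve more cars than are queued
        departures = exit_rate * c       # each car leaves at a fixed rate
        q += (arrival - service) * dt    # forward-Euler update of the queue
        c += (service - departures) * dt # and of the circle occupancy
    return q, c
```

With these assumed rates, the queue settles at a small constant backlog while the circle occupancy relaxes to the level where departures balance arrivals, here about 400/30, or roughly 13.3 cars.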

# Three Steps to Make the Traffic Circle Go Round

Zeyuan Allen Zhu

Tianyi Mao

Yichen Huang

Tsinghua University

Beijing, China

Advisor: Jun Ye

# Summary

With growing traffic, control devices at traffic circles are needed: signals, stop / yield signs, and orientation signs—a special sign that we designed.

We create two models—one macroscopic, one microscopic—to simulate transport at traffic circles. The first models the problem as a Markov chain, and the second simulates traffic by individual vehicles—a "cellular-automata-like" model.

We introduce a multiobjective function to evaluate the control. We combine saturated capacity, average delay, equity degree, accident rate, and device cost. We analyze how best to control the traffic circle, in terms of:

- placement of basic devices, such as lights and signs;
- installation of orientation signs, to lead vehicles into the proper lanes; and
- self-adaptivity, to allow the traffic to auto-adjust according to different traffic demands.

We examine the 6-arm-3-lane Sheriffhall Roundabout in Scotland and give detailed suggestions for control of its traffic: We assign lights with a 68-s period, and we offer a sample orientation sign.

We also test smaller and larger dummy circles to verify strength and sensitivity of our model, as well as emergency cases to judge its flexibility.

![](images/734975ec5372cceac28e4698ae7c26362e4fb4f136aaedbe0e439ec67f879119.jpg)
Yichen Huang, Zeyuan Allen Zhu, Tianyi Mao, and team advisor Jun Ye.

The text of this paper appears on pp. 261-280.

# Pseudo-Finite Jackson Networks and Simulation: A Roundabout Approach to Traffic Control

Anna Lieb

Anil Damle

Geoffrey Peterson

Dept. of Applied Mathematics

University of Colorado

Boulder, CO

Advisor: Anne Dougherty

# Summary

Roundabouts, a foreign concept a generation ago, are an increasingly common sight in the U.S. In principle, they reduce accidents and delays.
A natural question is, "What is the best method to control traffic flow within a roundabout?" Using mathematics, we distill the essential features of a roundabout into a system that can be analyzed, manipulated, and optimized for a wide variety of situations. As the metric of effective flow, we choose time spent in the system.

We use Jackson networks to create an analytic model. A roundabout can be thought of as a network of queues, where the entry queues receive external arrivals that move into the roundabout queue before exiting the system. We assume that arrival rates are constant and that there is an equilibrium state. If certain conditions are met, a closed-form stationary distribution can be found. The parameter values can be obtained empirically: how often cars arrive at an entrance (external arrival rate), how quickly they enter the roundabout (internal arrival rate), and how quickly they exit (departure rate). We control traffic by thinning the internal arrival process with a "signal" parameter that represents the fraction of time that a signal light is green.

A pitfall of this formulation is that restricting the capacity of the roundabout queue to a finite limit destroys the useful analytic properties. So we utilize a "pseudo-finite" capacity formulation, where we allow the roundabout queue to receive a theoretically infinite number of cars, but we optimize over the signal parameter to create a steady state in which a minimal number of waiting cars is overwhelmingly likely. Using lower bound calculations, we prove that a yield sign produces the optimal behavior for all admissible parameter values. The analytic solution, however, sacrifices important aspects of a real roundabout, such as time-dependent flow.

To test the theoretical conclusions, we develop a computer simulation that incorporates more parameters: roundabout radius; car length, spacing, and speed; period of traffic signals; and time-dependent inflow rates.
We model individual vehicles stochastically as they move through the system, resulting in more-realistic output. In addition to comparing yield and traffic-signal control, we also examine varied input rates, nonstandard roundabout configurations, and the relationships among traffic-flow volume, radius size, and average total time. However, our simulation is limited to a single-lane roundabout. This model is also compromised by the very stochasticity that enhances its realism. Since it is nondeterministic, randomness may mask the true behavior. Another drawback is that the computational cost of minimization is enormous. However, we verify that a yield sign is almost always the best form of flow control.

![](images/ff2b4a3d24eabf091d4a2c318553d28af06ffc9089ec7df75c29668abfaad283.jpg)
Geoffrey Peterson, Anil Damle, Anna Lieb, and advisor Anne Dougherty.

The text of this paper appears on pp. 281-304.

# Going in Circles: A Roundabout Analysis

Mark Tucker

Luke Wassink

Ameet Gohil

University of Iowa
Iowa City, IA

Advisor: Joe Eichholz

# Summary

We present a microscopic model of traffic flow in, around, and out of traffic circles of various sizes, configurations, traffic levels, and control systems. We use artificial intelligence to ensure that cars in our simulation faithfully follow human behavior, physical laws, and traffic regulations.

We devise a measure to compare the efficiencies of control systems on various traffic circles. We illustrate this analysis by redesigning the control systems for La Place de l'Étoile in Paris and around the Victory Monument in Bangkok. According to our model, efficiency is improved by $100\%$ and $80\%$ , respectively. In smaller traffic circles, efficiency can be improved by as much as $20 - 30\%$ over standard control systems.

We condense our results into rules to find an efficient control system for any traffic circle.
+ +![](images/1a90af39c0d1f0f1fa0f05fd0ec922d228b9f138b5148833ad9d18c9f311bf74.jpg) +Mark Tucker, Luke Wassink and Ameet Gohil. + +[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Traffic Circle Problem. Only this summary appears in this issue.] + +# Mobile to Mobil: The Primary Energy Costs for Cellular and Landline Telephones + +Nevin Brackett-Rozinsky + +Katelynn Wilton + +Jason Altieri + +Clarkson University + +Potsdam, NY + +Advisor: Joseph Skufca + +# Summary + +We determine that cellphones are the optimal communication choice from an energy perspective, using a comprehensive analysis based on multiple factors. We split phones into three categories: cellular, cordless landline, and corded landline. We average the energy used in manufacture and transportation over the life of each phone. To account for the inefficiency of production, we calculate in terms of primary energy, which is the amount of fuel supplied to a power plant per unit of energy produced. We use real-world data for population, number of mainlines, and cellphone subscriptions. + +During the transition, as cellphones overtake landlines, part of the population owns both types of phone. As a result, the total energy used by telephones increases. We fit a competing-species model to past statistics; it forecasts that the net energy cost of the cellphone revolution (1995-2025) in the U.S. will be 84 TWh. At the start of this period, there were 0.1 cellphones per capita; at the end there will be 0.1 landlines per capita. Energy savings will begin in 2022. After this transition, savings will be $30\mathrm{GWh / d}$ . The competing-species model is a proven technique; we apply it to telephone lines and cellphones per capita, and also use it in conjunction with population projections to develop a closed-form solution. + +The UMAP Journal 30 (3) (2009) 216-217. ©Copyright 2009 by COMAP, Inc. All rights reserved. 
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +The most energy-efficient way to provide phone service in a country with no existing infrastructure is to construct a cellular network. By amortizing the fixed setup costs over the lifetime of the phone system, the energy used during construction is negligible. For a country similar to the U.S., the savings would be 12 TWh/yr. Over the next 50 years, the energy savings would equal 0.5 billion barrels of oil. + +Cellphone chargers waste energy, but the total energy wasted would be almost five times as great if everyone instead used a cordless phone. Continuing advances in charger technology are reducing charger waste. If all cellphone chargers in the future meet a 5-star Energy Star rating, they will be 10 times as efficient as now. + +Our model is supported by historical data and numerous publicly available statistics. One factor not accounted for is the maintenance and operating power required for cell towers and physical telephone lines. + +![](images/a0e6969ddb586e3d319258a4be1f66f79597d0f3152fc16f138e0861619ebdcc.jpg) +Jason Altieri, Katelynn Wilton, and Nevin Brackett-Rozinsky. Photo by Dominick DeSalvatore. + +The text of this paper appears on pp. 313-332. + +# Energy Implications of Cellular Proliferation in the U.S. + +Benjamin Coate +Zachary Kopplin +Nate Landis + +College of Idaho Caldwell, ID + +Advisor: Michael P. Hitchman + +# Summary + +The U.S. has undergone a massive transformation in how it approaches telecommunications. 
In 30 years, it has gone from having an entirely landline-based phone system to one where $89\%$ of the population uses cellphones, with $16\%$ of households having replaced their landlines entirely. We set out to establish the key consequences and energy costs of this system.

By collecting data on wattages of cellphone chargers and modeling likely American cellphone usage, we calculate that a cellphone might waste $86\%$ of its energy intake through its charger, the equivalent of 754,000 bbl/yr of oil. Comparing that to the energy costs of landline phones, we model two transition scenarios as cellphones replace landlines. We conclude that the faster landlines can be phased out, the more energy will be saved.

We find that a full cell network, combined with Voice over Internet Protocol (VoIP) technology, would be the best way to provide phone service to a Pseudo U.S. completely lacking in telecommunications. Doing this would save the cost of implementation of a landline infrastructure that would be rendered mostly redundant as cellphones became more popular. Because all the cellphone chargers in this Pseudo U.S. would be brand-new models with recent energy-conservation features, cellphone waste would add up to only 234,000 bbl/yr of oil. We model the increase in cellphone energy consumption in this Pseudo U.S. for the next 50 years with two models: one accounts for the growth of the population, and another also factors in a rate of technological advance. In the first model, cellphone energy consumption would reach 1.53 million bbl/yr of oil by 2059, while in the second it would actually decrease to 525,000 bbl/yr by then, due to increases in battery efficiency and a reduction in standby power.

Cellphone chargers are a small part of standby-power waste in America. Using extensive wattage and usage data on consumer electronics, we calculate that these devices waste 99 million bbl/yr of oil.
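The kind of charger-waste arithmetic behind such figures can be sketched in a few lines. Every constant below (idle draw, hours plugged in, phone count, primary energy per barrel) is our own illustrative assumption, not a number from any of the teams' papers:

```python
# Back-of-envelope charger-waste estimate; all constants are
# illustrative assumptions, not figures from the papers.
IDLE_WATTS = 0.5          # assumed idle draw of one charger (W)
IDLE_HOURS_PER_DAY = 22   # assumed hours/day a charger sits plugged in with no phone
NUM_PHONES = 250e6        # assumed number of cellphones in use
KWH_PER_BARREL = 1700     # assumed primary-energy content of a barrel of oil (kWh)

# Wasted energy per year, converted W·h -> kWh, then kWh -> barrels of oil.
wasted_kwh_per_year = IDLE_WATTS * IDLE_HOURS_PER_DAY * 365 * NUM_PHONES / 1000
wasted_barrels_per_year = wasted_kwh_per_year / KWH_PER_BARREL
print(f"{wasted_barrels_per_year:,.0f} bbl/yr")
```

With these placeholder values the estimate lands in the few-hundred-thousand bbl/yr range, the same order of magnitude as the 234,000-754,000 bbl/yr figures quoted in the summaries.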
These models show that although a single cellphone charger may waste only a small amount of energy (one author estimates that leaving a charger plugged in for a day is about equal to driving a car for one second), the sheer magnitude of cellphone users means that this loss is significant.

![](images/2ec53ddacead7d02d29fda23113053fe94ece28707c9a20b09053a88148ec270.jpg)
Advisor Michael Hitchman, with team members Benjamin Coate, Nathaniel Landis, and Zachary Kopplin.

The text of this paper appears on pp. 333-351.

# Modeling Telephony Energy Consumption

Amrish Deshmukh
Rudolf Nikolaus Stahl
Matthew Guay

Cornell University Ithaca, NY

Advisor: Alexander Vladimirsky

# Summary

The energy consequences of rapidly changing telecommunications technology are a significant concern. While interpersonal communication is ever more important in the modern world, the need to conserve energy has also entered the social consciousness as energy prices rise and the threat of global climate change grows. Only 20 years after being introduced, cellphones have become a ubiquitous part of the modern world. Simultaneously, the infrastructure for traditional telephones is well in place, and the energy costs of such phones may very well be less. As a superior technology, cellphones have gradually begun to replace the landline, but consumer habits and perceptions have kept this decline from becoming outright abandonment.

To evaluate the energy consequences of continued growth in cellphone use and a decline in landline use, we present a model that describes three processes—landline consumption, cellphone consumption, and landline abandonment—as economic diffusion processes. In addition, our model describes the changing energy demands of the two technologies and considers the use of companion electronics and consumer habits.
Finally, we use these models to determine the energy consequences of the future uses of the two technologies, an optimal mode of delivering phone service, and the costs of wasteful consumer habits.

![](images/4210aa8baed5d244adf708c4e7e2beb1907b81e6f4480842ec39b29d0c042749.jpg)
Amrish Deshmukh, Matt Guay, Niko Stahl, and advisor Alexander Vladimirsky.

The text of this paper appears on pp. 353-365.

# National TELEwar

John Camardese
Rich Geyer
Neil Ganshorn

Lawrence Technological University Southfield, MI

Advisor: Ruth G. Favro

# Summary

Over $89\%$ of 303 million Americans own a cellphone, with a battery that needs to be recharged. All too often, the phone is left plugged in, constantly consuming energy. In addition, $79\%$ of Americans are served by home landline phones.

By modeling energy consumption based on the growth and decay of landlines and cellphones as the population changes, an optimized energy plan can minimize the energy used by a country's communication infrastructure while still providing citizens with adequate telecommunication options.

By modeling the cellphone growth and consequent landline decay as a logistic predator-prey model—and applying real-world energy, population, and communications-use data—we determine an optimal telecommunication system.

![](images/a5e422324dac898798b3213b9d598e69246513919d0b07cb6ded8e933edcf2de.jpg)
John Camardese, Rich Geyer, advisor Ruth Favro, and Neil Ganshorn.

[EDITOR'S NOTE: This Meritorious paper won the Ben Fusaro Award for the Cellphone Problem. Only this summary appears in this issue.]

# America's New Calling

Stephen R. Foster

J. Thomas Rogers

Robert S. Potter

Southwestern University

Georgetown, TX

Advisor: Richard Denman

# Summary

The ongoing cellphone revolution warrants an examination of its energy impacts—past, present, and future.
Thus, our model adheres to two requirements: It can evaluate energy use since 1990, and it is flexible enough to predict future energy needs. + +Mathematically speaking, our model treats households as state machines and uses actual demographic data to guide state transitions. We produce national projections by simulating multiple households. Our bottom-up approach remains flexible, allowing us to: + +- model energy consumption for the current U.S., +- determine efficient phone adoption schemes in emerging nations, +- assess the impact of wasteful practices, and +- predict future energy needs. + +We show that the exclusive adoption of landlines by an emerging nation would be more than twice as efficient as the exclusive adoption of cellphones. However, we also show that the elimination of certain wasteful practices can make cellphone adoption $175\%$ more efficient at the national level. Furthermore, we give two forecasts for the current U.S., revealing that a collaboration between cellphone users and manufacturers can result in savings of more than 3.9 billion barrels-of-oil-equivalent (BOE) over the next 50 years. + +![](images/933b55b53880ee69736ea8f604ef3b8355ecaa7cd4bed1d6775073135595bc86.jpg) +Tommy Rogers, Stephen Foster, and Bob Potter. + +The text of this paper appears on pp. 367-384. + +# Wireless Networks: An Easy Cell + +Jeff Bosco + +Zachary Ulissi + +Bob Liu + +University of Delaware + +Newark, DE + +Advisor: Louis Rossi + +# Summary + +The number of cellphones worldwide raises concerns about their energy usage, even though individual usage is low ( $< 10\mathrm{kWh} / \mathrm{yr}$ ). We first model the change in population and population density until 2050, with an emphasis on trends in the urbanization of America. We analyze the current cellular infrastructure and distribution of cell site locations in the U.S. By relating infrastructure back to population density, we identify the number and distribution of cell sites through 2050. 
We then calculate the energy usage of individual cellphones based on average usage patterns.

Phone-charging behavior greatly affects power consumption. The power usage of phones consumes a large part of the overall idle energy consumption of electronic devices in the U.S.

Finally, we calculate the power usage of the U.S. cellular network to the year 2050. If poor charging behavior continues, the system will require $400\mathrm{MW / yr}$ , or 5.6 million bbl/yr of oil; if ideal charging behavior is adopted, this number will fall to $200\mathrm{MW / yr}$ , or 2.8 million bbl/yr of oil.

![](images/58f504694d0aecc17f3fc37366a858806ff53dc7ca464ca53ef135f8c7bc2047.jpg)
Advisor Louis Rossi with team members Bob Liu, Jeff Bosco, and Zachary Ulissi.

The text of this paper appears on pp. 385-402.

The UMAP Journal 30 (3) (2009) 225. ©Copyright 2009 by COMAP, Inc. All rights reserved.

# A Simulation-Based Assessment of Traffic Circle Control

Christopher Chang

Zhou Fan

Yi Sun

Harvard University

Cambridge, MA

Advisor: Clifford H. Taubes

# Summary

The difficulty of evaluating the performance of a control system for a traffic circle lies largely in its crucial dependence on the local interactions among individual drivers. Traffic circles are relatively small compared to highways and are therefore susceptible to blockages caused by lane changes, entrances, and exits. A complete model must account for effects of such individual car behavior.
Existing models, however, do not track performance at the level of individual cars.

We propose a novel simulator-based approach to evaluating and selecting such control systems. We create a multi-agent discrete-time simulation of behavior under different control systems. The behavior of individual cars in our simulator is determined autonomously and locally, allowing us to capture the effects of local interactions. In addition, by modeling each car separately, we track the time spent in the traffic circle for each individual car, giving us a more specific measure of performance than the more commonly used aggregate rate of car passage.

Measuring the performance of several control strategies using both metrics, we find that the rate of incoming traffic and the number of lanes in the traffic circle are the major factors for optimal choice of a control system. Based on the simulated performance of traffic circles with varying values of these parameters, we have two different recommendations for traffic control systems based upon the rate of incoming traffic:

- When the rate of incoming traffic is low, entering cars should yield to cars already in the circle.
- When the rate of incoming traffic increases beyond a certain threshold (which should be determined empirically), traffic lights should control entering traffic and the outermost lane of the traffic circle. These lights should be synchronized so that the time between successive lights turning green is the average time needed for a car to travel between them.

For a low rate of incoming traffic, the circle is relatively clear of cars, so entering cars can merge in without blocking the road or slowing the flow. By making entering cars yield to cars in the circle, we maximize the total throughput of cars while maintaining average speed.

When incoming traffic saturates the circle, allowing cars to merge freely into the circle impedes the flow of others.
While throughput is still quite high, our simulation predicts that each car will spend an extremely long time in the circle.

Instead, we recommend that traffic lights attenuate the incoming flow of cars. While cars must wait slightly longer to enter, the number of cars in the circle is limited, allowing those cars a reasonable speed. Our simulator predicts that this policy will allow fewer cars to travel through the circle at a much higher speed.

By viewing the performance of the control system at the level of the individual cars, our simulator distinguishes between the performance of these two systems in this case and selects the correct system to use.

We therefore recommend as follows: For times with high occupancies and rates of incoming traffic, implement synchronized traffic lights; for other times, require entering cars to yield to cars in the circle. Under this system, the total throughput is maximized while still maintaining an acceptable level of individual performance.

# Introduction

The traffic circle is a type of circular intersection featuring traffic from multiple streets circulating around a central island, usually in one direction. An example is shown in Figure 1. Other large traffic circles include Columbus Circle in New York City, while small, one-lane traffic circles often exist in residential neighborhoods.

Traffic circles are notorious for frequent traffic jams due to their unconventional design. Many methods exist to control traffic in a traffic circle; we investigate their impacts.

![](images/1d5e5bc37d0b7339afe608b00e9133aa437e0993aa8bc4220ce659cd55e1f9a0.jpg)
Figure 1. An aerial view of Dupont Circle in Washington, DC. Source: U.S. Geological Survey, at http://en.wikipedia.org/wiki/index.html?curid=1017545.

# Terms and Notation

We consider a traffic circle to be a one-way circular road with two-way roads meeting the circle at T-junctions.
In particular, we do not consider circles that have separate entry and exit ramps. We assume that each road carries cars into the circle at a fixed rate and that cars have an equal probability of leaving the circle through any of the other roads. For performance, we measure two statistics:

- the average rate at which cars arrive at their desired exit location per time step, the average throughput; and
- the average number of time steps from a car arriving at the back of the queue to enter the circle to when it exits the circle, the average total time.

# Problem Background

Modern traffic circles have recently been recognized as safer alternatives to traditional intersections. Research by Zein et al. [1997] and Flannery and Datta [1996] using statistical methods has demonstrated that traffic circles bring added safety to both urban and rural environments. Attempts to understand the specific safety and efficiency benefits of traffic circles have taken four primary approaches: critical-gap estimation, regression studies, continuous models, and discrete models.

- Critical-gap models build from how drivers empirically gauge gaps in traffic before merging or turning into a traffic stream. However, according to Brilon et al. [1997], attempts in the 1980s to model roundabout capacity based on gap-acceptance theory were not exceptionally promising; in particular, critical-gap estimation lacked valid procedures as well as general clarity [Brilon et al. 1999]. More recent research applying gap-acceptance models to understanding traffic circles has included Polus et al. [1997] and the modeling of unconventional traffic circles by Bartin et al. [2006].
- Regression studies on empirical data made much progress, beginning with Kimber [1980], who studied roundabouts in England and discovered a linear relation between entry capacity and circulating flow, with constants depending on entry width, lane width, the angle of entry, and the traffic circle size. Further regression studies have built extensively on Kimber's work, such as Polus and Shmueli [1997], which determined the importance of traffic circle diameter in small-to-medium circles.
- Continuous models have included fluid-dynamic models [Helbing 1995; Bellomo et al. 2002; Daganzo 1995; Klar et al. 1996]; but those papers model traffic flow in standard traffic environments, not in traffic circles.
- Discrete models include cellular-automata models [Fouladvand et al. 2004; Klar et al. 1996] and discrete stochastic models [Schreckenberg et al. 1995]. Discrete models are suitable for small environments such as traffic circles, where individual car-to-car interactions take priority over traffic flow as a whole. Discretized approaches have attempted to model multilane traffic flows [Nagel et al. 1998]; but to our knowledge, there has been no research on discrete models of multilane traffic circles of varying sizes.

# Our Results

We approach traffic-circle control by first creating a simulator of traffic flow that treats individual cars as autonomous units, allowing us to capture local interactions, such as lane changes and traffic blockages due to cars entering and exiting. We validate this simulator against both a new stylized model of the situation and existing models of traffic-circle flow.

Using the simulator, we implement and test various control systems on different types of traffic circles. Based on the simulated results, we isolate the rate of incoming traffic and the number of lanes in the traffic circle as the driving factors behind optimal choice of a traffic-control system.
We thus recommend two different systems for different circumstances:

- When the rate of incoming traffic is low, we recommend that entering cars yield to cars already in the circle.
- When the rate of incoming traffic increases beyond a certain threshold, we recommend traffic lights that control entering traffic and the outermost lane of the traffic circle. The lights should be synchronized so that the time between successive lights turning green is the average time needed for a car to travel between them.

In subsequent sections, we

- divide the problem into two portions and define our objectives for each;
- introduce the simulator and validate its performance against a mathematical analysis and models from other sources;
- use the simulator to analyze the performance of several types of traffic circles, to produce recommendations for which control systems should be used for each type; and
- provide an overview of the advantages and disadvantages of our approach and give directions for future work.

# Simulator

Our goal is a simulator that, given a set of conditions and traffic rules, can produce an accurate prediction of the behavior that will result from following these rules. To achieve this goal, we would like our simulator to fulfill the following requirements:

- The simulator takes into account the local interactions between cars. Because cars enter, exit, and change lanes quite frequently, interactions between cars make a major contribution to the speed and efficacy of a traffic circle.
- The simulator can support variation in the number of cars, size of the circle, and number of lanes.
- The simulator can track properties of both the entire traffic circle and the individual cars passing through it.

The real behavior of cars in a traffic circle may vary widely, but we restrict our simulated cars to idealized behavior: they follow the traffic regulations that we put in place, and no accidents happen.
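A minimal sketch of the multi-agent discrete-time structure these requirements call for follows; it is a toy single-lane circle with one entrance under a yield rule, and every name and parameter value in it is our own illustrative choice, not the authors' implementation:

```python
import random

def simulate(circle_len=20, entry=0, arrival_p=0.3, steps=5000, seed=1):
    """Toy single-lane traffic circle: each square holds at most one car,
    cars advance one square per step toward a randomly assigned exit
    square, and an entering car yields (it needs the entry square and
    the square behind it both empty)."""
    rng = random.Random(seed)
    grid = [None] * circle_len   # None = empty; otherwise the car's exit square
    queue = []                   # cars waiting at the single entrance
    exited = 0
    for _ in range(steps):
        # Move the cars currently in the circle, in random order.
        occupied = [i for i, car in enumerate(grid) if car is not None]
        rng.shuffle(occupied)
        for i in occupied:
            if grid[i] == i:     # sitting on its desired exit square: leave
                grid[i] = None
                exited += 1
            else:
                nxt = (i + 1) % circle_len
                if grid[nxt] is None:          # advance if the next square is free
                    grid[nxt], grid[i] = grid[i], None
        # Yield-on-entry: need the entry square and the square behind it empty.
        behind = (entry - 1) % circle_len
        if queue and grid[entry] is None and grid[behind] is None:
            grid[entry] = queue.pop(0)
        # New arrival joins the queue with fixed probability; random exit square.
        if rng.random() < arrival_p:
            queue.append(rng.randrange(circle_len))
    return exited / steps        # average throughput (cars per time step)

print(f"average throughput: {simulate():.3f} cars/step")
```

Extending this skeleton to multiple lanes, multiple entrances, and traffic-light rules amounts to widening the grid and adding extra admission conditions at the entry squares.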
# Control System Evaluation

We base our recommendations for a control method on the following statistics:

1. Average throughput (the average rate at which cars pass through the traffic circle).
2. Average number of cars in the traffic circle.
3. Average total time for each car to traverse the traffic circle, including time spent waiting to enter.
4. Average time that each car spends driving through the traffic circle.

Statistics 1 and 2 measure global properties of the traffic circle, while statistics 3 and 4 are properties of each individual car. To evaluate a control system, we consider both the global performance and the differences in performance of the system for each individual. In particular, our goals are to:

1. Maximize the average throughput of the traffic circle.
2. Minimize the total time spent traversing the traffic circle (for individual cars).

We evaluate the performance of a traffic circle by the rate of cars passing through the circle (average throughput) and the total time required to traverse the circle (average total time). We choose control methods that perform best with respect to both of these metrics.

# Simulator Details

# Assumptions and Setup

There are two approaches to model the behavior of traffic:

1. Make a (usually continuous) abstraction away from the discrete interactions of cars and deal with a more stylized model of the entire system.
2. Model the behavior and movement of each car separately.

Continuous and fluid-like models, as in the first possibility, are suitable under a macroscopic view of traffic, for instance in the study of traffic on long roads or highways. However, for intersections and traffic circles, where car-to-car interactions occur much more frequently, such a model seems inadequate.

We follow the second approach to model traffic flow in a traffic circle using a multi-agent discrete-time simulation. Our simulation is based on the following two key principles:

1.
It is microscopic.
2. Behavior and information are local.

We do not use an abstract view of traffic as a flow but instead let each car in the traffic circle be its own individual agent. This allows us to account for the effects of car-to-car interaction, particularly in congested situations. From this interaction on the microscopic level, we then examine the macroscopic consequences of the simulation, instead of beginning with an arbitrary conception of what the macroscopic behavior should be.

Each car is its own independent agent, trying to enter the traffic circle and exit at the desired exit as quickly as possible; no collaboration between cars or higher-level organizational principles exists. Also, only local information, namely the cars in the immediate neighborhood, is available to each individual car.

# The Simulator

The simulation operates using the following model:

- Time is modeled in discrete time steps.
- The traffic circle is a rectangular grid. The width is the number of lanes in the traffic circle, and the height represents the length of the traffic circle. The upper edge wraps around to the lower edge (so that the grid is actually a circle). At any time step, each square of the grid can either be empty or hold one car.
- Certain squares in the outermost lane are entry squares. A queue of cars waits at each entry square to enter the circle. (These cars are not located on the grid itself.) The queues start off empty, and for each entry square, there is a fixed probability of a car being added to its queue at each time step.
- Certain squares in the outermost lane are exit squares, where cars can exit the circle. When a car is added to the queue for an entry square (and thus to the system), an exit square is chosen at random for the car.
- Each car has a speed indicated by how often it gets the chance to move. For example, faster cars may move at every time step, while slower cars may move less often.
This difference simulates differing levels of impatience or aggression among drivers. + +In each time step, the simulation proceeds as follows: + +1. Determine the subset of all the cars in the system that will move during this time step. Randomly assign the order in which these cars will move. +2. Allow each such car to move. Cars move under the following rules: A car that is already in the traffic circle at position $(i,j)$ (i.e., lane $i$ , vertical position $j$ ) will, in the following order of preference, + +(a) Exit if $(i,j)$ is the exit square at which the car wishes to exit. +(b) Move forward to $(i,j + 1)$ if $(i,j + 1)$ is unoccupied. +(c) Move forward and right to $(i + 1, j + 1)$ if there is a lane to the right and locations $(i + 1, j)$ and $(i + 1, j + 1)$ are both unoccupied. + +(d) Move forward and left to $(i - 1, j + 1)$ if there is a lane to the left and locations $(i - 1, j)$ and $(i - 1, j + 1)$ are both unoccupied. +(e) Stay where it is. + +An exception occurs for cars that are about to exit—if the vertical distance between the car's current location and its desired exit is less than four times the horizontal distance, then items (b) and (c) above are switched. (This is to ensure that under uncongested situations, cars will be able to exit at their desired exits.) + +A car that is the first car in the queue at an entrance location will, in order of preference, + +(a) Move to the entry square if that square is unoccupied. +(b) Stay where it is. + +All later cars in the queue cannot move for this turn. + +In addition to the above rules, certain traffic control systems impose the following additional rules: + +(a) Outer-yield: A car at the front of an entrance queue and waiting to enter can enter only when both the entry square and the square directly behind it are empty. That is, if there is a car directly behind the entry square, the entering car must yield to that car. 
+(b) Inner-yield: If a car in the circle wishes to move onto an entry square (in the rightmost lane) but the queue waiting at that entrance is nonempty, then the car cannot move to that square. If the car has no other possible moves, then it does not move for that turn. This reflects the situation in which cars in the circle need to yield to entering cars. +(c) Traffic lights: In this system, a traffic light controls each entry square. At any time step, the light is either green for cars in the circle and red for the waiting queue, or vice versa. If it is green for cars in the circle, then the first car in the waiting queue cannot enter the circle. If it is green for the waiting queue, then no car in the circle can move onto the entry square. In a multilane circle, this traffic light controls traffic only in the outermost lane. This behavior is inspired by the design of metering lights at highway ramps. + +We consider two methods of synchronizing the traffic lights around the circle: + +i. All lights turn green and red simultaneously. +ii. The difference in time between each traffic light turning green and the next light turning green (and also the difference between their reds) is directly proportional to the distance between the two lights. The proportionality constant is chosen so that a car waiting at a traffic light that begins to move when that light turns green will reach the next light just as it turns green. + +3. For each entry queue, add a car to the end of that queue with the fixed probability for that entry location. +4. Have the traffic lights change if it is the correct time step to do so. + +# Validation Against Existing Empirical Models + +The two criteria on which we evaluate the various traffic control systems, average throughput and average total time, are not unrelated. In fact, our simulations indicate that increasing one comes at the cost of increasing the other. 
For the outer-yield system, we show in Figure 2 average total time against reserve throughput (maximum throughput minus average throughput).

![](images/f0a86a6d9e2b90e81d0236c978f4ef83ed3ee38184c6779fc865cae43651aa70.jpg)
Figure 2. Average total time vs. reserve throughput.

This inverse relationship is intuitive, since greater throughput indicates a greater volume of traffic on the road and hence both slower driving speeds and longer wait times to enter the circle. This result also matches the relationship between average total time and reserve capacity given by the Kimber-Hollis delay equation in Brilon and Vandehey [1998]. This agreement indicates that the results of our simulator are reasonable.

# Validation Against a Simple Model

To provide further verification of the accuracy of our simulator, we compare large-scale features of its output to a mathematical model for a simple case. In particular, we consider a single-lane traffic circle in which cars entering the circle yield to cars in the circle. For simplicity, we assume that all cars move at the same speed, one square per time step. We assume that the roads at the traffic circle are all two-way, so that each entry point is also an exit point. The model is given as follows.

Suppose that there are $n$ entry/exit roads to the traffic circle, and that all cars have an equal probability of leaving through each of the $n$ roads. For $i = 1$ to $n$ , let $r_i$ be the probability that a new car appears at road $i$ at any time step. Let $x_i$ be the volume density of traffic in the segment of the circle between roads $i$ and $i + 1$ . The expected change in the number of cars between roads $i$ and $i + 1$ is given by a sum of four terms:

- The probability that a car will leave this segment through exit $i + 1$ is $\frac{1}{n} \cdot x_i$ , since $x_i$ is the probability that there is a car in the exit square and the probability that this car wishes to exit is $\frac{1}{n}$ .
- The probability that a car will move from this segment to the next one is $x_{i} \cdot \frac{n - 1}{n} \cdot (1 - x_{i + 1})$ , since $\frac{n - 1}{n}$ is the probability that the car in the exit square will not exit and $1 - x_{i + 1}$ is the probability that the square after the exit square, which is the first square of the next segment, is unoccupied.
- The probability that a car will move from the previous segment to this segment is, similarly, $x_{i - 1} \cdot \frac{n - 1}{n} \cdot (1 - x_i)$ .
- The probability that a car will enter through entrance $i$ is the probability $p$ of a sufficiently large space at entrance $i$ for a car to enter, times the probability that there is a car there waiting to enter. This latter probability can be calculated as the geometric series

$$
r _ {i} + r _ {i} (1 - r _ {i}) (1 - p) + r _ {i} (1 - r _ {i}) ^ {2} (1 - p) ^ {2} + \dots = \frac {r _ {i}}{r _ {i} + p - r _ {i} p},
$$

since there is an $r_i$ probability of a car arriving at entrance $i$ this time step, an $r_i(1 - r_i)(1 - p)$ probability of a car arriving at entrance $i$ at the last time step (but not this time step) and remaining until this time step, etc. In our simulation, $p = (1 - x_{i-1})(1 - x_i)$ , since a car can enter the circle if the entry square and the previous square are unoccupied.

So the expected change in the number of cars in this segment in one time step is

$$
\Delta c _ {i} = - x _ {i} \cdot \frac {1}{n} - x _ {i} \cdot \frac {n - 1}{n} \cdot (1 - x _ {i + 1}) + x _ {i - 1} \cdot \frac {n - 1}{n} \cdot (1 - x _ {i}) + \frac {r _ {i} (1 - x _ {i - 1}) (1 - x _ {i})}{r _ {i} + (1 - x _ {i - 1}) (1 - x _ {i}) - r _ {i} (1 - x _ {i - 1}) (1 - x _ {i})}.
$$

In equilibrium, this change should be 0 for all segments, giving a system of equations in the $x_{i}$ . If we consider the case where the roads have equal incoming traffic, i.e.
$r_{i}$ is the same for all $i$ , then by symmetry the $x_{i}$ are the same for all $i$ , and we may solve the equation

$$
\Delta c = - x \cdot \frac {1}{n} - x \cdot \frac {n - 1}{n} \cdot (1 - x) + x \cdot \frac {n - 1}{n} \cdot (1 - x) + \frac {r (1 - x) ^ {2}}{r + (1 - x) ^ {2} - r (1 - x) ^ {2}} = 0
$$

numerically for $x$ in terms of $r$ . Here, $x$ is the traffic volume density for the circle as a function of $r$ , the rate at which cars enter the circle through each road. The result of numerically solving for $x$ is shown in Figure 3 together with the corresponding plot generated by our simulator. The black data points were generated by our simulator, and the curve was produced by the simple model.

![](images/03d1568707e202b0ae087341b7b8dff7c755d98f15641246fcd011f8a984a2bc.jpg)
Figure 3. Traffic volume density vs. rate of incoming vehicles.

In both, the volume density grows roughly linearly for low rates of incoming vehicles. As the rate of incoming vehicles increases, the traffic circle becomes saturated at a fixed density. The simulator and our mathematical model agree on these large-scale features. Their disagreement about the critical rate of incoming vehicles might be explained by the fact that our mathematical model treats the cars in each segment as interchangeable and hence ignores the small-scale interactions that occur near gridlock.

# Predictions and Analysis

We apply the simulator to analyze different types of traffic circles.

# Criteria

We characterize traffic circles by the following variables:

1. Rate of incoming vehicles: This is a result of the amount of traffic present on the roads entering the traffic circle and will influence the total number of vehicles trying to enter the circle and hence the traffic in the circle.

2.
Length: This affects the number of cars that can be contained in the circle at a single time, which has many implications for how the entry mechanism of the circle should be determined. +3. Number of lanes: This affects both the number of cars that can be in the circle at a time and their maneuverability around each other. Because cars can more easily pass one another with more lanes, increasing the number of lanes may reduce the effects of traffic blockages. +4. Number of incoming roads: This affects the rate at which cars need to enter and exit the traffic circle, which may influence the magnitude of traffic blockages. + +We wish to consider systems that are relatively close to conventional systems, since it would be impractical and hazardous to introduce radically different systems unfamiliar to drivers who do not encounter traffic circles frequently. Therefore, we will evaluate the performance of the following traffic control systems when we vary the parameters for our traffic circle: + +1. Outer yield: Cars attempting to enter the circle yield to cars already in the circle at all times. +2. Inner yield: Cars already in the circle yield to cars attempting to enter the circle. +3. Simultaneous lights: The intersections between the circle and other roads are controlled by traffic lights that all turn green/red at the same time. The traffic lights apply only to the outermost lane of the traffic circle. +4. Synchronized lights: The intersections between the circle and other roads are controlled by traffic lights for which the time interval between a light turning green and the next light turning green is proportional to the distance between the two lights. The traffic lights apply only to the outermost lane of the traffic circle. + +With the exception of the traffic lights, these control systems are all similar to existing control systems. 
However, there is a crucial difference between our traffic-light system and standard traffic lights: Stopping only the outer lane of the traffic circle allows traffic in the inner lane to proceed undisturbed, improving throughput. This approach is a hybrid of normal traffic lights and metering lights for congested highways.

# Analysis

To analyze the effects of the control systems, we run each on circles with varying parameters and plot average throughput and average total time per car for each strategy (Figures 4-8).

![](images/6573e2e003fa86bcfc98df60f781c2bb4961e70f6b6e48e85dd0330707b71b99.jpg)

![](images/90a248f9127f00250ee978117bd97d0cddd311c6a229dde3e353225b59f7f243.jpg)

![](images/f903f0273726e953fa7be549f5c8775a5a4770a5135e3baa8bf04c21168a778e.jpg)

![](images/1c0dcb650d242787be6bfa36ed09f064c8db1f2e0f2881654e897163a54ffeec.jpg)

![](images/19a9d6f3d0e53f09f468a26a9ed28b15f84ec93dd6b2877f741e9e3bdbe3d877.jpg)

![](images/9d76b04a9101d12447bbdffdfc0203d675070b6a88fb4631e21bc3fdd141fe52.jpg)

![](images/6d3b5b4bd90065e09e49fd54327d446c786d8606345c280fda9f3b471bfe3320.jpg)

![](images/74fdb920cb1b53557c6e7469c4399c8654caf580b7b5047c3e8a2ffc3482bd32.jpg)

![](images/a2c36b8c926ad400ec48f91d1f97dcc8f121558c100120136c40a36fe8625eca.jpg)

![](images/87ea6d53b3d54ad7473b8cff6c1f66a8af439efadbb345c3ccdf20aca18e55fa.jpg)

Figures 4-8. Average throughput and average total time for each control system. Figure 4: 1 lane, length 100, 4 roads, rate variable. Figure 5: 3 lanes, length 100, 4 roads, rate variable. Figure 6: 5 lanes, length 100, 4 roads, rate variable. Figure 7: 3 lanes, rate 0.1, 4 roads, length variable. Figure 8: 3 lanes, rate 0.1, length 100, roads variable.

Our goal is to determine which parameters have the greatest effect on the performance of the control systems. From the plots, we make the following observations:

- In almost all the plots, the inner-yield system has almost no throughput: cars in the circle yield so often to incoming cars that they become gridlocked and cannot exit. The low average total time for this system reflects the fact that the only cars that exit do so before the circle becomes entirely gridlocked. As a result, we reject the inner-yield system.
- As the rate is varied in Figures 4-6, the throughputs of the outer-yield system and the traffic-light systems agree for small values of the rate. However, for each system, the throughput reaches a plateau beyond a certain value of the rate. At this point, the circle is saturated, meaning that it can no longer accept more cars from the incoming roads.
- The throughput value at which saturation occurs is much higher for the outer-yield system. However, the time required for each individual car to pass through the traffic circle under the outer-yield system is extremely high, almost an order of magnitude higher than under either traffic-light system.
- With either 3 or 5 lanes, the synchronized traffic-light system allows slightly greater throughput than the simultaneous traffic-light system. This might be explained by the fact that, with more lanes, cars can move in a more uniform manner, allowing them to keep pace with the synchronized lights and move through the circle more quickly.
- The number of lanes and the number of roads do not have a significant effect on either the throughput or the total time in the outer-yield system or in either of the traffic-light systems.
However, traffic lights may perform worse for some values of the distance between roads, perhaps due to synchronization issues.

In general, the outer-yield and traffic-light methods have an advantage over the inner-yield method, and the correct choice of control system is largely determined by the rate of incoming vehicles on each of the entry roads.

# Recommendations

Since the length of the circle and the number of incident roads do not significantly affect average throughput or average traversal time, we can restrict our attention to the rate of incoming cars and the number of lanes in the circle.

The rate of incoming cars accounts for a large part of the variation in performance, as can be seen in Figures 4-6.

- For values of this rate between 0 and 0.1 cars per time step, the performance of the outer-yield and traffic-light systems is identical, since in this range traffic is light and there is very little interaction between cars.
- As the rate increases to between 0.1 and 0.2 cars per time step, the traffic circle reaches its maximum throughput under the traffic-light systems, while the average total time stays fixed. Under the outer-yield system, however, the throughput continues to increase, but at the cost of a dramatic increase in average total time. In this range, choosing between the outer-yield system and the traffic-light systems involves a tradeoff between throughput (the quantity of cars passing through) and total time (the speed at which cars pass through).
- Finally, as the rate increases above 0.2 cars per time step, the circle becomes saturated with cars: the average total time for the outer-yield system increases dramatically, and gridlock sets in, with cars moving extremely slowly and waiting a very long time to pass through.
Under the traffic-light systems, however, a smaller number of cars can pass through, but the average total time required for them to pass remains similar to that with a much lower rate. Since the inner-yield system requires an extremely large total time in this range, the traffic-light systems are clearly superior for a rate of above 0.2 cars per time step. + +In each of these cases, the synchronized traffic lights allow for higher throughput than simultaneous traffic lights. + +We can now make the following recommendations: + +- For a low rate of entering cars, no traffic lights should be used. Instead, cars already in the circle should be given the right of way, and cars entering the circle should yield. +- As the rate of entering cars increases, synchronized traffic lights should be considered for the outermost lane (only), to ensure a reasonable traversal time for most cars. +- For large rates of entering cars, as may occur during rush hour, synchronized traffic lights should be used, to ensure that the traffic circle does not become congested. By preserving a reasonable flow of cars within the circle, synchronized traffic lights allow a slightly smaller number of cars to pass through the circle much more quickly, which is preferable to deadlock for all cars. + +For low and high rates, our recommendations agree with practice. An intersection with little traffic may have no traffic signals (alternatively, a traffic circle is installed explicitly in place of traffic lights). For highways, it is common to use metering lights during peak hours to regulate entry of vehicles, to ensure that cars already on the road can move at a reasonable speed. Our recommendations seem to be a mix of these two ideas applied to traffic circles. + +# Conclusions + +# Strengths + +Our simulator takes into account the behavior and outcomes of individual cars traveling through a traffic circle. 
By doing so, we are able to detect interactions at a microscopic level and to track the performance of a traffic control system for each individual car rather than only in aggregate. This allows our model to evaluate the effects of cars changing lanes and of cars entering and exiting from specific lanes. We validated the simulator against both an existing empirical model and the results of a simple model for the steady-state limit.

We can simulate the performance of a wide spectrum of traffic control systems on a range of different traffic circles. Our results allow us to isolate the rate of incoming cars and the number of lanes in the traffic circle as the two parameters key to determining a good control system.

We recommend either an outer-yield system or synchronized traffic lights to control the traffic circle, depending on the rate at which cars enter the circle.

# Weaknesses

While our simulator attempts to model the behavior of drivers fairly accurately, it cannot completely capture the dynamics of lane-changing and braking. Further, while using a discrete-time, discrete-space model for the simulator allows us to capture the local multiagent nature of individual drivers, it forces us to make simplifications about the continuity of car movement and about simultaneous actions.

In addition, our simulation does not take into account the fact that in an actual traffic circle, the inner lanes are shorter than the outer lanes.

We consider only traffic lights with simultaneous or synchronized light changes; it is computationally infeasible for us to consider a wider variety of switching approaches.

# Alternative Approaches and Future Work

We could evaluate the safety of a control system by counting the number of conflicting desired movements at the local level.
We could then compare systems by safety as well as by performance, and hence evaluate the claim that certain types of traffic circles are safer than intersections [Flannery and Datta 1996].

# References

Bartin, Bekir, Kaan Ozbay, Ozlem Yanmaz-Tuzel, and George List. 2006. Modeling and simulation of unconventional traffic circles. *Transportation Research Record: Journal of the Transportation Research Board* 1965: 201-209. http://trb.metapress.com/content/y77tu5l173658858/.

Bellomo, N., M. Delitala, V. Coscia, and F. Brezzi. 2002. On the mathematical theory of vehicular traffic flow I: Fluid dynamic and kinematic modelling. *Mathematical Models and Methods in Applied Sciences* 12: 1217-1247.

Brilon, Werner, and Mark Vandehey. 1998. Roundabouts—The state of the art in Germany. *Institute of Transportation Engineers (ITE) Journal* 68: 48-54.

Brilon, Werner, Ning Wu, and Lothar Bondzio. 1997. Unsignalized intersections in Germany—A state of the art. In *Proceedings of the Third International Symposium on Intersections without Traffic Signals*, Portland, Oregon, 61-70. http://www.ruhr-uni-bochum.de/verkehrswesen/vk/deutsch/Mitarbeiter/Brilon/Briwubo_2004_09_28.pdf.

Brilon, Werner, Ralph Koenig, and Rod J. Troutbeck. 1999. Useful estimation procedures for critical gaps. *Transportation Research Part A* 33: 161-186. http://www.sciencedirect.com/science/article/B6VG7-3VF9D7R-2/2/2b325096a3b448fd0c0a09952c091ff4.

Daganzo, Carlos F. 1995. Requiem for second-order fluid approximations of traffic flow. *Transportation Research Part B: Methodological* 29: 277-296. http://www.sciencedirect.com/science/article/B6V99-3YKKJ1D-F/2/f9d735df36de4048d1e62a0a20e844b0.

Flannery, Aimee, and Tapan K. Datta. 1996. Modern roundabouts and traffic crash experience in United States. *Transportation Research Record: Journal of the Transportation Research Board* 1553: 103-109. http://cat.inist.fr/?aModele=afficheN&cpsidt=2633749.

Fouladvand, M. Ebrahim, Zeinab Sadjadi, and M. Reza Shaebani. 2004. Characteristics of vehicular traffic flow at a roundabout. *Physical Review E* 70: 046132. http://prola.aps.org/abstract/PRE/v70/i4/e046132.

Helbing, Dirk. 1995. Improved fluid-dynamic model for vehicular traffic. *Physical Review E* 51: 3164-3169.

Kimber, R.M. 1980. The traffic capacity of roundabouts. TRRL Laboratory Report 942. Crowthorne, UK: Transport and Road Research Laboratory.

Klar, Axel, Reinhart D. Kühne, and Raimund Wegener. 1996. Mathematical models for vehicular traffic. *Surveys on Mathematics for Industry* 6: 215-239. http://citeseer.ist.psu.edu/old/518818.html.

Nagel, Kai, Dietrich E. Wolf, Peter Wagner, and Patrice Simon. 1998. Two-lane traffic rules for cellular automata: A systematic approach. *Physical Review E* 58: 1425-1437. http://prola.aps.org/pdf/PRE/v58/i2/p1425_1.

Polus, Abishai, and Sitvanit Shmueli. 1997. Analysis and evaluation of the capacity of roundabouts. *Transportation Research Record: Journal of the Transportation Research Board* 1572: 99-104. http://trb.metapress.com/content/p1j1777227757852.

Polus, Abishai, Sitvanit Shmueli Lazar, and Moshe Livneh. 2003. Critical gap as a function of waiting time in determining roundabout capacity. *Journal of Transportation Engineering* 129 (5) (September/October 2003): 504-509. http://cedb.asce.org/cgi/WWdisplay.cgi?0304088.

Schreckenberg, M., A. Schadschneider, K. Nagel, and N. Ito. 1995. Discrete stochastic models for traffic flow. *Physical Review E* 51: 2939-2949.

Zein, Sany R., Erica Geddes, Suzanne Hemsing, and Mavis Johnson. 1997. Safety benefits of traffic calming. *Transportation Research Record: Journal of the Transportation Research Board* 1578: 3-10. http://trb.metapress.com/content/875315017kux6689/.

![](images/6275585bf3906c648e760c89c0e2a867f84e34aa814c96cbdb939459e0bf68d5.jpg)
Zhou Fan, Christopher Chang, and Yi Sun.
# One Ring to Rule Them All: The Optimization of Traffic Circles

Aaron Abromowitz

Andrea Levy

Russell Melick

Harvey Mudd College

Claremont, CA

Advisor: Susan E. Martonosi

# Summary

Our goal is a model that can account for the dynamics of vehicles in a traffic circle. We focus mainly on the rate of entry into the circle to determine the best way to regulate traffic. We assume that vehicles circulate in a single lane and that only incoming traffic can be regulated (that is, incoming traffic never has the right-of-way).

For our model, the adjustable parameters are the rate of entry into the queue, the rate of entry into the circle (service rate), the maximum capacity of the circle, and the rate of departure from the circle (departure rate). We use a compartment model with the queue and the traffic circle as compartments. Vehicles first enter the queue from the outside world, then enter the traffic circle from the queue, and lastly exit the traffic circle to the outside world. We model both the service rate and the departure rate as dependent on the number of vehicles inside the traffic circle.

In addition, we run computer simulations to provide a visual representation of what happens in a traffic circle under different conditions. These allow us to examine different cases, such as unequal traffic flow coming from the different queues or some intersections having a higher probability of being a vehicle destination than others. The simulation also implements several lifelike effects, such as vehicles accelerating on an empty road but decelerating when another vehicle is in front of them.

In many cases, we find that a high service rate is the optimal way to maintain traffic flow, signifying that a yield sign for incoming traffic is most effective. However, when the circle becomes more heavily trafficked,

![](images/022ef89ad22680d0fd11038d208c71c2f35b588977c3035d859b2206c80145b2.jpg)
Figure 1. A simple traffic circle.
Traffic circles may have more than one lane and may have a different number of intersections.

a lower service rate better accommodates traffic, indicating that a traffic light should be used. Thus, a light should be installed in most circle implementations, with variable timing depending on the expected amount of traffic.

The main advantage of our approach is that the model is simple and allows us to see clearly the dynamics of the system. Also, the computer simulations provide more in-depth information about traffic flow under conditions that the model could not easily show, as well as enabling visual observation of the traffic. Some disadvantages to our approach are that we do not analyze the effects of multiple lanes or of stoplights to control the flow of traffic within the circle. In addition, we have no way of analyzing singular situations, such as vehicles that drive faster or slower than the rest of the traffic, or pedestrians.

# Introduction

Traffic circles, often called rotaries, are used to control vehicle flow through an intersection. Depending on the goal, a traffic circle may take different forms; Figure 1 shows a simple model. A circle can have one or more lanes; vehicles that enter a traffic circle can be met by a stop sign, a traffic light, or a yield sign; a circle can have a large or small radius; and a circle can face roads carrying different amounts of traffic. These features affect the cost of building the circle, the congestion that a vehicle confronts as it circles, the travel time of a vehicle in the circle, and the size of the queue of vehicles waiting to enter. Each of these variables could be a metric for evaluating the efficacy of a traffic circle.

Our goal is to determine how best to control traffic entering, exiting, and traversing a traffic circle. We take as given the traffic circle capacity, the arrival and departure rates at each of the roads, and the initial number of vehicles circulating in the rotary.
Our metric is the queue length, or buildup, at each of the entering roads. We try to minimize the queue length by allowing the rate of entry from the queue into the circle to vary. For a vehicle to traverse the rotary efficiently, its time spent in the queue should be minimized. + +We make the following assumptions: + +- We assume a certain time of day, so that the parameters are constant. +- There is a single lane of circulating traffic (all moving in the same direction). +- Nothing impedes the exit of traffic from the rotary. +- There are no singularities, such as pedestrians trying to cross. +- The circulating speed is constant (i.e., a vehicle does not accelerate or decelerate to enter or exit the rotary). +- Any traffic light in place regulates only traffic coming into the circle. + +# The Models + +# A Simplified Model + +We model the system as being continuous; our approach can be thought of as modeling the vehicle mass dynamics of a traffic circle. The simplest model assumes that the rate of arrival to the back of the entering queue and the rate of departure from the queue into the traffic circle are independent of time. Thus, the rate of change in the length of the queue is + +$$ +\frac {d Q _ {i}}{d t} = a _ {i} - s _ {i}, \tag {1} +$$ + +where $Q_{i}$ is the number of cars in the queue coming in from the $i$ th road, $a_{i}$ is the rate of arrival of vehicles into the $i$ th queue, and $s_{i}$ is the rate of removal, also called the service rate, from the $i$ th queue into the traffic circle. + +We introduce the parameter $d_{i}$ , the rate at which vehicles exit the traffic circle. We let $C$ be the number of vehicles traveling in the circle. Then we model the change in traffic in the rotary by the difference between the influx and outflux of vehicles, where the outflux of vehicles depends on the amount of traffic in the rotary: + +$$ +\frac {d C}{d t} = \sum s _ {i} - C \sum d _ {i}. 
\tag {2} +$$ + +# An Intermediate Model + +The model above simplifies the dynamics of a traffic circle. The most glaring simplifications are that there is no way to indicate that the circle has a maximum capacity and that the flow rate into the traffic circle $s_i$ does not depend on the amount of traffic already circulating. These are both corrected by proposing that the traffic circle has a maximum capacity $C_{\mathrm{max}}$ . As the number of vehicles circling approaches this maximum capacity, it should become more difficult for another vehicle to merge into the circle. At the extreme, when the traffic circle is operating at capacity, no more vehicles should be able to be added. Now, the $s_i$ in the previous model can be represented logistically as + +$$ +s _ {i} = r _ {i} \left(1 - \frac {C}{C _ {\mathrm {m a x}}}\right), +$$ + +where $r_i$ is how fast vehicles would join the circle if there were no traffic slowing them down. Thus, the equation governing the rate at which the $i$ th queue length changes becomes + +$$ +\frac {d Q _ {i}}{d t} = a _ {i} - r _ {i} \left(1 - \frac {C}{C _ {\max }}\right), \tag {3} +$$ + +and the equation for the number of vehicles in the traffic circle becomes + +$$ +\frac {d C}{d t} = \sum r _ {i} \left(1 - \frac {C}{C _ {\max }}\right) - \sum d _ {i} C. \tag {4} +$$ + +# A Congestion Model + +The previous two models still fail to take into account congestion, which alters the circulation speed, which in turn affects the departure rate $d_{i}$ of vehicles from the circle. Equation (3) still holds, but we need to vary $d_{i}$ . The vehicles will travel faster if there is no congestion, so they will be able to depart at their fastest rate $d_{i,\max}$ . When the circle is operating at maximum capacity, the departure rate will decrease to be $d_{i,\min}$ . 
Thus, the number of vehicles present in the circle is affected positively in the same manner as in (4), but the lessening factor changes to the weighted average of the $d_{i,\max}$ and $d_{i,\min}$ : + +$$ +\begin{array}{l} \frac {d C}{d t} = \sum r _ {i} \left(1 - \frac {C}{C _ {\mathrm {m a x}}}\right) \\ - C \left[ \sum d _ {i, \mathrm {m a x}} \left(1 - \frac {C}{C _ {\mathrm {m a x}}}\right) + \sum d _ {i, \mathrm {m i n}} \left(\frac {C}{C _ {\mathrm {m a x}}}\right) \right]. \quad (5) \\ \end{array} +$$ + +# Extending the Model Using Computer Simulation + +We create a computer simulation in Matlab to account for variables that would be too complicated to use in the mathematical model. The mathematical model does not address the vehicles' speeds while inside the traffic circle, so the computer simulation focuses mostly on areas related to vehicle speed: + +- enabling drivers to accelerate to fill gaps in the traffic (with a maximum speed), +- forcing drivers to decelerate to maintain distance between vehicles, +- requiring that drivers accelerate and decelerate when entering and exiting the circle, +- giving probabilistic weights to the different directions of travel, +- keeping track of time spent within the traffic circle for each vehicle, and +- giving each intersection a different vehicle introduction rate. + +Figure 2 on p. 250 shows an outline of the program flow and design. + +# Simulation Assumptions + +This model makes several key assumptions about the vehicles and the circle: + +- All vehicles are the same size, have the same top speed, and accelerate and decelerate at the same rate. +- The circle has four intersections and a single lane of traffic. +- All drivers have the same spatial tolerance. +- There are no pedestrians trying to cross the circle. + +# Limitations + +The assumption of one lane is not a key factor, because we assume that vehicles travel at the same speed. 
Hence, we do not need to put the slow vehicles in one lane and vehicles passing them in another lane. However, in reality there will indeed be slower vehicles, and vehicles decelerating to exit would offer opportunities for other vehicles to use a different lane to maintain a faster speed. Additionally, we cannot let emergency vehicles through the circle if there is only one lane; for a more detailed discussion of emergency vehicles and traffic circles, see Mundell [n.d.].

By not allowing control devices inside the circle, we restrict possible configurations. We also limit the effectiveness of our stoplight model; it prevents vehicles from entering the circle but does not inhibit the movement of vehicles within the circle.

![](images/4f31239b381a9c793e5eda5133ffff123cb204950440d1b2f5331130fd908ba8.jpg)
Figure 2. Program flow. Each intersection is modeled as a queue of vehicles with a traffic control device. Vehicles are added to the queue at a constant rate. For a vehicle to leave the queue and enter the traffic circle, the area in the circle must be clear of other vehicles. Additionally, if the queue has a traffic light, the light must be active.

Since we do not allow for different vehicle properties (size, acceleration, top speed, etc.), we cannot model the effects of large trucks, motorcycles, or other nonstandard vehicles (such as large and unwieldy emergency vehicles) on the flow of traffic.

Giving all of the vehicles the same acceleration and top speed, along with forcing all drivers to have the same spatial tolerance, prevents modeling aggressive drivers and their interaction with timid ones. Additionally, since cars in the simulation decelerate before exiting, even if they are already moving slowly, we generate a small proportion of false traffic backups.

Limiting the size and number of intersections of the circle does not really limit our ability to model real-world traffic circles.
Since we are mostly looking at driver behavior with the computer simulation, we should see the same behaviors as we scale up the circle and its corresponding traffic. + +# Analyzing the Models + +# The Simplest Model + +In all of the above models, the rate $r_i$ is indicative of the regulation imposed at the $i$ th intersection. A near-zero $r_i$ indicates that a traffic light is in use; a larger $r_i$ indicates that a yield sign, regulating only the incoming traffic, is in place. + +For the simplest model, we can use (1) and (2) to find explicit formulae for the queue length and the number of vehicles in the rotary by integrating with respect to time: + +$$ +Q _ {i} = [ a _ {i} - s _ {i} ] t + Q _ {i 0}, \qquad C = \frac {\sum s _ {i}}{\sum d _ {i}} + \left(C _ {0} - \frac {\sum s _ {i}}{\sum d _ {i}}\right) e ^ {- \sum d _ {i} t}. +$$ + +Therefore, given the inputs of the system, we can predict the queue length. To minimize the queue length, we solve (1) for when the queue length is decreasing $(dQ_{i} / dt < 0)$ and find that the $s_i$ term should be maximized. + +# Intermediate Model + +For the model with a carrying capacity, again we find explicit formulae for the queue length and the number of vehicles in the rotary: + +$$ +Q _ {i} = \left[ a _ {i} - r _ {i} \left(1 - \frac {C}{C _ {\mathrm {m a x}}}\right) \right] t + Q _ {i 0}, +$$ + +$$ +C = \frac {\sum r _ {i}}{\frac {\sum r _ {i}}{C _ {\mathrm {m a x}}} + \sum d _ {i}} + \left(C _ {0} - \frac {\sum r _ {i}}{\frac {\sum r _ {i}}{C _ {\mathrm {m a x}}} + \sum d _ {i}}\right) e ^ {- \left(\frac {\sum r _ {i}}{C _ {\mathrm {m a x}}} + \sum d _ {i}\right) t}. +$$ + +We can also solve for where (3) is less than zero to find the service rates for which the queue lengths are decreasing: + +$$ +r _ {i} > \frac {a _ {i}}{1 - \frac {C}{C _ {\mathrm {m a x}}}}. +$$ + +# Congestion Model + +In modeling congestion, the model is too complex to intuit what conditions would minimize the queue length. 
The differential equation (5) is quadratic in $C$:

$$
\frac{dC}{dt} = AC^{2} + BC + D,
$$

![](images/05725b229d45d8fc294dc4df5890f58e13339efd3089fd6d37e649b783d1391d.jpg)
Figure 3. The relationship between $dC/dt$ and $C$ for the congestion model using sample parameter values $r_1 = r_2 = r_3 = r_4 = 60$, $d_{1,\max} = d_{2,\max} = d_{3,\max} = d_{4,\max} = 2$, $d_{1,\min} = d_{2,\min} = d_{3,\min} = d_{4,\min} = 0.5$, and $C_{\max} = 30$.

where

$$
A = \frac{\sum d_{i,\max}}{C_{\max}} - \frac{\sum d_{i,\min}}{C_{\max}}, \quad B = -\left(\frac{\sum r_{i}}{C_{\max}} + \sum d_{i,\max}\right), \quad D = \sum r_{i}.
$$

Since $\sum d_{i,\max} > \sum d_{i,\min}$, it will always be the case that $A > 0$. In addition, $B < 0$ and $D > 0$. This means that the curve for $dC/dt$ is a concave-up quadratic curve with a positive $y$-intercept and a global minimum at some $C > 0$. Furthermore, for $C = C_{\max}$, we have

$$
\frac{dC}{dt} = -C_{\max} \sum d_{i,\min},
$$

which is always negative for $d_{i,\min} > 0$. Thus, the global minimum for the curve must be in the fourth quadrant. Figure 3 shows an example of such a curve, using sample parameters.

We notice from Figure 3 that there are two equilibrium points for the differential equation:

$C = \frac{-B - \sqrt{B^2 - 4AD}}{2A}$ is a stable equilibrium point, and

$C = \frac{-B + \sqrt{B^2 - 4AD}}{2A}$ is an unstable equilibrium point.

Also, since $dC/dt < 0$ at $C = C_{\mathrm{max}}$, the number of vehicles will eventually decrease to an equilibrium value $C_{\mathrm{limit}} < C_{\mathrm{max}}$.

Since our metric for how well a traffic circle operates depends on how many vehicles are in the queues, we would like the queue flow $(a_{i} - s_{i})$ to be as small as possible. In other words, we would like $s_i$ to be as large as possible.
In the congestion model, the queue flow is given by (3).

Without loss of generality, we analyze queue 1. The equations for each queue differ only by their $a_{i}$ and $r_i$, and we keep these the same for each queue in the simulations. Since the only changing variable in (3) is $C$, when $C = C_{\mathrm{limit}}$ the queue length $Q_{1}$ will also be at its equilibrium.

Using this fact, we can evaluate whether to use a traffic light or not and how long the light should be red. We compare different values for the service rate constant $r_1$ and the value of $dQ_1 / dt$ at $C = C_{\mathrm{limit}}$. The results can be seen in Figure 4, which shows that when $r_1$ increases, $dQ_1 / dt$ decreases.

![](images/302631f13e595c58686328b664ec66cc3a6f1b473965e9fff0b32f1dd2c8a97e.jpg)
Figure 4. The relationship between $r_1$ and $dQ_1/dt$ for the congestion model with $C = C_{\mathrm{limit}}$. The parameter values are $d_{1,\max} = d_{2,\max} = d_{3,\max} = d_{4,\max} = 2$, $d_{1,\min} = d_{2,\min} = d_{3,\min} = d_{4,\min} = 0.5$, $C_{\mathrm{max}} = 30$, and $r_1$ varies from 1 to 60.

A real-life complication is congestion of the traffic circle itself. Decreasing $d_{1,\min}$ would cause vehicles to exit the circle more slowly when there is more congestion. Using lower departure rates to approximate slower vehicle speeds inside the circle, we can examine what happens for decreasing values of $d_{1,\min}$. The results are shown in Figure 5. For values of $d_{1,\min} < 0.5$, the smallest value for $dQ_1 / dt$ is not at $r_1 = 60$ but at a smaller value.

Another situation that the congestion model can approximate is additional lanes. A crude approximation is that each lane adds $C_{\mathrm{max}}$ to the capacity. Figure 6 shows the results of plotting $r_1$ against $dQ_1 / dt$ for values of $C_{\mathrm{max}}$ corresponding to different numbers of lanes. As in the previous plots, the correlation is negative.
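The equilibrium analysis above is easy to check numerically. The following sketch (in Python; the authors' own simulation is in Matlab) uses the sample parameter values from the Figure 3 caption, for which the stable equilibrium works out to $C_{\mathrm{limit}} = 20$ and the unstable one to $C = 60$:

```python
import math

# Sample parameter values from the Figure 3 caption:
# four arms with r_i = 60, d_i,max = 2, d_i,min = 0.5, and C_max = 30.
R, DMAX, DMIN, CMAX = 240.0, 8.0, 2.0, 30.0   # sums over the four arms

def dCdt(C):
    """Right-hand side of the congestion model (5)."""
    return (R * (1 - C / CMAX)
            - C * (DMAX * (1 - C / CMAX) + DMIN * (C / CMAX)))

# Quadratic coefficients dC/dt = A C^2 + B C + D, as in the text.
A = (DMAX - DMIN) / CMAX        # 0.2  (> 0)
B = -(R / CMAX + DMAX)          # -16  (< 0)
D = R                           # 240  (> 0)

disc = math.sqrt(B * B - 4 * A * D)
C_limit = (-B - disc) / (2 * A)       # stable equilibrium
C_unstable = (-B + disc) / (2 * A)    # unstable equilibrium
print(round(C_limit, 6), round(C_unstable, 6))   # 20.0 60.0

# Forward-Euler integration from an empty circle settles at C_limit.
C, dt = 0.0, 0.001
for _ in range(20_000):               # 20 time units
    C += dt * dCdt(C)
print(round(C, 3))                    # 20.0
```

Since the unstable root (60) lies above $C_{\max} = 30$, every feasible starting state is attracted to $C_{\mathrm{limit}}$, consistent with the claim that the number of vehicles settles at an equilibrium below capacity.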
# Simulation Results

An interesting effect that we see in our simulation is the buildup of vehicles in front of each exit. As vehicles decelerate to exit, they force vehicles behind them to decelerate to maintain a safe distance. This buildup creates a longer queue at the intersection before the exit, since the buildup prevents those vehicles from entering the circle. In Figure 7, we see a large number of vehicles in the fourth queue and a buildup in the fourth quadrant.

![](images/e6c4e78d20556ad31f2694fdd2eb5786802c20f6ac4cfa01d5688d58915147ed.jpg)
Figure 5. The relationship between $r_1$ and $dQ_1 / dt$ for the congestion model with $C = C_{\mathrm{limit}}$, with parameter values $d_{1,\max} = d_{2,\max} = d_{3,\max} = d_{4,\max} = 2$, and $C_{\mathrm{max}} = 30$. The values of $r_1$ range from 1 to 60 for different values of $d_{1,\min}$.

![](images/a1ff69f7bc5273db6177ff7392f5638f55397270bc3dd939d01f263cb5d23d00.jpg)
Figure 6. The relationship between $r_1$ and $dQ_1/dt$ for the congestion model with $C = C_{\mathrm{limit}}$. The parameter values are $d_{1,\max} = d_{2,\max} = d_{3,\max} = d_{4,\max} = 2$ and $d_{1,\min} = d_{2,\min} = d_{3,\min} = d_{4,\min} = 0.5$, with $C_{\mathrm{max}} = 30$ per lane and $r_1$ varying from 1 to 60.

![](images/1323608e886f9710e833a148b8011273ea20202be5e9033cc29b9ea9dd6a4b43.jpg)
Figure 7. Vehicles build up before the first intersection as vehicles slow down to exit. Additionally, the queue at the fourth intersection is quite long, because vehicles cannot enter the traffic circle.

![](images/1076fa594fca97b57ec7e9b1469d52c47fc88106baa5c3a20bf8a67d7ff3e46a.jpg)

Another interesting element of real life that the simulation shows is the bunching and expanding effect that vehicles experience. Because vehicles can decelerate more quickly than they accelerate, the vehicles bunch up behind a slow-moving vehicle, then expand again as that vehicle accelerates into the free space ahead.
Figure 8 shows an example of this compaction. + +![](images/26de99ab3638fd3c06add04cb0fc4a9dccf113c60f70de16a8ffecd87b6f1c05.jpg) +Figure 8. The arrow in the second quadrant points out a real-life effect, bunching, which happens because drivers decelerate faster than they accelerate. + +We test several rotary and vehicle setups to explore optimal circle design: + +- A single intersection with high arrival and service rates creates a large traffic buildup in the quadrant immediately following it, even though the vehicles have random destinations. Figure 9 shows the buildup in quadrant 1 when the first intersection (at angle 0) has a high arrival and service rate. However, queue 1 is not appreciably longer than the others. + +![](images/03730fad8285edbf36045ca4eaef18f4554b014b4fb6aea59af9e189a2d6709c.jpg) +Figure 9. The first intersection has both high arrival and service rates, which creates a traffic buildup before the next intersection. However, the queue for the first intersection does not increase, since there is limited traffic coming from the intersection behind it. + +![](images/967d08a3dad110572ebf6f8f678825e4c44408cda9ec0e5881d5bc5a9682d29c.jpg) + +- One intersection having a much higher chance of being a destination creates the expected buildup in front of the likely exit (Figure 10). However, it also creates a substantial buildup in front of the previous exit and a severe increase in that intersection's queue as vehicles are prevented from entering the circle. The buildup in the adjacent road must be taken into account when constructing a traffic circle at a high-volume intersection. +- If one intersection has a high service rate and the standard arrival rate, and another intersection has a high arrival rate and standard service rate, the traffic distribution is mostly random, with a slight tendency towards backups in the quadrant following the intersection with high service rate. 
We expect this result, since the intersection with high service rate can add only as many vehicles as are in its queue, which is limited by its standard arrival rate. Also, the intersection with high arrival rate and standard service rate has a much longer queue than the other intersections, entirely as expected.

# Conclusion

We model the dynamics of a traffic circle to determine how best to regulate traffic into the circle. As shown in Figure 6 on p. 256, increased capacity decreases the queue flow, which leads to a decrease in queue length. This result indicates that a multiple-lane traffic circle might accommodate more cars by decreasing the length of the queue in which they wait. However, as shown in the same figure, the marginal utility of increasing the maximum capacity does decrease. When applying a cost function (with cost proportional to the space that the circle occupies), there would exist an optimum size of the traffic circle.

![](images/cadb2d5e8f35826782a7d4b14d255b4edd01f19862c389ec7a9cd54b75da6376.jpg)
Figure 10. The first intersection has a higher probability of being chosen as a destination. This creates a buildup in front of that intersection and a smaller buildup in front of the previous intersection. It also creates a very large increase in the queue of the previous intersection, since those vehicles cannot enter the full circle.

![](images/6112e482523fbd5bcfbb1830066f6947a0f0a621b6121e4d5e2b3f2093397530.jpg)

Although the simpler models indicate that letting vehicles into the rotary as fast as possible would be optimum, analysis of the congestion model shows that if $d_{i,\min}$ is sufficiently small, then the highest service rate is no longer optimal. The implication of this result is that traffic lights could make travel through the rotary more efficient. When many vehicles use the traffic circle, such as during the morning and evening commutes, there could be enough vehicles so that $C_{\mathrm{limit}}$ is reached.
In this case, using traffic lights would help ease congestion. However, the duration of the red light should be adjusted according to the $d_{i,\min}$ for the specific traffic circle. + +In addition to the mathematical models, we create a computer simulation that tracks individual vehicles' progress through the traffic circle, and their effect on other vehicles. Our simulation shows several traffic effects that can be observed in real life, namely a buildup of vehicles in front of the exits and vehicles bunching together and expanding apart as drivers brake and accelerate. We also test several traffic circle configurations. + +# Recommendations + +Based on both our mathematical and computer models, we recommend: + +- Yield signs should be the standard traffic control device. Most of the time, letting vehicles enter the circle as quickly as possible is optimal. +- For a high-traffic rotary, traffic lights should be used. With high traffic, slowing the rate of entry into the circle helps prevent congestion. + +- If any single road has high traffic, its vehicles should be given preference in entering the circle. Doing so helps prevent a large queue. +- Introduce separate exit lanes. Traffic can build up in front of each intersection as cars exit, so a separate exit lane could help keep traffic moving. + +# References + +Mundell, Jim. n.d. Constructing and maintaining traffic calming devices. Seattle Department of Transportation. http://www.seattle.gov/Transportation/docs/TORONTO4.pdf. + +![](images/ca23734681ee7d87b8fab6ffa901bb124a4d32db9d6fc6306be2c9efee42c972.jpg) + +Aaron Abromowitz, Andrea Levy, and Russell Melick. + +# Three Steps to Make the Traffic Circle Go Round + +Zeyuan Allen Zhu + +Tianyi Mao + +Yichen Huang + +Tsinghua University + +Beijing, China + +Advisor: Jun Ye + +# Summary + +With growing traffic, control devices at traffic circles are needed: signals, stop/yield signs, and orientation signs—a special sign that we designed. 
We create two models—one macroscopic, one microscopic—to simulate transport at traffic circles. The first models the problem as a Markov chain, and the second simulates traffic by individual vehicles—a "cellular-automata-like" model.

We introduce a multi-objective function to evaluate the control. We combine saturated capacity, average delay, equity degree, accident rate, and device cost. We analyze how best to control the traffic circle, in terms of:

- placement of basic devices, such as lights and signs;
- installation of orientation signs, to lead vehicles into the proper lanes; and
- self-adaptivity, to allow the traffic to auto-adjust according to different traffic demands.

We examine the 6-arm-3-lane Sheriffhall Roundabout in Scotland and give detailed suggestions for control of its traffic: We assign lights with a 68-s period, and we offer a sample orientation sign.

We also test smaller and larger dummy circles to verify the strength and sensitivity of our model, as well as emergency cases to judge its flexibility.

# Introduction

We develop two models to simulate traffic flow in a traffic circle. The macroscopic model uses a Markov process to move vehicles between junctions, while the microscopic model concentrates on the behavior of each vehicle, using a modified cellular-automata algorithm. The outcomes of these two approaches show great consistency when applied to a real scenario in Scotland.

We characterize a "good" traffic control method in terms of five main objectives and combine them with an overall measure.

We employ a genetic algorithm to generate the final control method, in particular to determine the green-light period. We also consider the ability to deal with unexpected events such as accidents or breakdowns.

# General Assumptions

- The geometric design of the traffic circle cannot be changed.
- The traffic circle is a standard one (at grade) with all lanes on the ground, that is, no grade separation structure.
- The flow of incoming vehicles is known.
- People drive on the left (since the example later is from the UK).
- Pedestrians are ignored.
- Motorcycles move freely even in a traffic jam.

# Terminology and Basic Analysis

# Terminology

- Junction: an intersection where vehicles flow in and out of the traffic circle.
- Lane: part of the road for the movement of a single line of vehicles. The number of lanes directly affects flow through the circle by limiting entrance and exit of vehicles. However, since both the conventional design and real-time photos suggest that vehicles exit easily, our model ignores restrictions on outward flow.
- $l_0$ : the number of lanes in the traffic circle.
- Section: part of the traffic circle between two adjacent arms.
- Yield/stop signs: A yield sign asks drivers to slow down and give right of way; a stop sign asks drivers to come to a full stop before merging.
- Orientation sign: a sign indicating the lane for vehicles to take according to their destination.
- Traffic light: a signaling device using different colors of light to indicate when to stop or move. A traffic light with direction arrows performs much better [Hubacher and Allenbach 2002], so we are inclined to use such a light. However, compared to yield/stop signs, traffic lights slow vehicle movement. Moreover, even at a remote motorway traffic circle with few pedestrians, a traffic-light malfunction will probably lead to an accident [Picken 2008].
- Cycle period: the time in which a traffic light passes through all three colors exactly once. An optimal cycle period is critical whenever traffic lights are employed. The method we use is called the Webster equation [Garber and Hoel 2002]. The value that we use in our model is calculated as $68\mathrm{s}$ .
- Green-light period: the time that a traffic light stays green in one cycle.
- Timestamps: a sequence denoting the start/end times of the red/green lights.
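The Webster equation cited above (from Garber and Hoel) is commonly written $C_0 = (1.5L + 5)/(1 - Y)$, where $L$ is the total lost time per cycle and $Y$ is the sum of the critical flow ratios. A minimal sketch; the lost-time and flow-ratio values below are illustrative assumptions, not the paper's data (which yield the 68-s period):

```python
def webster_cycle(lost_time_s, critical_flow_ratios):
    """Webster's optimal cycle length C0 = (1.5 L + 5) / (1 - Y):
    L is the total lost time per cycle (s) and Y is the sum of the
    critical flow ratios (demand flow / saturation flow) per phase."""
    Y = sum(critical_flow_ratios)
    if Y >= 1.0:
        raise ValueError("oversaturated: Y must be < 1")
    return (1.5 * lost_time_s + 5.0) / (1.0 - Y)

# Illustrative inputs (not the paper's data): two phases, 10 s lost time.
print(round(webster_cycle(10.0, [0.35, 0.35]), 1))   # 66.7
```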
# A Glance at Sheriffhall Roundabout

![](images/1a0825022f0f426646fce13ecc1e0397a2157b58a698c4f8f24770b1c7f4082e.jpg)
Figure 1. The Sheriffhall Roundabout. Source: Google Earth.

One characteristic of this traffic circle (Figure 1) is that the arms in the southwest (6) and northeast (3) directions have larger flow than the others. The arms in the north (2) and south (5) directions have two lanes, while the other four arms and the circle have three lanes. We model the traffic circle as a ring with an inner radius of $38.9\mathrm{m}$ and an outer radius of $50.4\mathrm{m}$ .

We use the origin-destination flow (Table 1) given by Yang et al. [Maher 2008]. Since the traffic demand is far from saturated, we experiment on different scalings of this inflow matrix, specifically, multiplying it by 1.2, 1.4, 1.6, and 1.8.

Table 1. Origin-destination flows (vehicles/hr) at Sheriffhall Roundabout. Source: Wang et al. in Maher [2008].
| From\To | 1 | 2 | 3 | 4 | 5 | 6 |
|---------|-----|-----|-----|-----|----|------|
| 1 | - | 0 | 0 | 188 | 77 | 67 |
| 2 | 0 | - | 83 | 74 | 1 | 95 |
| 3 | 2 | 0 | - | 119 | 79 | 1007 |
| 4 | 338 | 129 | 63 | - | 0 | 208 |
| 5 | 116 | 124 | 142 | 0 | - | 16 |
| 6 | 901 | 729 | 882 | 361 | 0 | - |
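The scaling experiments described above (and the per-origin-destination Poisson arrivals used later in Model II) can be sketched as follows. The 3-arm matrix here is a hypothetical stand-in, not the Table 1 data:

```python
import math
import random

# Hypothetical 3-arm origin-destination matrix (vehicles/hr), NOT the
# Table 1 data; the diagonal is zero since no vehicle returns to its arm.
od = [[0, 120, 300],
      [80, 0, 40],
      [250, 60, 0]]

def scale(matrix, k):
    """Scale every O-D flow by a factor k (the paper tries 1.2 ... 1.8)."""
    return [[k * x for x in row] for row in matrix]

def sample_arrivals(matrix, dt_hr, rng):
    """One time step of Poisson arrivals per O-D pair (mean flow * dt),
    as in the microscopic Model II."""
    def poisson(lam):
        # Knuth's multiplication method; adequate for the small means here.
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1
    return [[poisson(x * dt_hr) for x in row] for row in matrix]

rng = random.Random(42)
heavier = scale(od, 1.2)                                # 20% more demand
arrivals = sample_arrivals(heavier, 1.0 / 3600.0, rng)  # one 1-s step
```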
# Simulation Models

# Model I: The Macroscopic Simulation

Usually, we do not know where each vehicle enters and exits the circle; we know only the numbers of vehicles coming in and out of each arm, so we adopt a macroscopic simulation.

We first combine the lanes in the sections and arms together and regard them as one-lane roads. We then explain how the multilane simulation works.

# Assumptions

- Vehicles in the same section of the circle are distributed uniformly in the section.
- The arrival rate at each arm is constant in the period that we simulate.
- For simplicity, we consider an ideal round traffic circle (Figure 2). The macroscopic simulation itself does not depend on the shape of the circle.

# Sections and Arms

We divide the traffic area into sections and take vehicles in the same section as a whole. We label the sections and the arms as in Figure 2. Associated with section $i$ are the quantities:

1. Number $\mathrm{num}_i^t$ of vehicles in the section at time $t$ .
2. Number $\mathrm{arm}_i^t$ of vehicles waiting to enter through one arm at time $t$ .
3. The maximum number $\mathrm{cap}_i^t$ of vehicles that can enter the traffic section through one arm per unit time.

![](images/7b3ff7e32821b30418b25fbe855f33daa10994fb40a61302f0ccf9783aa6c094.jpg)
Figure 2. Sample traffic circle.

# A Markov Process

The traffic state at time $t + 1$ depends only on the traffic state at time $t$ , so traffic is a Markov process. To describe the state of the whole system, only the quantities $\mathbf{num}_i^t$ and $\mathbf{arm}_i^t$ are needed. To implement the simulation, we must determine $\mathbf{num}_i^{t + 1}$ and $\mathbf{arm}_i^{t + 1}$ , for $i = 1,2,\ldots ,n$ .

In principle, we could calculate the transition probability matrix, but not feasibly in our problem: for a traffic circle with four arms/sections, each holding up to 10 vehicles, the number of traffic states is $10^{8}$ .
Considering this sobering fact, we use the expectations $\overline{\mathrm{num}}_i^t$ and $\overline{\mathrm{arm}}_i^t$ , instead of the actual distribution of cars, to denote a state.

# The Simulation Process

![](images/5b4f6a73df44a044a2a85a32c16c7aaf693a0feda1fe5a5ce6657f2cd343b02e.jpg)
Figure 3. Flows at a junction.

- $\overline{\mathrm{num}}_i^t \times \mathrm{out}_i^t$ vehicles leave the circle from section $i$ . The ratio $\mathrm{out}_i^t$ drops when $\overline{\mathrm{num}}_i^t$ approaches its capacity.
- To deal with the junction, there are two streams $\overline{\mathrm{num}}_i^t\cdot (1 - \mathrm{out}_i)$ and $\mathrm{cap}_i^t$ trying to flow into the next section. If there is a traffic light, only one of them is allowed. If a stop/yield sign is used (at the arm side, for example), then only a small fraction of $\mathrm{cap}_i^t$ can flow in. This fraction is denoted by the disobey rate $\alpha_{\mathrm{stop}}$ or $\alpha_{\mathrm{yield}}$ .
- An inflow of $\mathbf{in}_i$ newly arrived vehicles runs into arm $i$ .

# Multilane Traffic Circle

We assume that vehicles do not change lanes within arms or sections, which means that they can change lanes only at junctions.

To treat lanes differently, we need to know what proportion of vehicles passes through each lane. At each junction, the outflow for a given lane is distributed into successive lanes according to their popularity.

![](images/b10d8be87ab7612517740be720566bb97ed668c6bd80bfe096ffe924b255b178.jpg)
Figure 4. A two-lane circle divided into lanes. Each arc on the right denotes a single lane.

# Model II: The Microscopic Simulation

Partially inspired by sequential cellular automata, we adopt a microscopic model. The traffic circle is divided into $l_{0}$ lanes. Vehicles are points with polar coordinates but with discrete radius values.
We model the behavior of each individual vehicle, with the help of some general principles:

- Traffic coming in: As described in Table 1, the number of vehicles per hour is given in a matrix $\left(a_{i,j}\right)_{n\times n}$ . We use a Poisson distribution with mean $a_{i,j} / T$ to describe the incoming vehicles from arm $i$ to arm $j$ .
- Lane choosing and changing: For a specific vehicle from arm $i$ to arm $j$ , the driver has a desired ideal lane to be in. The hidden principle is [SetupWeasel 1999]: The more sections the vehicle has to pass before its exit, the more likely the driver will wish to take an inner lane, both in the arm and in the circle. We adopt this rule.
- Vehicle speed: We define a maximum speed and a maximum acceleration for vehicles, and record the speed of each vehicle individually. The principles for a vehicle to accelerate or decelerate are:
  - When a vehicle faces a red light or other vehicles, its speed decreases to zero.
  - When a vehicle changes lanes, it decelerates.
  - Otherwise, a vehicle attempts to accelerate up to the maximum speed.
- The function of a yield sign: When a vehicle faces a yield sign, it checks whether the lane is empty enough for it to enter the junction. If not, the vehicle waits until it is empty enough—but with a disobey rate $\alpha_{\mathrm{yield}}$ , it ignores the sign and scrambles. Naturally, this reaction affects the accident rate.
- The function of a stop sign: When a vehicle faces a stop sign, it must come to an immediate stop. At the next time step, it functions as if at a yield sign; the only difference is that it will accelerate from zero speed. The disobey rate is $\alpha_{\mathrm{stop}}$ .
- The effect of traffic lights: just as in real life.

We discretize time and follow the rules above for each vehicle after it arrives at the circle. We calculate the average traversal time for a vehicle, as well as the accident rate (measured by the total number of contacts between vehicles).
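The per-vehicle speed rules above can be sketched as a minimal single-lane, discrete-cell update. This is our own simplification in Python (the paper's model uses $l_0$ lanes, polar coordinates, lane changing, and signs/lights); it keeps only the accelerate-toward-the-maximum and brake-to-the-gap behavior:

```python
# Minimal single-lane sketch of the speed rules: a ring of cells with
# synchronous updates; accelerate by 1 toward V_MAX, brake to the gap.
# (Our simplification; the paper's model is richer.)
N_CELLS, V_MAX = 60, 3

def step(cars):
    """cars maps cell position -> speed; returns the next state."""
    positions = sorted(cars)
    new = {}
    for i, pos in enumerate(positions):
        ahead = positions[(i + 1) % len(positions)]
        gap = (ahead - pos - 1) % N_CELLS       # empty cells in front
        v = min(cars[pos] + 1, V_MAX, gap)      # accelerate, then brake
        new[(pos + v) % N_CELLS] = v
    return new

cars = {0: 0, 3: 2, 10: 3}                      # three vehicles
for _ in range(5):
    cars = step(cars)
```

Because each vehicle moves at most as far as the gap in front of it, the synchronous update can never place two vehicles in the same cell, which is the discrete analogue of the "speed decreases to zero when facing other vehicles" rule.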
A vivid view of the simulation result is presented in Figure 5.

![](images/f7aa0cb8bfd49b23eab4dfd163eec5c9197917d240a624635f2dc015a70c2a87.jpg)
Figure 5. The vehicles around the traffic circle.

# Comparison and Sensitivity Analysis

# Results

We use the two different models to simulate a real traffic circle: the Sheriffhall Roundabout in Scotland. We use the traffic-light configuration in Maher [2008]. For simplicity, we consider only the average time needed for a vehicle to traverse the traffic circle. This value is 42.7 s for Model I and 41.6 s for Model II. The two results are close, so we can believe that the actual traversal time is around 42 s.

# Sensitivity

We analyze sensitivity by running the program with modified parameters (see Table 2).

Table 2. Sensitivity tests of the simulation models.
| Parameter | Variation | Model I | Model II |
|-----------|-----------|---------|----------|
| $v_{\max}$ | +10% | -2.6% | -8.5% |
|  | -10% | 10.5% | 11.1% |
| $l_0$ | +1 | -19.6% | -16.4% |
|  | -1 | 121% | 65.2% |
| $r_{\mathrm{out}}$ | +10% | -7.3% | -3.9% |
|  | -10% | 1.1% | 3.3% |
| Traffic flow | +10% | 10.6% | 7.0% |
|  | -10% | -3.0% | -6.7% |
The two models give similar sensitivity results. The average passing time is relatively insensitive to all the parameters except $l_{0}$ . This is reasonable, since the number of traffic lanes in the circle affects the passing time significantly.

Model II is a random simulation, which enables us to calculate the standard deviation of the traversal time; the deviation is no larger than $3\%$ of the mean.

# Complexity

The time complexity of the algorithms for the two models is proportional to the maximum number of vehicles that the circle can hold and the number of iterations. In practice, 1,000 iterations suffice.

Model I is a little simpler than Model II, since we do not need to trace each individual vehicle. On the other hand, Model II needs more a priori information than Model I. Since the two models are consistent and give similar results, we adopt Model II for further study.

# The Multi-Objective Function

# Basic Standards

We want to include both subjective evaluations (such as the feelings of drivers) and objective measures (such as the expense of devices). Also, the standards should be calculated from available data. We choose five evaluation standards:

- Saturated flow capacity: The threshold flux to avoid backing up traffic on the arms.
- Average delay: The difference between the average time to traverse the traffic circle and the time to traverse an empty one.
- Equity degree: A multi-arm traffic circle may distribute the incoming flow inequitably, to the annoyance of drivers. The relative difference in average delay is the equity degree.
- Accident expectation: The average number of accidents per vehicle.
- Device cost: The total expense of traffic signs and lights.

# How the Objectives Are Affected

# Saturated Flow Capacity

A yield sign is likely to work effectively, since it seldom causes unnecessary stops for vehicles. A stop sign, however, at least adds the acceleration/deceleration delay to every vehicle rushing inside.
The efficiency of a traffic light is highly related to its green-light period. Fixed-period lights sometimes block vehicles from entering an empty circle, while adaptive ones can work according to conditions.

In fact, a traffic circle with yield signs at all junctions bears the heaviest traffic in the simulations above, and traffic lights retain great potential for improvement through optimization.

# Average Delay

The average delay is controlled by the incoming flow. The delay time will increase rapidly when traffic starts to congest. In our model, the delay time of a vehicle is calculated when it exits the traffic circle. When this delay time is considered in the overall objective, there should be penalties on congestion, calculated from the current flow and the saturated flow capacity.

# Equity Degree

Equity degree is calculated directly from the delay time distribution. Not only the flow distribution but also the total flux contributes to the equity degree, since high flux may lead to unexpected distribution failures.

# Accident Expectation

We assume that each kind of signal reduces accidents by a specific percentage; we use data from Hubacher and Allenbach [2002], Transport for London... [2005], and Fitzpatrick [2000].

# Device Cost

This expense is based on the numbers of each kind of signal.

# The Combined Objective: The Money Lost

Now we come to a combined objective, the combined expense (CE), that takes into account expense and economic loss, which we attempt to minimize.

The prices of traffic-control devices are easy to find [Traffic Light Wizard n.d.; TuffRhino.com n.d.]. Apart from the expense of maintenance and operation, we calculate the average operating cost per hour for each kind of device. Since traffic lights consume much electricity, we ignore the money spent on other types of devices. A traffic light is expected to cost $0.23/hr [Wang 2005; Ye 2001].
For accident expense losses, we take data from an annual report of a local traffic office on average loss per accident [Hangzhou Public Security Bureau...2006] and set

$$
\text{Accident loss} = \$630 \times \text{Flux}.
$$

The average delay time must be accompanied by a cost of delay. According to the Federal Highway Administration [2008], about \$1.20 per vehicle is lost in a delay of 1 hr:

$$
\text{Delay expense} = \$1.20 \times \text{Flux} \times \text{Average delay time}.
$$

The unused part of saturated capacity takes care of any extra incoming traffic; we set its value as

$$
\text{Capacity bonus} = 5\% \times \$1.20 \times (\text{Saturated capacity} - \text{Flux}) \times (\text{Average delay time}),
$$

in which $5\%$ is the probability of an unexpected vehicle coming.

Equity degree (ED) is a tricky component in the determination. The most annoying situation is to keep two "main arms" open to traffic by sacrificing all other arms. Equity degree is estimated to be a function of the number of arms $n$ :

$$
\text{Reference equity degree (RED)} = \sqrt{\frac{n(n-2)}{2(n-1)}}.
$$

The equity degree will be normalized by this reference and appear in a penalty on delay expense:

$$
\text{Corrected delay expense} = \text{Delay expense} \times \left(1 + \frac{\mathrm{ED}}{\mathrm{RED}}\right).
$$

The combined index is then calculated as

$$
\mathrm{CE} = \text{Corrected delay expense} - \text{Capacity bonus} + \text{Accident loss} + \text{Device cost},
$$

which serves as the final objective function that we use in the following optimization.

# Application: Evaluate Typical Arrangements

We take a glance at three general control methods: pure traffic light, stop sign only, or yield sign only.
+ +We first normalize the five objectives, converting values to an interval between 0 and 1, from worst to best. A superficial look at the radar chart of Figure 6 raises doubt about the expensive traffic lights. However, traffic lights are superior in controlling the accident rate, while the two signs may be hazardous by accelerating the flow. The convoluted relationship is clear when we compare their CE values, in Table 3. + +![](images/82a61ae588a11ff93c815f197286d0dd4547a1a9e15c229192b4c10984882ec6.jpg) +Figure 6. A view of 5 objectives of 3 general control methods. + +The results above suggest that traffic lights are worthwhile for heavy traffic. Optimization, however, needs more insight. + +Table 3. Combined expense for 3 typical control methods. + +
| Control Method | Combined Expense (US$/hour) |
| --- | --- |
| Traffic light | 66.76 |
| Stop sign | 103.29 |
| Yield sign | 116.61 |
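Collecting the expense formulas above into one routine makes such comparisons reproducible. The sketch below is ours, not the authors' code: the input values are hypothetical placeholders, and we read the Flux factor in the Accident formula as the expected number of accidents per hour (a reading consistent with the magnitudes in Tables 3 and 5).

```python
import math

def reference_equity_degree(n: int) -> float:
    """Reference equity degree (RED) for a circle with n arms."""
    return math.sqrt(n * (n - 2) / (2 * (n - 1)))

def combined_expense(flux, avg_delay_hours, equity_degree, n_arms,
                     saturated_capacity, accidents_per_hour, device_cost):
    """Combined expense CE in US$/hour from the five objectives.

    flux and saturated_capacity are in vehicles/hour; avg_delay_hours is
    the average delay per vehicle, in hours.  accidents_per_hour is our
    reading of the Flux factor in the Accident formula (an assumption).
    """
    delay = 1.20 * flux * avg_delay_hours
    capacity_bonus = 0.05 * 1.20 * (saturated_capacity - flux) * avg_delay_hours
    accident = 630.0 * accidents_per_hour
    corrected_delay = delay * (1.0 + equity_degree / reference_equity_degree(n_arms))
    return corrected_delay - capacity_bonus + accident + device_cost

# Hypothetical inputs for a 6-arm circle, roughly on the scale of Table 5:
ce = combined_expense(flux=4350, avg_delay_hours=42.763 / 3600,
                      equity_degree=0.3187, n_arms=6,
                      saturated_capacity=6904, accidents_per_hour=4.63 / 630,
                      device_cost=1.38)
```

A larger equity degree inflates the delay term, so the penalty rewards configurations that treat all arms fairly.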
# Optimization Model

# The All-Purpose Solution

Because the objective function is computed by our simulation model, an analytical form for it is difficult to obtain. In such a situation, a quasi-optimal solution is welcome, and approximation algorithms become candidates.

In this problem, an ordinary local-search algorithm can get trapped in local optima. However, a higher-level technique can be used, such as simulated annealing or—what we use—a genetic algorithm. Specifically, the traffic controls at the different junctions serve as genes, and the configuration of a traffic circle is a vector of genes containing all the devices used at the junctions. Table 4 gives details.

Table 4. Explanation of the genetic algorithm used for optimization.
| Process | Explanation |
| --- | --- |
| Breeding | Combine the traffic control methods of two different configurations. |
| Mutation | Randomly mutate the traffic control at a single junction. |
| Evolution | Locally adjust the traffic controls at all junctions, seeking a better solution. |
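The loop described in Table 4 can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the `expense` function is an arbitrary deterministic stand-in for the simulation-based combined expense, and the population sizes are made up.

```python
import random

DEVICES = ["traffic light", "yield in circle", "yield at entrance",
           "stop in circle", "stop at entrance"]

def expense(config):
    """Stand-in for the simulated combined expense (US$/hour) of a
    configuration; in the paper this comes from the traffic simulation."""
    return sum(hash((i, d)) % 100 for i, d in enumerate(config))

def breed(a, b):
    """Breeding: combine the traffic controls of two configurations."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(config, rate=0.2):
    """Mutation: randomly replace the control at single junctions."""
    return [random.choice(DEVICES) if random.random() < rate else d
            for d in config]

def optimize(n_junctions=6, pop_size=20, generations=50, seed=0):
    """Evolution: keep the cheaper half and refill with mutated children."""
    random.seed(seed)
    pop = [[random.choice(DEVICES) for _ in range(n_junctions)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=expense)            # lower expense is better
        elite = pop[: pop_size // 2]     # elitism: the cheap half survives
        children = [mutate(breed(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=expense)

best = optimize()
```

Because the elite half is always retained, the best expense found can never worsen from one generation to the next.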
We consider three kinds of traffic control devices: 1) traffic lights, 2) yield/stop signs, and 3) orientation signs, a special kind of traffic sign that we designed ourselves. We call the first two basic devices.

# Step 1: Basic Device and Timestamp Choice

A traffic junction can be equipped with any one of the following five devices: 1) traffic light, 2) yield sign in the circle, 3) yield sign at the entrance, 4) stop sign in the circle, and 5) stop sign at the entrance. In addition, the red/green timestamps of the traffic lights are adjustable.

# Sheriffhall Roundabout

Considering all the potential variables above, we run our program against the Sheriffhall Roundabout, using the origin-destination flow data in Table 1 and assuming that this flow matrix remains fixed over a one-hour period. The solution of our program shows that traffic lights should be used rather than stop/yield signs; otherwise, the accident rate will be dramatically higher.

In Figure 7, green (light) represents right of way for vehicles from the incoming road, and red (dark) indicates right of way for vehicles in the circle. The optimal configuration creates a long period of red light at all junctions and lets the circle digest its vehicles quickly during that interval. This configuration accelerates the flows but has a lower saturated flow capacity, as Table 5 summarizes.

![](images/16497dec5954f68229df07df1555c5312f38f328e392c62986b8b03877b7beef.jpg)
Figure 7. The traffic light timestamps in 6 junctions (green (light) vs. red (dark)). Period = 68 s (calculated in assumption). Original flow information is used.

Table 5. The multi-objectives of the optimal configuration of Sheriffhall, original flow.
| Objective | Value |
| --- | --- |
| Saturated flow capacity | 6904 vehicles/hour |
| Average delay | 42.763 seconds/vehicle·hour = $62.04/hour |
| Equity degree | 0.3187 |
| Accident expectation | $4.63/hour |
| Device cost | $1.38/hour |
| Combined expense | $78.98/hour |
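As a rough consistency check (our own arithmetic, not from the paper), the Delay formula ties the \$62.04/hour delay cost to an implied flux of

$$
\text{Flux} = \frac{62.04}{1.20 \times 42.763/3600} \approx 4352 \text{ vehicles/hour}.
$$

With $\text{RED} = \sqrt{6 \cdot 4 / 10} \approx 1.549$ for $n = 6$ arms, the remaining components then reproduce the table's bottom line:

$$
62.04 \left(1 + \frac{0.3187}{1.549}\right) - 0.05 \times 1.20 \times (6904 - 4352) \times \frac{42.763}{3600} + 4.63 + 1.38 \approx 74.80 - 1.82 + 4.63 + 1.38 \approx \$78.99/\text{hour},
$$

matching the combined expense in the table to within a cent.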
![](images/f0c0ad1b9a33b8a3c84bc1d8c68b3e44d118141f788ed97f7cfb64b0a3b61922.jpg)

# Sheriffhall Roundabout with $1.8 \times$ Original Inflow

When the incoming flow density increases to 1.8 times the original, the optimal configuration shows a significant difference—see Figure 8.

![](images/860fce125915292d99eb1a63d2f70505cb2273cc5f3674b5a41cbe8cf91f07b3.jpg)
Figure 8. The traffic light timestamps in 6 junctions. Period = 68 s (calculated in assumption). Original flow × 1.8 is used.

In Figure 8, the green-light periods at all junctions are shortened to let the circle digest the greater number of incoming vehicles. There is no longer a long period with all junctions showing a red light. Instead, there is free passage between Junction 3 and Junction 6 (the shaded stripe), which greatly increases the saturated flow capacity but reduces the traversal speed (see Table 6). To see why, one needs to look at the origin-destination flows in Table 1, in which the flow between Junctions 3 and 6 constitutes a significant portion of all the inflows. The stripe in Figure 8 gives vehicles a good opportunity to travel between those two junctions.

Table 6. The multi-objectives of the optimal configuration of Sheriffhall, original flow $\times 1.8$.
| Objective | Value |
| --- | --- |
| Saturated flow capacity | 8354 vehicles/hour |
| Average delay | 81.278 seconds/vehicle·hour = $117.91/hour |
| Equity degree | 0.3042 |
| Accident expectation | $5.41/hour |
| Device cost | $1.38/hour |

![](images/85a02c093c4506b0ea82c947b33cef60b6c6e00527de34d1bd9f995859a776df.jpg)

# Step 2: Orientation-Sign Placement

Normally, the number of lanes in a traffic circle and the number of junctions are not equal. In some countries, an unwritten rule [SetupWeasel 1999] is that a vehicle nearer its exit should stay left (remark: we are driving on the left!). We refine this rule.

Let there be $n$ arms. Suppose that a vehicle is at Junction $a$ ($1 \leq a \leq n$) and its destination is $b$ ($1 \leq b < n$) junctions farther on. We maintain two variables $\mathrm{lower}_a^b$ and $\mathrm{upper}_a^b$ so that such a vehicle is advised to stay in the lane range $[\mathrm{lower}_a^b, \mathrm{upper}_a^b]$. Our aim is to distribute vehicles among the lanes so as to minimize congestion. To optimize these intervals $[\mathrm{lower}_a^b, \mathrm{upper}_a^b]$, we again use a genetic algorithm.

Figure 10 demonstrates the effect of the orientation sign of Figure 9 in reducing the average delay, for different amounts of inflow. As the number of incoming vehicles increases, the positive effect of our orientation sign becomes evident. The configuration without the orientation sign has saturated flow capacity 8354 vehicles/hour (Table 6); with the help of the newly-introduced sign, this rises to 8812. In short, the last bit of potential capacity has been extracted in our model.

![](images/163b0d18d3f2f06cde443196c8dce4ac4a9b0df78e286cdf132475ae143866e2.jpg)
Figure 9. The orientation sign over the junction entrance. (At Junction 3, with $1.8 \times$ original inflow.)

# Step 3: Time Variance and Self-Adaptivity

Origin-destination flows vary from morning to evening. The easiest way to handle this is to run our previous program with different traffic-demand information for different time periods. We can go further, however, and make the traffic control self-adaptive by using traffic lights.
Given the traffic-light timestamps calculated in Step 1, assume that in the following hour the traffic demands change to new values. We select the original configuration as our seed and carry out the genetic algorithm to obtain a similar but better solution. Figure 11 gives an example.

One may find that the timestamps change little and hence do not significantly affect vehicles already in the circle. As night falls, traffic demands fall off, and the traffic lights could in effect be replaced by yield signs by switching the lights to flashing yellow, an international signal [Wikipedia 2009] reminding drivers to be careful.

![](images/83bbcd9c1e0424a3d6d87e37e7c871c0318021d53ee73245b1a37c0939f8faaf.jpg)
Figure 10. The magic effect of the orientation sign.

![](images/c14c3014d73d30d0726c9ced315747070418fdf8b6fa0b44484a962a6a9a8546.jpg)
Figure 11. Self-adaptivity as inflow drops in $1\mathrm{hr}$ from $1.8\times$ to $1.4\times$ original inflow.

# Verification of the Optimization Model

# The Circle at Work

Figure 12 shows that when the inflow is 1.8 times as high as in Table 1, the traffic circle still works.

![](images/8c8ddd020660f9d1e52fe0c64ccecea65d7a431fe303edc7a107b37c0b1d7f44.jpg)
Figure 12. 39 seconds later, most of the vehicles waiting at Junction 6 move in.

# Accuracy

As a follow-up study to verify the optimization model, we need to test it on different traffic circles. For lack of data, we create our own dummy traffic circles. In particular, we test a large traffic circle with 12 arms and 6 lanes; the result shows that our model can handle such large cases.

Testing on a dummy suburban circle with 4 arms and relatively low traffic demand, we find as optimal solutions either in-circle stop signs or else a mixture of stop signs and traffic lights (Figure 13). In this example, the origin-destination flow from Junction 1 to Junction 3 is markedly greater than all other pairwise flows.
![](images/90ccee62cf4b29bb596a725b9ae60a964236280b4e82a2b0f10d2a98e5e3fe9c.jpg)
Figure 13. Two intuitive configurations generated by our model. The one on the left has two in-circle stop signs and guarantees fast passage from left to right; the one on the right has a mixture of traffic lights and stop signs.

# Sensitivity

We test the sensitivity of our model by running it 50 times at each inflow level. Table 7 shows the means and standard deviations of the average delay for various levels of inflow.

Table 7. Sensitivity test of the optimization model.
| Multiple of incoming flow | Average Delay | Standard Deviation |
| --- | --- | --- |
| 1.0 times | 42.76 seconds/vehicle·hour | 0.95 seconds/vehicle·hour |
| 1.2 times | 47.22 seconds/vehicle·hour | 1.56 seconds/vehicle·hour |
| 1.4 times | 51.99 seconds/vehicle·hour | 2.54 seconds/vehicle·hour |
| 1.6 times | 61.54 seconds/vehicle·hour | 3.81 seconds/vehicle·hour |
| 1.8 times | 81.28 seconds/vehicle·hour | 8.30 seconds/vehicle·hour |
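The entries of Table 7 are ordinary sample statistics over 50 runs. The sketch below shows that bookkeeping, with `simulate_average_delay` as a hypothetical stand-in for one run of the stochastic simulation (its numbers only loosely mimic the table).

```python
import random
import statistics

def simulate_average_delay(inflow_multiple, rng):
    """Hypothetical stand-in for one stochastic simulation run, returning
    an average delay in seconds/vehicle-hour."""
    return rng.gauss(42.0 * inflow_multiple, 1.0 * inflow_multiple)

def sensitivity(inflow_multiple, runs=50, seed=0):
    """Sample mean and standard deviation of delay over repeated runs."""
    rng = random.Random(seed)
    delays = [simulate_average_delay(inflow_multiple, rng) for _ in range(runs)]
    return statistics.mean(delays), statistics.stdev(delays)

for m in (1.0, 1.2, 1.4, 1.6, 1.8):
    mean, sd = sensitivity(m)
    print(f"{m:.1f}x inflow: {mean:6.2f} +/- {sd:4.2f}")
```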
# Emergency Case

Our model can also simulate an emergency. In Figure 14, one of the cars breaks down and blocks an entire lane. The traffic circle still works, but the average delay time increases by $10\mathrm{~s}$.

The self-adaptivity of our model lets us adjust the light timestamps and reduce the traffic jam in an emergency. However, because of limited time, we cannot describe the adaptivity here.

![](images/94d7bc23069afb0aa367f105e32576c6e85bba53627be9a85a47dd3a2a1717c3.jpg)
Figure 14. Breakdown of a vehicle slows the traffic, but traffic still circulates.

# Conclusion

To estimate the overall performance of a traffic circle with a specific vehicle flow, we develop two simulation models. The first uses a Markov process to consider the entire flow; the second devotes its attention to the individual behavior of each vehicle.

We choose five objectives to evaluate a control method and convert them to a combined expense. We apply this standard to a real-life traffic circle with typical traffic control device setups.

We offer an optimization model to select traffic devices and to determine the green-light period when traffic lights are used. In addition, we introduce orientation signs as an entirely new measure to improve efficiency. The flexibility of these solutions is demonstrated when they are confronted with accidents.

# References

Federal Highway Administration, U.S. Department of Transportation. 2008. The importance of transit. Chapter 14 in 2004 Status of the Nation's Highways, Bridges, and Transit: Conditions and Performance. http://www.fhwa.dot.gov/policy/2004cpr/chap14.htm.

Fitzpatrick, Kay. 2000. Accident Mitigation Guide for Congested Rural Two-lane Highways. Transportation Research Board, National Research Council.

Garber, N.J., and L.A. Hoel. 2002. Traffic and Highway Engineering. 3rd ed. Pacific Grove, CA: Brooks/Cole.

Hangzhou Public Security Bureau Traffic Police Detachment. 2006.
March on the city's briefing on road traffic accidents. www.hzpolice.gov.cn/tabid/67/InfoID/9343/Default.aspx.

Hubacher, Markus, and Roland Allenbach. 2002. Safety-related aspects of traffic lights. http://www.bfu.ch/PDFLib/676_68.pdf, http://www.bfu.ch/English/Forschung/Forschungsergebnisse/pdfResults/r48e.pdf.

Maher, Mike. 2008. The optimization of signal settings on a signalized roundabout using the cross-entropy method. Computer-Aided Civil and Infrastructure Engineering 23: 76-85.

Picken, Andrew. 2008. Businesses want Sheriffhall flyover. *Edinburgh Evening News* (26 June 2008). http://edinburghnews.scotsman.com/edinburgh/Businesses-want-Sheriffhall--flyover.4225078.jp.

SetupWeasel. 1999. Traffic circles. http://www.bbc.co.uk/dna/h2g2/A199451.

Traffic Light Wizard. n.d. http://trafficlightwizard.com.

Transport for London Street Management—London Road Safety Unit. Do traffic signals at roundabouts save lives? IHT Transportation Professional (April 2005). http://www.tfl.gov.uk/assets/downloads/SignalsatRoundabouts-TransportationProfessiona-Article.pdf.

TuffRhino.com. n.d. www.tuffrhino.com/Yield_Sign_p/sts-r1-2.htm.

Wang, Shen Lin. 2005. People look forward to the traffic lights at an early date, "induction." (Chinese). People's Government of Kaifeng. http://www.kaifeng.gov.cn/html/4028817b1d926237011d9332cfe300bb/3214.html.

Wikipedia. 2009. List of variations in traffic light signalling and operation. http://en.wikipedia.org/wiki/unusual Uses_ofTrafficLights.

Ye, Ran. 2001. California power-saving traffic lights out of new tactics to use energy-saving foam. (Chinese). news.eastday.com (15 August 2001). http://www.shjubao.cn/epublish/gb/paper148/20010815/class014800004/hwz462812.htm.

![](images/817c6de5e4f5b5fe92cdfd08eaf88a85c8caff98a7cd16a00d346a6be1d65e7a.jpg)
Yichen Huang, Zeyuan Allen Zhu, Tianyi Mao, and team advisor Jun Ye.
# Pseudo-Finite Jackson Networks and Simulation: A Roundabout Approach to Traffic Control

Anna Lieb

Anil Damle

Geoffrey Peterson

Dept. of Applied Mathematics

University of Colorado

Boulder, CO

Advisor: Anne Dougherty

# Summary

Roundabouts, a foreign concept a generation ago, are an increasingly common sight in the U.S. In principle, they reduce accidents and delays. A natural question is, "What is the best method to control traffic flow within a roundabout?" Using mathematics, we distill the essential features of a roundabout into a system that can be analyzed, manipulated, and optimized for a wide variety of situations. As the metric of effective flow, we choose time spent in the system.

We use Jackson networks to create an analytic model. A roundabout can be thought of as a network of queues, where the entry queues receive external arrivals that move into the roundabout queue before exiting the system. We assume that arrival rates are constant and that there is an equilibrium state. If certain conditions are met, a closed-form stationary distribution can be found. The parameter values can be obtained empirically: how often cars arrive at an entrance (external arrival rate), how quickly they enter the roundabout (internal arrival rate), and how quickly they exit (departure rate). We control traffic by thinning the internal arrival process with a "signal" parameter that represents the fraction of time that a signal light is green.

A pitfall of this formulation is that restricting the capacity of the roundabout queue to a finite limit destroys the useful analytic properties. So we utilize a "pseudo-finite" capacity formulation, in which we allow the roundabout queue to receive a theoretically infinite number of cars but optimize over the signal parameter to create a steady state in which a minimal number of waiting cars is overwhelmingly likely.
Using lower bound calculations, we prove that a yield sign produces the optimal behavior for all admissible parameter values. The analytic solution, however, sacrifices important aspects of a real roundabout, such as time-dependent flow. + +To test the theoretical conclusions, we develop a computer simulation that incorporates more parameters: roundabout radius; car length, spacing, and speed; period of traffic signals; and time-dependent inflow rates. We model individual vehicles stochastically as they move through the system, resulting in more-realistic output. In addition to comparing yield and traffic-signal control, we also examine varied input rates, nonstandard roundabout configurations, and the relationships among traffic-flow volume, radius size, and average total time. However, our simulation is limited to a single-lane roundabout. This model is also compromised by the very stochasticity that enhances its realism. Since it is nondeterministic, randomness may mask the true behavior. Another drawback is that the computational cost of minimization is enormous. However, we verify that a yield sign is almost always the best form of flow control. + +# Introduction + +A report from the Wisconsin Dept. of Transportation notes that "to many, the idea of replacing four-way signaling with a roundabout seems like replacing hot dogs with crepes at the ballpark" [McLawhorn 2002]. For many Americans, the roundabout (traffic circle, rotary) is a foreign idea, even though the first one was built in New York in 1903. Roundabouts fell out of favor in the U.S.; but since midcentury, as studies showed how much safer and more efficient they can be, there has been a resurgence in their construction [National Cooperative Highway Research Program 1998]. Half of the states in the U.S. now have roundabouts, more than 1,000 installations. 
One study indicated that, on average, fatal crashes decreased $90\%$ after traditional traffic lights were replaced by roundabouts [Arizona Department of Transportation n.d.]. + +A crucial aspect of efficiency and safety is entry. Until the 1920s, "yield-to-right" rules gave right of way to incoming cars, which tended to cause "locking" and delays at high traffic volumes. British studies indicated that adopting "priority-to-the-circle" rules allows more cars to move through the circle more quickly and diminishes accident rates. The deflection of entering traffic serves to prevent excessive speed within the roundabout and to reduce further the incidence of accidents [National Cooperative Highway Research Program 1998]. So in modern roundabouts, incoming traffic + +yields to traffic in the circle (and changes direction to some extent). With that rule, entry may be governed in various ways. The simplest and most common is a yield sign at each entry point. The U.S. Dept. of Transportation advises that roundabouts "should never be planned for metering or signalization" [Robinson et al. 2000]. + +We develop a mathematical model for flow in roundabouts. We introduce assumptions used in determining the key parameter inputs and developing a metric for "effectiveness." We subsequently formulate and solve a simple analytic model of networked queues in an equilibrium state. After discussing limitations of the analytic model, we adapt it into a computer simulation that allows for detailed analysis and can be used to optimize the flow-control method. + +# Assumptions + +Exponential arrivals/departures: Arrivals and departures follow a Poisson process, with exponentially-distributed interarrival times. + +Local variable selection: External forces such as weather, special events, or acts of God may alter the system, but we do not address these factors. 
Unbounded output: Although blockages would affect the roundabout, we assume that cars can always leave the roundabout at their exits.

Yield and stop sign equivalence: In terms of efficiency, a stop sign performs at best as well as (and often worse than) a yield sign, so a yield sign is preferable. Stop signs may, however, be appropriate for safety, for example, with high pedestrian traffic.

# Effectiveness

The most effective roundabout design minimizes delay.

# Analytic Formulation

Our analytic model consists of a network of $\mathrm{M}/\mathrm{M}/s$ queues (queues with Markovian arrivals, Markovian departures, and $s$ servers), known as a Jackson network (Figure 1). We give the model's parameters in Table 1. We assume that there is a steady state and no explicit time dependence. Effectiveness is quantified by the probability of few cars waiting, which can be calculated from the stationary distribution of the Jackson network. For the most effective roundabout, the most likely stationary states will be those with the fewest total cars, implying that delay is minimized.

![](images/71b7d6a5f9c87ab240c1c44601a5eefc8e030a2b46313514df734bef3de3bc10.jpg)
Figure 1. Visual schematic of the queuing network.

Table 1. Summary of parameters to the analytic model.
| Parameter | Meaning |
| --- | --- |
| $N$ | Number of streets which connect to the roundabout |
| $\lambda_i$ | External arrival rate of cars to entrance $i$ |
| $\sigma_i$ | Rate at which cars may enter the roundabout from entrance $i$ |
| $\mu$ | Rate at which cars may exit the roundabout |
| $\pi(n_1, \dots, n_{N+1})$ | Stationary distribution for the network |
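The exponential-interarrival assumption is easy to make concrete: a Poisson arrival stream is generated by accumulating exponentially distributed gaps. A minimal sketch (the arrival rates below are hypothetical, not data from the paper):

```python
import random

def poisson_arrival_times(rate_per_hour, horizon_hours, rng):
    """Arrival times (in hours) of a Poisson process with the given rate,
    built by summing exponential interarrival gaps."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_per_hour)
        if t > horizon_hours:
            return times
        times.append(t)

rng = random.Random(1)
lam = [300.0, 450.0, 300.0, 450.0]  # hypothetical external rates (vehicles/hour)
arrivals = [poisson_arrival_times(rate, 1.0, rng) for rate in lam]
counts = [len(a) for a in arrivals]  # each count should be near its rate
```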
# Additional Assumptions

Constant arrival rates: Constant arrival rates produce a system for which we can both derive a stationary distribution analytically and understand asymptotic behavior. The equilibrium behavior serves as a basis on which to build a more-complex and realistic simulation.

Perfect driver behavior: We do not allow drivers to miss their exits or break the rules (swerve wildly while talking on cellphones, scrape tires on the curb, mow down pedestrians, or cut in front of other drivers). These behaviors are infrequent enough that we can neglect them.

# Description of Simple Queuing Network

The basic idea behind our Jackson network is to break the system up into queues. We assume that the roundabout is the intersection of $N$ streets, which yields a network of $N + 1$ queues. Each street contributes an input stream, modeled as an $\mathrm{M}/\mathrm{M}/1$ queue with its own arrival rate $\lambda_{i}$. An input queue releases cars at rate $\sigma_{i}$ from the incoming street into the roundabout. The presence of traffic lights or yield signs is represented by a thinning parameter $g$, the fraction of time that a traffic light at the intersection is green; setting $g = 1$ corresponds to a yield sign. Thus, cars enter the roundabout at a thinned rate $g\sigma_{i}$. The queue representing the roundabout itself is an $\mathrm{M}/\mathrm{M}/N$ queue, whose $N$ servers represent the $N$ exits.

The stationary distribution $\pi(n_1, \ldots, n_{N+1})$ (if it exists) for this system of queues gives the asymptotic fraction of time that the system spends in the state with $n_i$ cars in queue $i$, for all $i$. We are interested in a network for which the stationary distribution can be found. Then we choose $g$ so that the system spends a larger fraction of time in states where the total number of cars in the system is minimal.

With a limited-capacity queue for the roundabout, the input queues would release cars into it only if there were space.
However, finite-capacity queuing networks do not generally yield closed-form solutions for stationary distributions [Bouchouch et al. 1992].

To ensure an analytic solution, we allow infinite capacity and get a stationary distribution, which can tell us the probability that a certain total number of cars is "stuck" in the system. In the original model, cars wait in the street from which they are to enter; in our model, they wait inside the (infinitely large) roundabout. We are not actually concerned with where they wait but rather with how many wait and how likely it is that many cars will be waiting.

If the roundabout is not full, the finite- and infinite-capacity roundabout queues are equivalent, because an incoming car can always enter. Now suppose that the roundabout is full:

(i) If a car waits in its street, three events must occur before it exits the system: another car in the circle must leave it (with departure rate $\mu$), the new car must enter the circle (rate $\sigma$), and the new car must exit (rate $\mu$).
(ii) If a car "waits" in the roundabout and then exits, three events will also have occurred: the car will have entered (rate $\sigma$) and joined the interior queue. Since there are only $N$ servers, another car must exit that queue (rate $\mu$) before the car in question is served (rate $\mu$).

Either treatment is the superposition of three Poisson processes with the same set of rates, and order is unimportant. Thus, the queue exhibits "pseudo-finite capacity," since it mimics the qualitative behavior of a network in which one queue size is bounded and the others are infinite.

# Formulation of Stationary Distribution

In an equilibrium state, the input of each queue equals its output. Let $r_i$ be the asymptotic departure rate from queue $i$, equal to the sum of the arrival rates to queue $i$.
Defining $p(i,j)$ as the probability that a car leaving queue $i$ enters queue $j$ , we can write an expression for the asymptotic departure rate: + +$$ +r _ {j} = \lambda_ {j} + \sum_ {i = 1} ^ {N + 1} r _ {i} p (i, j), +$$ + +in matrix form expressed as + +$$ +\mathbf {r} = \boldsymbol {\Lambda} + \mathbf {r p}, +$$ + +where $\mathbf{r}$ is a row vector of departure rates, $\Lambda$ is a row vector of arrival rates, and $\mathbf{p}$ is the matrix with elements $p(i,j)$ . + +Following Durrett [1999], we define two conditions on the system: + +(A) For each queue $i$ , there exists a path of positive probability along which it is possible to exit the system. +(B) Define $\varphi_{i}(n)$ as the departure rate from queue $i$ when that queue contains $n$ cars and let + +$$ +\psi_ {i} (n) = \left\{ \begin{array}{l l} \prod_ {m = 1} ^ {n} \varphi_ {i} (m), & n \geq 1; \\ 1, & n = 0. \end{array} \right. +$$ + +Then there exists a positive constant $c_{j}$ such that + +$$ +\sum_ {n = 0} ^ {\infty} \frac {c _ {j} r _ {j} ^ {n}}{\psi_ {j} (n)} < \infty . +$$ + +Durrett [1999] shows that if condition (A) is met, then the matrix $(\mathbf{I} - \mathbf{p})$ is invertible; and if condition (B) is also met, then a stationary distribution $\pi$ exists with form + +$$ +\pi (n _ {1}, \dots , n _ {N + 1}) = \prod_ {j = 1} ^ {N + 1} \frac {c _ {j} r _ {j} ^ {n _ {j}}}{\psi_ {j} (n _ {j})}. +$$ + +We apply this approach to our system, where $\Lambda = (\lambda_1,\dots ,\lambda_N,0)$ . The $(N + 1)\mathrm{st}$ queue has zero external arrival rate because it is the queue for the roundabout. + +The $(N + 1)\times (N + 1)$ matrix for $\mathbf{p}$ has the form + +$$ +\left( \begin{array}{c c c c} {1 - g} & {0} & {\dots} & {g} \\ {0} & {\ddots} & {\dots} & {\vdots} \\ {0} & {\ddots} & {1 - g} & {g} \\ {0} & {0} & {\dots} & {0} \end{array} \right). +$$ + +From any location in the network, there is a nonzero probability of exiting the system. 
Thus, condition (A) is satisfied, the matrix $\mathbf{I} - \mathbf{p}$ is invertible, and we can write the vector of asymptotic release rates as

$$
\mathbf{r} = \boldsymbol{\Lambda} (\mathbf{I} - \mathbf{p})^{-1}.
$$

The simplicity of our system allows us to solve directly for $(\mathbf{I} - \mathbf{p})^{-1}$ via Gauss-Jordan elimination:

$$
\left( \begin{array}{c c c c} \frac{1}{g} & 0 & \ldots & 1 \\ 0 & \ddots & \ldots & \vdots \\ 0 & \ddots & \frac{1}{g} & 1 \\ 0 & 0 & \ldots & 1 \end{array} \right).
$$

Thus, the asymptotic departure rates have the form

$$
r_{j} = \left\{ \begin{array}{l l} \lambda_{j} / g, & 1 \leq j \leq N; \\ \sum_{i = 1}^{N} \lambda_{i}, & j = N + 1. \end{array} \right.
$$

Now we formulate the parameters required by condition (B) to solve for a stationary state. For the entry queues, we have

$$
\varphi_{j}(n) = \sigma_{j}, \qquad 1 \leq j \leq N;
$$

and for the roundabout queue, we have

$$
\varphi_{N + 1}(n) = \left\{ \begin{array}{l l} n \mu, & 1 \leq n \leq N; \\ N \mu, & n > N. \end{array} \right.
$$

Also, we formulate for the entry queues:

$$
\psi_{j}(n) = \sigma_{j}^{n}, \qquad 1 \le j \le N;
$$

and for the roundabout queue:

$$
\psi_{N + 1}(n) = \left\{ \begin{array}{l l} n! \, \mu^{n}, & 1 \leq n \leq N; \\ N! \, (N \mu)^{n}, & n > N. \end{array} \right.
$$

We investigate under what circumstances condition (B) is met. For the entry queues $(1 \leq j \leq N)$, we need to find a positive constant $c_{j}$ such that

$$
\sum_{n = 0}^{\infty} c_{j} \left(\frac{\lambda_{j}}{g \sigma_{j}}\right)^{n} < \infty.
$$

We can choose a nonzero $c_{j}$ only if this geometric series converges, which occurs when

$$
\frac{\lambda_{j}}{g \sigma_{j}} < 1.
$$

For the roundabout queue, we examine the convergence of

$$
\sum_{n = 0}^{N} \left(\frac{r_{N + 1}}{\mu}\right)^{n} \frac{1}{n!} + \sum_{n = N + 1}^{\infty} \frac{1}{N!} \left(\frac{r_{N + 1}}{N \mu}\right)^{n}.
$$

For fixed $N$, the first sum is finite and does not affect convergence. The second sum is a geometric series that converges if

$$
\frac{r_{N + 1}}{N \mu} < 1.
$$

Thus, the two conditions necessary for the existence of equilibrium and a stationary distribution for our queuing network are

(i) $\lambda_{j} < g\sigma_{j}$,
(ii) $\sum_{i=1}^{N} \lambda_{i} < N \mu$.

If these conditions are met, we can solve for the stationary distribution. First, we choose the constant $c_{j}$ such that

$$
\sum_{n = 0}^{\infty} \frac{c_{j} r_{j}^{n}}{\psi_{j}(n)} = 1.
$$

We find:

$$
\begin{array}{l} \frac{1}{c_{j}} = \frac{1}{1 - \frac{\lambda_{j}}{g \sigma_{j}}}, \qquad 1 \leq j \leq N; \\ \frac{1}{c_{N + 1}} = \sum_{n = 0}^{N} \left(\frac{r_{N + 1}}{\mu}\right)^{n} \frac{1}{n!} + \frac{1}{N!} \left[ \frac{1}{1 - \frac{r_{N + 1}}{N \mu}} - \sum_{n = 0}^{N} \left(\frac{r_{N + 1}}{N \mu}\right)^{n} \right]. \end{array}
$$

These can be used in the closed form of the stationary distribution presented in Durrett [1999]:

$$
\begin{array}{l} \pi(n_{1}, \dots, n_{N + 1}) = \left(1 - \frac{\lambda_{1}}{g \sigma_{1}}\right) \left(\frac{\lambda_{1}}{g \sigma_{1}}\right)^{n_{1}} \times \dots \\ \times \left(1 - \frac{\lambda_{N}}{g \sigma_{N}}\right) \left(\frac{\lambda_{N}}{g \sigma_{N}}\right)^{n_{N}} \left(\frac{c_{N + 1}}{N!}\right) \left(\frac{r_{N + 1}}{\mu N}\right)^{n_{N + 1}}. \end{array}
$$

# Optimization of Stationary State

The parameters $\mu$, $\lambda_{j}$, and $\sigma_{j}$ are fixed by the physical location of the roundabout and the number of cars that use it.
Hence, the stationary state $\pi$ is a function of $g$ that can be optimized over $g$. The idea is to maximize the amount of time spent in a state in which the total number of cars in the system is less than or equal to the capacity of the roundabout. Define

$$
\mathcal{K} \equiv \left\{ \{n_{i}\}_{i = 1}^{N + 1} \text{ such that } n_{1} + \dots + n_{N + 1} = k \right\}
$$

and define

$$
\pi(k) = \sum_{\{n_{i}\} \in \mathcal{K}} \pi(n_{1}, \ldots, n_{N + 1}).
$$

We analyze how $\pi(k)$ depends on $g$ for small $k$. For a given $k$, the number of terms in the sum is the number of nonnegative integer solutions to

$$
n_{1} + \dots + n_{N + 1} = k,
$$

which is given by the well-established formula [Ross 2006]

$$
\frac{(N + k)!}{N! \, k!}.
$$

The number of terms in the sum grows exceptionally quickly, so directly examining the $g$-dependence is impractical. Instead, we establish a lower bound for $\pi(k)$ in terms of $\pi(0, 0, \ldots, 0)$, the fraction of time in which no cars remain in the system. For this case, denoted $\pi(0)$, we have

$$
\pi(0) = \prod_{i = 1}^{N} \left(1 - \frac{\lambda_{i}}{g \sigma_{i}}\right) \left(\frac{c_{N + 1}}{N!}\right).
$$

Neither $c_{N+1}$ nor $N!$ depends on the choice of $g$. Therefore, $\pi(0)$ is maximized over $g$ if the product

$$
\prod_{i = 1}^{N} \left(1 - \frac{\lambda_{i}}{g \sigma_{i}}\right)
$$

is maximized over $g$. The conditions under which this stationary distribution was constructed include

$$
\frac{\lambda_{i}}{g \sigma_{i}} < 1,
$$

ensuring that all factors of the product are between 0 and 1. Therefore, for a fixed set of ratios $\{\lambda_i / \sigma_i\}$, the optimal choice of $g$ minimizes each $\lambda_i / g\sigma_i$ so as to maximize each factor $1 - \lambda_i/(g\sigma_i)$. Therefore, the largest $g$ maximizes $\pi(0)$.
Given the constraint $0 < g \leq 1$, the optimal choice is $g = 1$.

Every stationary state with $n_1 + \dots + n_{N+1} = k$ can be written in terms of $\pi(0)$:

$$
\pi(n_{1}, \ldots, n_{N + 1}) = \pi(0) \left(\frac{c_{N + 1}}{N!}\right) \left(\frac{\lambda_{1}}{g \sigma_{1}}\right)^{n_{1}} \dots \left(\frac{r_{N + 1}}{N \mu}\right)^{n_{N + 1}}.
$$

We establish a lower bound for $\pi(k)$ by defining

$$
\frac{\epsilon}{g} \equiv \min \left\{ \frac{\lambda_{i}}{g \sigma_{i}}, \frac{r_{N + 1}}{N \mu} \right\}, \qquad C \equiv \frac{c_{N + 1}}{N!}.
$$

Since each ratio in the product is at least $\epsilon/g$ and the exponents sum to $k$, each such state satisfies $\pi(n_1, \ldots, n_{N+1}) \geq C (\epsilon/g)^k \, \pi(0)$; and since there are $(N + k)!/(N! \, k!)$ distinct elements of $\mathcal{K}$, we have

$$
\pi(k) \geq \frac{(N + k)!}{N! \, k!} \, C \left(\frac{\epsilon}{g}\right)^{k} \pi(0).
$$

In the event that

$$
\min \left\{ \frac{\lambda_{i}}{g \sigma_{i}}, \frac{r_{N + 1}}{N \mu} \right\} = \frac{r_{N + 1}}{N \mu},
$$

all $g$-dependence comes from $\pi(0)$, which is maximized for $g = 1$. In the event that for some index $j$ we have

$$
\min \left\{ \frac{\lambda_{i}}{g \sigma_{i}}, \frac{r_{N + 1}}{N \mu} \right\} = \frac{\lambda_{j}}{g \sigma_{j}},
$$

we first define

$$
\max \left\{ \frac{\lambda_{i}}{g \sigma_{i}} \right\} = \frac{\delta}{g},
$$

which allows us to assert

$$
\pi(0) \geq \left(1 - \frac{\delta}{g}\right)^{N},
$$

which in turn implies that

$$
\pi(k) \geq \frac{(N + k)!}{N! \, k!} \, C \left(\frac{\epsilon}{g}\right)^{k} \left(1 - \frac{\delta}{g}\right)^{N}.
$$

We turn our attention to the behavior of the part that governs the $g$-dependence of this lower bound:

$$
f(g) = \left(\frac{\epsilon}{g}\right)^{k} \left(1 - \frac{\delta}{g}\right)^{N}.
+$$ + +We differentiate with respect to $g$ and find that + +$$ +\frac {\partial f}{\partial g} = \frac {\epsilon^ {k} (g - \delta) ^ {N - 1} [ (N - k) (g - \delta) + N g ]}{g ^ {2 (k - N)}}. +$$ + +Since $\epsilon > 0$ , and $g - \delta > 0$ according to the assumptions with which we set up the system, the sign of $\partial f / \partial g$ is determined by the expression + +$$ +(N - k) (g - \delta) + N g, +$$ + +which is guaranteed positive for + +$$ +k < N + \frac {N g}{g - \delta}. +$$ + +So, for small $k$ , the slope is positive for all $g$ in our domain, implying that increasing $g$ increases the lower bound on $\pi(k)$ , which ensures that the stationary distribution is larger. For our analytic model, the value of $g$ that guarantees the largest lower bound on $\pi(k)$ for small $k$ is $g = 1$ , regardless of other parameters. Our analytic model always recommends a yield sign. + +To examine the actual stationary state behavior, we implement a computer program that calculates $\pi(k)$ for each value of $k$ , summed over all the stationary states for which the total number of cars in the system equals $k$ . We examine this quantity for a wide range of values of $\lambda$ , $\sigma$ , and $\mu$ . In all cases, the stationary distribution for lower $k$ values is highest for $g = 1$ . In Figures 2 and 3, we compare the lower-bound behavior and the actual behavior for a four-entrance roundabout. We examine both the case where all input rates are equal and the case where they are not. Our lower-bound-estimate curves and our calculated curves have very similar shapes. Thus, a choice of $g$ that maximizes the area under the lower-bound curve for small $k$ also maximizes the area under the actual curve. This fact validates our use of the lower-bound estimate as a basis for the optimal choice of $g$ . + +Our analytic formulation always finds the optimal entrance rule to be a yield sign at every intersection. 
Although this is in part a result of the limitations of the model, such as lack of time-dependence, it is mostly consistent with both the results of our computer simulation and our research into real-world practices. + +![](images/d65fc07ffd7871bed974c3c5f364c1f3ca636c9746f66ae38f182e6af5c02e54.jpg) +(a) Actual value. + +![](images/aba1091e6d0f4b95a51d982c913a17680d0388ff531b348c0f385a3f88cc2b28.jpg) +(b) Lower bound estimate. +Figure 2. Comparison of actual stationary distribution and lower bound estimate for unequal input rates. + +![](images/8470be8d21f2df18c5e48f770e3afc2671065b472b37dfa2819b6d115bb257fb.jpg) +(a) Actual value. + +![](images/66f0d322da25576c23cf6a51b59a27aabc9bc50a96b9bb64c623936f7c445aa6.jpg) +(b) Lower bound estimate. +Figure 3. Comparison of actual stationary distribution and lower bound estimate for equal input rates. + +# Computer Simulation + +Given the weaknesses of the analytic model, we adapt it to create a computer simulation with the freedom to change some assumptions in order to create a more realistic model. + +# Assumption Modifications + +Independent arrival processes: The probability of a car approaching the circle from one street does not depend on the probability of a car approaching the circle from a different street, nor does it depend on the probability distribution of how cars enter or leave the traffic circle. + +Drivers' intentions: Every driver wants to leave the traffic circle through a specific exit in the least amount of time possible. However, since a driver may be confused or unaware of surroundings, we define a fixed probability for a car to leave the circle successfully. While this feature allows for the possibility of getting stuck in the circle forever (reminiscent of Chevy Chase in National Lampoon's European Vacation), the probability of continually missing the exit is vanishingly low. + +Constant car length and speed: Vehicles all have the same length and speed. 
Adding variation would introduce unnecessary complexity into the model.

Yield sign is optimal for low traffic volume: According to both the literature and common sense, a periodic traffic light in a roundabout with few cars only hampers flow.

# Computer Simulation of One-Lane Roundabout

We want to compare our analytic results to a more-realistic simulation. We simulate cars arriving at a theoretical traffic circle, entering the circle, moving through it toward their desired exits, and exiting.

We fix the length of a car at $5\ \mathrm{m}$ and vary the speed inside the circle from 8 to $13\ \mathrm{m/s}$, based on the ranges presented in Robinson et al. [2000]. The capacity of the roundabout (the number of cars that can be inside at any one time) is determined by vehicle length, vehicle speed, and roundabout radius. At full capacity, cars inside the roundabout are spaced by 1 s of driving, ensuring sufficient space to maneuver.

# Description of Simulation Process

Our simulation determines when cars arrive at the circle from each entrance street, considered independently. For a random variable $U \sim \text{Uniform}[0, 1]$, the variable $-\ln(U)/\lambda$ is exponentially distributed with parameter $\lambda$; we use this transformation to generate the interarrival times for each entrance road.

We vary arrival rates by time of day; fewer cars should arrive at night than in the middle of the day or during rush hours. To account for this behavior, we scale the peak arrival rates by a time-dependent multiplier. The scaling function $f(t)$ consists of narrow Gaussians centered at each rush-hour time and a smaller-amplitude, slowly-varying Gaussian centered at midday. This function is plotted in Figure 4 for rush periods of 1 hr each at 8:00 A.M. and at 5:00 P.M.

![](images/d96531d1b0051c080c5ba632fdd505cd6283e6158919d5bbda047b947a9229b2.jpg)
Figure 4. Time-dependent arrival rate multiplier.

The arrival times for each entrance queue are computed and recorded prior to simulation.
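The inverse-transform sampling and Gaussian scaling described above can be sketched as follows. The rush-hour centers, widths, amplitudes, and the small nighttime floor are illustrative stand-ins, and scaling the rate stepwise is a simple approximation of a nonhomogeneous Poisson process (an exact simulation would use thinning):

```python
import math
import random

def interarrival(lam):
    """If U ~ Uniform(0,1), then -ln(U)/lam is Exponential(lam)."""
    u = 1.0 - random.random()  # shift away from 0 so log() is defined
    return -math.log(u) / lam

def rate_multiplier(t):
    """Narrow Gaussians at the 8:00 A.M. and 5:00 P.M. rush hours plus a broad,
    smaller midday bump; the 0.05 nighttime floor is an assumed value."""
    rush = sum(math.exp(-((t - r) / 0.5) ** 2) for r in (8.0, 17.0))
    midday = 0.3 * math.exp(-((t - 12.0) / 4.0) ** 2)
    return 0.05 + rush + midday

def arrival_times(peak_lam, horizon=24.0):
    """Pre-compute one entrance's arrival times (in hours) over one day."""
    t, times = 0.0, []
    while True:
        t += interarrival(peak_lam * rate_multiplier(t))
        if t >= horizon:
            return times
        times.append(t)

random.seed(1)
times = arrival_times(peak_lam=100.0)
assert all(a < b for a, b in zip(times, times[1:]))  # strictly increasing
```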
At each simulated arrival time, we add a vector to the right end of a dynamic matrix that represents the entry queue. The vector, corresponding to a car, contains the parameters that govern the car's behavior: arrival time, destination, and probability of missing the exit. The matrix columns represent the order of cars waiting to enter the traffic circle; because we treat the entrance queue as a first-in/first-out buffer, only the car in the leftmost column can enter the circle.

A car's destination is determined by relative exit popularity. When a car arrives at its exit, a random variable $U \sim \text{Uniform}[0,1]$ is simulated; if the value is less than 0.05, the car misses its exit and stays in the traffic circle.

To simulate traffic moving through the circle, we divide the circle into discrete positions based on the circumference and the length of a typical car. We number these positions in the same direction as the flow of traffic. Vectors from the leftmost position in an entry-queue matrix are placed into the traffic circle if the entry position and the position immediately behind the entrance are both vacant. Thus, we obtain a "circle matrix" where each column pertains to a position. Moving an entire column of the matrix simulates an individual car's movement through the circle.

At regular time intervals, based on the speed of the cars and the size of the circle, we rotate the columns of the matrix. After each rotation, cars check to see whether they have reached their destination and, if so, determine whether they exit. Once a car exits, its time spent in the circle is calculated by subtracting its arrival time from its exit time. The simulation then erases the vector at the car's position from the circle matrix to indicate that the car has left the circle.
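The rotation step can be sketched with a fixed ring of positions. Here a `deque` stands in for the circle matrix, the 5% miss probability is the one defined above, and the circle size and positions are arbitrary illustrations:

```python
import random
from collections import deque

MISS_PROB = 0.05  # probability of missing the exit, as defined above

def rotate_and_exit(circle, t, finished):
    """Advance every car one position, then let cars at their exits leave.

    circle: deque of slots; each slot is None or a dict describing a car.
    finished: list collecting each exiting car's time spent in the circle.
    """
    circle.rotate(1)  # one step of movement for every car at once
    for pos, car in enumerate(circle):
        if car is not None and pos == car["dest"]:
            if random.random() >= MISS_PROB:  # 95% of the time, take the exit
                finished.append(t - car["arrival"])
                circle[pos] = None  # erase the slot: the car has left

# A car entering at position 2 and bound for position 5 (illustrative sizes).
random.seed(0)
circle = deque([None] * 12)
circle[2] = {"arrival": 0, "dest": 5}
finished = []
for t in range(1, 10):
    rotate_and_exit(circle, t, finished)
assert finished == [3]  # three rotations carry the car from position 2 to 5
```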
After exiting cars leave, cars waiting to enter the circle make the following two checks, both of which must be satisfied in order to enter the circle:

Check traffic signal: The car checks a signal matrix whose rows are indexed by the entrance locations and whose columns represent fractions of a traffic-light cycle. Each entry $(i,j)$ of the signal matrix indicates whether the $i$th light is red or green during the $j$th signal interval, where each signal interval is

$$
\frac{20}{\mathrm{round}\left(\dfrac{\text{car length} + \text{speed}}{\text{speed}}\right)}
$$

seconds long. At the start of the simulation, $j = 1$; each time a signal interval elapses, $j$ increases by 1. Iteration continues until we reach the end of the signal matrix, signifying the end of the traffic-light cycle; at that point, $j$ is reset to 1. The time $t$ of the simulation step determines which value of $j$ is used. If the entry of the matrix is 0, the light is red and the car cannot enter the circle; if the entry is 1, the light is green and the car can enter if there is space.

For each run of the simulation, three signal matrices are used: one each for late night/early morning, rush hours, and midday. A signal matrix whose entries are all 1 is referred to as a yield matrix, because it acts like a yield sign; the late night/early morning signal matrix is always a yield matrix.

Check for cars in the circle: Cars permitted to enter the circle by the signal matrix must nonetheless yield to traffic in the circle. A car checks the circle matrix to see whether both the entrance position and the position immediately behind it are unoccupied, so that it neither hits a car in the circle nor cuts one off.

If both conditions are satisfied, the simulation puts the car into the circle by removing the leftmost column of its entry matrix and copying it into the entrance position in the circle matrix.
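These two checks can be sketched as a single predicate. The matrix shapes, interval length, and entry position below are placeholders; `signal` follows the 0 = red, 1 = green convention above:

```python
def can_enter(entrance, t, signal, circle, entry_pos, interval_s):
    """Return True if the car at `entrance` may enter at time t (seconds).

    signal: 0/1 matrix; rows = entrances, columns = intervals of one light cycle.
    circle: list of positions, None meaning vacant.
    entry_pos: index of this entrance's merge position on the circle.
    interval_s: length of one signal interval, in seconds.
    """
    # Check 1: the light for this entrance during the current signal interval.
    row = signal[entrance]
    j = int(t // interval_s) % len(row)  # wraps at the end of the light cycle
    if row[j] == 0:                      # red light
        return False
    # Check 2: entrance position and the position immediately behind it.
    behind = (entry_pos - 1) % len(circle)
    return circle[entry_pos] is None and circle[behind] is None

# With a yield matrix (all ones), only the space check matters.
yield_matrix = [[1] * 4 for _ in range(4)]
circle = [None] * 12
assert can_enter(0, 0.0, yield_matrix, circle, entry_pos=3, interval_s=5.0)
circle[3] = {"arrival": 0.0, "dest": 7}  # entrance position now occupied
assert not can_enter(0, 0.0, yield_matrix, circle, entry_pos=3, interval_s=5.0)
```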
# A Metric to Measure Traffic Flow

We use the average time spent in the system per car over one day as the primary measure of how the simulation behaves. However, cars during rush hour wait longer than cars at midday or at night, so the maximum time spent in the system gives us a sense of the worst-case scenario. A good flow-control system (or signal matrix) should produce low values for both the average time and the maximum time.

# Justification of Experimental Methodology

Our literature search and analytic model reveal that yield control is by far the most common and effective form of roundabout flow control, so we use our simulation to test the effectiveness of a yield sign vs. a traffic light.

First, we assume that late at night and early in the morning, when traffic flow is minimal, a yield sign (or a perpetually-green traffic light) is optimal. We ran three simulations on each of 100 combinations of matrices, 98 of which were randomly generated; we always compared the random signal-matrix results to the yield signal matrix and a fixed non-yield signal matrix. Every matrix set was run on the same roundabout.

We want to eliminate matrices that represent unrealistically long periods of red light. We force our midday signal matrix to satisfy the following (where $\mathbf{g}_{\mathrm{yield}}$ represents the matrix of all ones and $\mathbf{g}_{\mathrm{mid}}$ is our midday matrix):

$$
\left\| \mathbf{g}_{\mathrm{yield}} - \mathbf{g}_{\mathrm{mid}} \right\|_{\infty} \leq 2. \tag{1}
$$

For our rush-hour signal matrix $\mathbf{g}_{\mathrm{rush}}$, we enforce the condition

$$
\left\| \mathbf{g}_{\mathrm{yield}} - \mathbf{g}_{\mathrm{rush}} \right\|_{\infty} \leq 3. \tag{2}
$$

These conditions force a sufficient number of 1s in each row of the matrix. We enforce slightly different conditions during rush hour vs.
midday because of the decreased traffic volume at midday: as traffic volume decreases, so does the need for control.

# Simulation Results, Part 1: Flow-Control Considerations

From the analytic model, we conclude that the most effective control is for entering cars to yield to cars in the circle. We wish to see whether this result holds for the more-complicated simulation; in terms of simulation variables, we want to know whether the yield matrix is the optimal choice of signal matrix.

Using the yield matrix, we ran simulations using different relative distributions for input rates from four entrance streets, with the ratio of smallest input rate to largest ranging from 1:1 to 1:8. We plot the number of cars in each part of the system against time. Traffic congestion appears in the plots as extreme peaks in the density.

In Figure 5(a), all streets have the same entrance rate. As one can see in the second row from the back, the majority of cars enter the circle almost immediately.

We observe similar behavior when some streets have higher input rates than others; we consider a street to have a "major" input rate if its rate is at least as high as every other street's. In Figure 6(a), two streets have major inputs, but the plots appear almost exactly the same as in Figure 5(a), with a discrepancy only in the peak total density. Furthermore, with only one major street (Figure 6(b)), we see even better performance: the peaks are significantly decreased. This shows that yield signs are self-regulating enough to behave well under both high input and low input.

We now turn our attention to systems with traffic lights at each entrance. In this case, the traffic-signal matrices contain both 1s and 0s, although we enforce the condition that no row contains all 0s (which would stop all traffic). Also, we use these non-yield methods only during the rush-hour and midday periods and use the standard yield matrix at night.
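Random candidate signal matrices satisfying conditions (1) and (2) can be generated as below. The $4 \times 4$ shape is a placeholder; the zero budgets come from the $\infty$-norm bounds, and the no-all-red-row rule is enforced directly:

```python
import random

def random_signal_matrix(rows, cols, max_zeros_per_row):
    """0/1 matrix with at most max_zeros_per_row zeros per row and no all-red
    row, so ||g_yield - g||_inf (the max number of zeros in a row) is bounded."""
    matrix = []
    for _ in range(rows):
        k = random.randint(0, min(max_zeros_per_row, cols - 1))  # keep one green
        row = [1] * cols
        for j in random.sample(range(cols), k):
            row[j] = 0
        matrix.append(row)
    return matrix

def inf_norm_from_yield(matrix):
    """Infinity norm of (g_yield - g): the maximum number of zeros in a row."""
    return max(row.count(0) for row in matrix)

random.seed(2)
g_mid = random_signal_matrix(4, 4, max_zeros_per_row=2)   # condition (1)
g_rush = random_signal_matrix(4, 4, max_zeros_per_row=3)  # condition (2)
assert inf_norm_from_yield(g_mid) <= 2
assert inf_norm_from_yield(g_rush) <= 3
assert all(any(row) for row in g_mid + g_rush)  # no row is entirely red
```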
Using the same input rates as in Figure 5(a), we obtain the plots in Figure 5(b). The shapes of the plots appear similar, but the scaling is different. The maximum peak in Figure 5(a) barely reaches 50 cars, but in Figure 5(b) the peak reaches 70 cars.

These results suggest that non-yield control is not optimal, but we should not jump to that conclusion from a single comparison. We ran 100 such trials with different random traffic-signal matrices and compared them with a trial that used the yield matrix. The results are in Figure 7. The horizontal lines indicate the mean (396 s), median (232 s), and minimum (23 s) values. The yield matrix, with an average of 32 s, was not the best trial (although the granularity of the plot partially hides this fact). In fact, 4 of the 100 trials with random non-yield traffic matrices beat the yield matrix, by a margin of about 9 s.

Nonetheless, these few results do not shatter the conclusions from our analytic model. The matrices that seem to improve flow are extremely similar to the yield matrix, with only one or two 0s in the entire matrix and no row containing more than one 0; and we used these matrices only during peak traffic hours (less than $1/12$ of the day), so the fact that they showed better performance than the yield matrix can be attributed to chance. The overall experimental result is telling: $96\%$ of the trials were significantly worse than the yield matrix, and the "better" matrices improved the process by only a small margin, too small to warrant the cost of traffic lights.

We also tested our simulation on various roundabouts. For five other parameter sets, varying size, speed, and input flow, we ran the same experiment with 19 random matrices per parameter set. In each case, the yield matrix performed as well as, or better than, any of the signal matrices. Thus, we base our recommendation on consistent results over 200 trials across various roundabout designs.
![](images/59d5441f57ae88567ec0d75a52eb9e32a079c0a7d653b0051f34aabb5eb8a98d.jpg)
(a) Yield flow control.

![](images/f023bb9045dba4e4a96c1853965ff44590303f5ae7b21dbb4c7ec5bb191bb33b.jpg)
(b) Non-yield flow control.
Figure 5. Car density comparison of yield vs. non-yield when all entrances have similar input rates.

![](images/58c5d022e0ab899ec7cac5b7416ae6379a478a57e0623e7d6480869ae4ab1714.jpg)
(a) Two major input rates.

![](images/d89f42ceaea7a70c2ff43582cbf730f8326429893b59f040b0e35ca490f822d9.jpg)
(b) One major input rate.
Figure 6. Car density for yield flow-control.

![](images/faeb97ced2728c2ad838bd2c68e76bfd9cb6bef3e0dff8fd4ea41dde566b09e4.jpg)
Figure 7. Average time spent in system with random traffic-signal matrices, for 100 trials.

# Simulation Results, Part 2: Size Considerations

Because larger traffic circles have higher capacities, we investigate the effects of the circle radius on traffic flow. As the radius of the circle increases, the number of cars that can fit in the circle also increases, so fewer cars should be waiting in the entrance queues. Thus, for large total input, a larger circle should perform better. Of course, as we increase the radius, the circle becomes like a very long one-way street that curves; cars take a long time to pass through simply because they drive farther. Larger circles also cost more and demand more space, so we wish to find an optimum radius.

However, we cannot use our simulation to find an exact relation between total input rate and optimum radius. Simulation results may demonstrate typical behaviors, yet the nature of random simulation prevents establishing an exact function for the optimum radius in terms of total input rate.

Using the yield matrix and two major streets with two minor streets, we ran a set of trials varying total input rate and circle radius.
In Figure 8, the flat plane represents "well-behaved" systems in which larger radii produce lower average time in a one-lane roundabout. Also, for fixed radius, the average time spent in the system increases with total input.

What is most interesting in the plot is the rapid change in behavior once the total input rate goes above 3,000 cars/hr (just under one car per second). We expect more delays as more cars try to enter the system, but we also expect larger radii to decrease the delays. With total input 4,000 cars/hr, a circle of radius $35\ \mathrm{m}$ performs better than one of $30\ \mathrm{m}$, as expected; but one of radius $40\ \mathrm{m}$ performs worse than both, which is entirely unexpected. Thus, we conclude that for total peak flow of less than 3,000 cars/hr, increasing radius is directly correlated with decreasing average total time, but at higher flow rates the correspondence becomes erratic.

![](images/1030f85474096dc9460cc00cc95fb5c9f863817a4670ec46bb00334a19177d3e.jpg)
Figure 8. Average time spent in system for various input rates and circle radii.

This unexpected behavior reveals the limitations of our model. A single-lane roundabout with four entrances cannot handle grossly inflated input, regardless of size.

# Strengths and Weaknesses

# Analytic Model

The analytic model is limited in many ways. We sacrifice many kinds of complexity to formulate a closed-form stationary distribution; but in the end, the sheer number of equivalent states that our system can take thwarts direct analysis. Our lower-bound calculations for the stationary distribution are elegant but provide a bound that is an order of magnitude below the function itself. We can show that the lower bound grows with $g$ for small $k$, but we do not prove that the overall shape of the lower bound always emulates that of the actual function; we do show that the two functions are behaviorally similar in two specific cases.
This model is useful as a basis for our computer simulation and for narrowing our search for effective control systems.

# Computer Simulation

The computer simulation addresses many of the limitations of the analytic model. It introduces time-dependent flow, limits the capacity of the roundabout, and more directly simulates the action of a traffic light as a discrete system rather than as a time-averaged parameter. This formulation allows us to explore a wide range of parameters beyond the convergence constraints of the analytic model.

The computer simulation is limited by the vastness of the parameter space. We could not implement an optimal signal-matrix search, because determining the functional value of a signal matrix is computationally intensive and because the dimensionality of the variable space is so large: the independent variable space for a signal matrix for a 4-entrance roundabout has 16 dimensions, and the simulation uses 3 different signal matrices in every run.

The analytic model was useful, therefore, in restricting our search to signal matrices close to the yield matrix. We ran hundreds of trials with randomly-generated signal matrices containing no more than three 0s per row. Within this search space, the yield matrix performed better in the vast majority of cases. Thus, our simulation confirms that, compared to yield signs, traffic signals have at best comparable efficacy.

The simulation is limited in scope. It does not account for pedestrian traffic, driver mistakes or accidents, weather conditions, or other such factors. It is also limited to one-lane roundabouts. As Figure 8 shows, flow rates in excess of 2,500 cars/hr clog the roundabout, regardless of input control. To some extent, increasing flow can be mitigated by increasing the roundabout radius; however, for flow rates in excess of 3,000 cars/hr, a two-lane roundabout is necessary.
A simple case of this would be a roundabout with outer "express" lanes in which a vehicle can travel only from one entrance to the next exit. In this case, traffic signals would always impair flow, because the "express" lanes are always vacant for an entering vehicle.

# Conclusion

Our search through the literature, the parameter space, and computer-generated experimental results brings us to a conclusion validated in intersections across the U.S.: yield-sign control is nearly always the best way to regulate roundabout entry.

# References

Arizona Department of Transportation. n.d. Modern roundabouts. http://www.azdot.gov/CCPartnerships/Roundabouts/index.asp.

Bouchouch, A., Y. Frein, and Y. Dallery. 1992. A decomposition method for the analysis of tandem queueing networks with blocking before service. In Queueing Networks with Finite Capacity: Proceedings of the Second International Conference on Queueing Networks with Finite Capacity, May 28-29, 1992, edited by Raif O. Onvural and Ian F. Akyildiz, 97-112. Amsterdam, Holland: North-Holland.

Durrett, Rick. 1999. Essentials of Stochastic Processes. New York: Springer.

McLawhorn, Nina. 2002. Roundabouts and public acceptance. http://on.dot.wi.gov/wisdotresearch/database/tsrs/tsroundabouts.pdf.

Medhi, J. 2003. Stochastic Models in Queueing Theory. San Diego, CA: Academic Press.

National Cooperative Highway Research Program. 1998. Modern Roundabout Practice in the United States. Washington, DC: National Academy Press.

Robinson, Bruce W., et al. 2000. Roundabouts: An Informational Guide. Washington, DC: Federal Highway Administration, U.S. Dept. of Transportation. http://www.tfhrc.gov/safety/00068.htm.

Ross, Sheldon. 2006. A First Course in Probability. 7th ed. Englewood Cliffs, NJ: Prentice Hall.

![](images/9087e2093b037e5df49837fa090920e05c634157b4c28d839404ec79a8cd4f1e.jpg)
Geoffrey Peterson, Anil Damle, Anna Lieb, and advisor Anne Dougherty.
# Judge's Commentary: The Outstanding Traffic Circle Papers

Kelly Black

Dept. of Mathematics

Clarkson University

P.O. Box 5815

Potsdam, NY 13699-5815

kjblack@gmail.com

# Overview of the Problem

Teams that decided to explore the "A" problem in this year's Mathematical Contest in Modeling examined ways to control the movement of vehicles in a traffic circle. Here, a broad overview of the criteria developed by the judges and of the judges' experiences is given.

In the following section, a brief overview of the problem statement is given. Next, the judging process itself is described. A list of some of the common approaches adopted by the teams follows. Finally, some of the common themes and more detailed points that emerged as the judging proceeded are discussed.

# Traffic Circles

The focus of the "A" problem is controlling the movement of vehicles in a traffic circle. A number of controls are explicitly given in the problem statement. The teams that submitted papers for this problem mainly focused on the given controls, and very few examined other types of controls.

The problem statement includes two requirements. First, the teams were asked to find a way to control the flow of traffic in an optimal way. Second, the teams were asked to write a summary of their findings. These two aspects are explored individually in the subsections that follow.

# The Goal

The goal for this problem is to find a way to move vehicles through a traffic circle in an optimal way, as stated in the second paragraph of the problem statement:

The goal of this problem is to use a model to determine how best to control traffic flow in, around, and out of a circle.

What "best" means is not specified; it was left open for the teams to decide.
The teams were required to make clear in their reports how they interpreted this part of the problem:

State clearly the objective(s) you use in your model for making the optimal choice as well as the factors that affect this choice.

The judges expected the teams to describe their objectives clearly, and we expected the subsequent evaluation of the model to be consistent with the stated objectives. This can be difficult for teams to achieve, given the dynamic of writing as a team, the way approaches evolve as the problem is explored, and the intense time pressure. Teams that managed to maintain a high level of consistency tended to elicit a more-positive response from the judges.

# Technical Summary

An essential requirement was to write a technical summary; the requirements for it were given in the problem statement. This was a difficult aspect of the problem: the teams were expected to provide a broad set of guidelines for a traffic engineer in a brief note.

The traffic engineer should be able to read the summary and come away with a strong sense of the different methods available. The different circumstances that affect the decision should also be included. Examples of important parameters are the radius or geometry of the circle, the rate of flow of traffic coming into the circle, and the density of traffic coming into the circle. Very few teams considered the traffic capacity of roads leaving the circle, and most assumed that the incoming traffic was the primary limiting factor.

The traffic engineer is also expected to obtain a broad understanding of the conditions under which the model is applicable. This implies that the engineer should be able to read the summary and obtain a basic understanding of how the model was developed and of the potential pitfalls.

Writing the summary was a difficult task for the teams, who had a diverse amount of information to convey in two pages.
The teams that managed to convey a sense of their basic models, the underlying assumptions, and the limitations of their models tended to make a stronger impression.

# Grading Process

First, a brief overview of the evaluation process. The papers are evaluated in three stages. There is an initial round whose focus is deciding which papers to remove from the pool. The second, or screening, round focuses on which papers meet the minimal requirements for an advanced score. In the final round, the judges focus on which papers meet the highest standards.

# Initial Grading

The initial round is designed to remove from the pool papers that are not likely to meet the standards of the following round. Each paper is read by at least two people. Papers that receive consistently low scores are not passed on to the next round. Papers with mixed reviews are read by more people. When the reviewers are unsure, they try to err on the side of caution and pass the paper on to the next round.

It is absolutely essential that a paper be well-written and have a clear, concise summary to make it past the initial round. A paper that does not provide a clear overview, including results and a synopsis of the techniques used, will not make a strong impression on the judges. The summary and the rest of the paper must also be consistent: differences between the summary and the following pages are immediately apparent and do not leave a positive impression.

# Screening Rounds

As the judges examine papers in the next set of rounds, they try to decide whether each paper meets the minimal requirements to do well in the following rounds. The number of times that a paper is read in these rounds varies from year to year. Again the judges try to err on the side of caution; but as the rounds proceed, the criteria for doing well become increasingly stringent.
It is still important to have a strong summary, but consistency across the whole paper matters more. Proper citations and correct grammar are also important. This year, a large body of literature was available to the teams, so it was even more important than usual to include proper citations and to make clear which work was done by the team and which work was found in the literature search.

# Final Rounds

In the final rounds of judging, the focus is on finding the best submission. At this point, each paper is read many times, and more time is available for each reading. The judges are able to focus more on each individual step and on consistency across the whole paper. The papers that remain in these final stages must maintain high scores to move forward.

# Approaches

The flow of traffic in roundabouts is an active research area, and the available literature influenced many of the teams. Most teams used either a deterministic approach or a stochastic approach. Here we examine each of these approaches separately.

# Deterministic

The teams that adopted a deterministic approach tended to make greater use of models based on partial differential equations. A variety of conservation laws have been derived to model traffic flow. Such models tend to focus on relatively simple traffic geometries and require considerable adaptation to model a traffic circle.

At first glance, a conservation law for a traffic circle seems to avoid the issues associated with boundary conditions because the geometry is periodic. Unfortunately, the exits and entrances of the feeder roads create other difficulties, and adapting the models to include them occupied the majority of the modeling efforts.

The second difficulty with this approach is finding an approximation to the solution. The equilibrium solutions to the equations are piecewise-constant functions, and the conservation law gives rise to shocks.
Given the complex boundaries, the method of characteristics is complicated, and the numerical approximations can be daunting, since the techniques must account for upwinding.

# Stochastic

The majority of teams used a stochastic approach. In general, they examined either queues or networks, and a common approach was a hybrid model combining the two. A typical paper included an overview of the model, some theoretical results for a simple situation, and results for a computational model.

Teams adopting this approach were expected to use proper citations, because of the wide body of work available. The judges also paid more attention to consistency across the whole paper: the summary, model, results, and discussion had to agree.

Another issue that emerged in some papers is a disconnect between the section discussing the theory and the section presenting the numerical simulations. Many of the top-rated papers provided some theoretical results for simplified geometries or simulations, and the majority of these went on to include the results of numerical simulations for more complicated cases. The few teams that provided a confirmation of the numerical model on a simple geometry made an immediate positive impression.

Another issue is how to report the results of simulations in a coherent manner. The development of the model requires a probabilistic approach, while the analysis of the numerical trials requires a shift to a statistical approach. The majority of teams simply reported means and sometimes standard deviations. Few teams reported results using graphical methods such as boxplots or histograms, and even fewer teams made use of appropriate quantitative statistical methods.

Finally, when designing the numerical trials, few teams examined a range of values for the parameters in their models. Every year, the judges rate this kind of exploration as a crucial part of the problem.
We expect to see an exploration of the results given small changes in parameters or assumptions. The few teams that did examine this aspect immediately caught the judges' attention.

# Common Themes

The previous section gave some observations specific to this year's competition. Here we explore some general observations that come up every year.

# Summary

The summary is an important part of the team's entry. It is the first thing that a judge will read, and it forms the first impression. It is vital that a paper have a complete and well-written summary to make it past the initial rounds. It is also vital that the details in the summary be consistent with the rest of the paper.

Writing a one-page summary of the team's efforts is a difficult task. The teams are expected to provide a brief overview of the problem. They are then expected to let the reader know their specific conclusions and recommendations. Finally, the teams are expected to provide the reader with an overview of the approach that they used.

It is difficult to fit all three of these parts within the one-page summary. Many teams find it tempting to include a large amount of background information or clever narratives motivating the problem. Unfortunately, such material in the summary can drastically reduce the space available to discuss the team's results and approach.

# Grammar, Punctuation, and Equations

The presentation of the team's model and results cannot be separated from the model itself. A team must have a reasonable model, including a basic analysis of it. The teams are then expected to share their results in a clear and concise discussion.

Teams that do not make use of proper grammar and punctuation are not likely to make it past the initial rounds of the competition. Teams must know how to include equations in their writing and use proper punctuation.
Advisers should not take it for granted that their students know how to do these things.

# Proper Citations

The judges expect every entry to include proper citations. Many teams are comfortable exploring the resources available to them, and it is unusual to come across an entry with a unique approach. The different types of approaches can be easily categorized, and the judges quickly figure out the sources available for each approach.

# Sensitivity and Stability

Sensitivity and stability are always important. The few teams that make a concerted effort to explore this aspect of their model will almost always stand out. Exploration of the sensitivity of a model can be as simple as testing what happens for a different range of values in a parameter, and it can include the use of more sophisticated methodologies such as an exploration of a sensitivity matrix.

Every year, teams are able to implement nontrivial numerical simulations. The teams must make decisions about what numerical trials to examine. It is extremely rare for teams to scale a problem as a way to decide which combinations of parameters are important.

# Figures and Tables

The integration of graphs and tables into a paper is a challenge for many teams. It is not uncommon to see entries in which figures and tables are included with no detailed discussion of them. The teams need to integrate the figures and tables into their discussion.

Given the increased use of simulations and numerical results, it is vital that the teams find a way to weave descriptions of their figures and tables into their narrative. The teams need to let their readers know the key aspects of their figures and tables and how to interpret them.
+ +# Consistency Across the Paper + +The teams have a limited time to understand the problem, derive a mathematical description of the problem, perform the requisite analysis of their model, and then come back and interpret their work with respect to the original context. Over the course of the weekend, teams make decisions and explore a variety of different approaches. The time constraints make it extremely difficult to complete a paper in which the wide array of assumptions and analyses are consistent across the whole paper. + +# Conclusions + +A team's submission must satisfy a wide array of criteria to be successful and proceed through each stage of the judging. The presentation and grammar are vital aspects of a submission. The team's results are given through the filter of the team's writing. + +The team must provide a strong analysis. The teams only have four days, and the judges do not expect extensive and sophisticated models. A careful analysis of the resulting model is required, though. + +Each year, the expectations are different, but there are a few constants. For example, a clear discussion of the basic assumptions—with some justification, citations, and a discussion of the implications—is necessary. Additionally, judges always expect a focused discussion on stability and sensitivity. + +In this year's competition, the use of simulation was a part of the majority of entries. Incorporating an analysis of simulations is a difficult task, and the top entries did a remarkable job of integrating the development and analysis of their model with the discussion of the results of their numerical trials. + +Teams that were able to tie together the theoretical analysis of their model along with their numerical trials received immediate positive recognition. The best entries were able to develop multiple models of varying complexity and verify their numerical models with the theoretical results of the simpler models. 
+ +# About the Author + +Kelly Black is a faculty member in the Dept. of Mathematics and Computer Science at Clarkson University. He received his undergraduate degree in Mathematics and Computer Science from Rose-Hulman Institute of Technology and his Master's and a Ph.D. from the Applied Mathematics program at Brown University. He has wide-ranging research interests, including laser simulations, ecology, and spectral methods for the approximation of partial differential equations. + +# Mobile to Mobil: The Primary Energy Costs for Cellular and Landline Telephones + +Nevin Brackett-Rozinsky + +Katelynn Wilton + +Jason Altieri + +Clarkson University + +Potsdam, NY + +Advisor: Joseph Skufca + +# Summary + +We determine that cellphones are the optimal communication choice from an energy perspective, using a comprehensive analysis based on multiple factors. We split phones into three categories: cellular, cordless landline, and corded landline. We average the energy used in manufacture and transportation over the life of each phone. To account for the inefficiency of production, we calculate in terms of primary energy, which is the amount of fuel supplied to a power plant per unit of energy produced for consumption. We use real-world data for population, number of mainlines, and cellphone subscriptions. + +During the transition, as cellphones overtake landlines, part of the population owns both types of phone. As a result, the total energy used by telephones increases. We fit a competing-species model to past statistics; it forecasts that the net energy cost of the cellphone revolution (1995-2025) in the U.S. will be 84 TWh. At the start of this period, there were 0.1 cellphones per capita; at the end there will be 0.1 landlines per capita. Energy savings will begin in 2022. After this transition, savings will be $30\mathrm{GWh / d}$ . 
The competing-species model is a proven technique; we apply it to telephone lines and cellphones per capita, and also use it in conjunction with population projections to develop a closed-form solution.

The most energy-efficient way to provide phone service in a country with no existing infrastructure is to construct a cellular network. By amortizing the fixed setup costs over the lifetime of the phone system, the energy used during construction is negligible. For a country similar to the U.S., the annual savings would be 12 TWh. Over the next 50 years, the energy savings would equal 0.5 billion barrels of oil.

Cellphone chargers waste energy, but the total energy wasted would be almost five times as great if everyone instead used a cordless phone. Continuing advances in charger technology are reducing charger waste. If all cellphone chargers in the future meet a 5-star Energy Star rating, they will be 10 times as efficient as now.

Our model is supported by historical data and numerous publicly available statistics. One factor not accounted for is the maintenance and operating power required for cell towers and physical telephone lines.

# Introduction

Over the past 15 years, cellphone subscriptions in the U.S. have increased dramatically. At the same time, growing concerns over oil supplies have increased public consciousness of energy efficiency. We compare the energy use of cellphones to that of traditional landlines. Major factors include:

- power used while charging,
- power used while idle,
- time charging each day,
- time idle each day,
- energy to manufacture and transport the phone,
- lifespan of the phone, and
- total number of phones.

These values, many of which depend on the type of telephone, allow for a comprehensive analysis of the energy consequences of the cellphone revolution. Our model quantifies the effects of cellular and landline telephones on power consumption.
+ +# Assumptions + +- Cellphones and landline phones compete for the same market. +- Residential, commercial, nonprofit, and government telephones are included in the total number of phones. + +- The total number of phones is averaged by household. +- Every cellphone comes with a charger and lithium-ion battery [1]. +- A cellphone's battery will not be replaced but discarded with the phone. +- Overcharging or undercharging a lithium-ion battery does not affect its life or performance [6]. +- Nickel-hydride batteries are used in cordless phones [7]. +- The total energy used in manufacturing a landline phone is half that of manufacturing a cellphone. +- A person may own more than one telephone. +- In a household with cellphones, each of its $m$ members has their own cellphone. +- Every person within the population is part of a household. +- A charger is any item used to recharge batteries, including those within electronic devices such as laptop computers, cellphones, and cordless phones. Appliances such as televisions, refrigerators, and microwaves are not included, since they are not rechargeable devices. +- The fixed energy required to construct telephone infrastructure, when averaged over the duration of the phone system, is negligible. 
+ +# Important Variables + +- $H$ , the number of households in the country; +- $Z_{\mathrm{cell}}$ , the number of cellphones per hundred people; +- $Z_{\text {landline }}$ , the number of landlines per hundred people; +- $N_{\mathrm{cell}}$ , the number of cellphones; +- $N_{\text {landline }}$ , the number of landline phones; +population; +- power drawn by each type of phone when idle; +- power drawn by each type of phone when charging or active; +- $W_{p} = 3.0128$ , ratio of primary energy input at a power plant to energy drawn off the grid [9]; +- on average, a cellphone's battery must charge for one hour a day [2]; +- $75\%$ of landline phones are cordless and $25\%$ are corded; +- the average lifespan of a corded landline phone is 20 years; + +- the average lifespan of a cordless landline phone is 10 years; +- cellphones last 1.5 years [3,4], whereas lithium-ion batteries and chargers last 3–4 years [5]; +- each landline connection has an average of $m$ phones connected to it; and +- $m = 2.37$ members in the average household [8]. + +# Part 1: Existing Infrastructure + +# Transition + +The U.S. has a mixture of cellphones and traditional landline phones. Currently, $84\%$ of the population has a cellphone subscription, with $16\%$ of U.S. households owning only cellphones [10]. The U.S. is currently in transition from exclusive use of landlines to exclusive use of cellphones. During this transition, cellphones and landline phones compete for consumers. The target market is the entire population, which grows over time. As such, the number of cellphones and landlines per hundred people is time-dependent, as seen in Figure 1. + +![](images/45ff09c33ab9e9390180a1bed640d562ee630f4bf7ea2d0d1129ddf94a00b88a.jpg) +Figure 1. Historical data for phone ownership in the U.S. + +As cellphones became popular, the number of landlines decreased. This suggests the data can be described with a differential competing-species model [11, 12, 13]. 
The competing-species model describes two species that both require a single finite resource and impede each other from acquiring it. The system of equations for this model is

$$
\frac{dx}{dt} = x(a_1 - b_1 x - c_1 y), \qquad \frac{dy}{dt} = y(a_2 - b_2 y - c_2 x),
$$

which cannot be solved analytically. In these equations, $x$ is the population of one species, $y$ is the population of the other species, and $a_1$ and $a_2$ are the unconstrained growth rates of the populations. The ratios $a_1 / b_1$ and $a_2 / b_2$ are the maximum populations for each species. The coefficients $c_1$ and $c_2$ are competition factors accounting for the negative effect that each species has on the growth of the other.

For our purposes, the two species are cellphones and landlines. The resource is market saturation among the proportion of the population of the U.S. willing to purchase phones. When total phone ownership exceeds the equilibrium value, one of the two types of phone will have to die out, or become obsolete. The competition model can be applied by taking $Z_{\mathrm{cell}}$ as the number of cellphones per hundred people and $Z_{\mathrm{landline}}$ as the number of landlines per hundred people. We determined appropriate coefficients for this model graphically by solving the equations numerically in Matlab [14]. The results are

$$
\frac{dZ_{\mathrm{cell}}}{dt} = Z_{\mathrm{cell}} \left[ 0.315 - \left(\frac{0.315}{110}\right) Z_{\mathrm{cell}} - \left(4.77 \times 10^{-4}\right) Z_{\mathrm{landline}} \right],
$$

$$
\frac{dZ_{\mathrm{landline}}}{dt} = Z_{\mathrm{landline}} \left[ 0.21 - \left(\frac{0.21}{110}\right) Z_{\mathrm{landline}} - \left(2.50 \times 10^{-3}\right) Z_{\mathrm{cell}} \right].
$$

With these coefficients, the model fits the historical data accurately from 1995, when cellphones reached a penetration level of 10 per hundred people, to 2006, the last year for which data for both phone types were available. The graphical solution and its projection through 2030 appear in Figure 2. This model predicts that the market will support up to 1.1 cellphones per capita, or up to 1.0 landlines per capita. Based on an average of 2.37 telephones connected to each landline, there is a maximum of 2.37 landline phones per person. Included in these numbers are residential, commercial, nonprofit, and government-owned phones. The "Cellphone Revolution" is taken as the time period from 1995, when cellphones first reached a saturation of 0.1 per capita, through 2025, when landlines drop below a saturation of 0.1 per capita.

Using this model, the total number of cellphones and the total number of landline telephones can be predicted for any future year. These numbers are used to determine the energy requirements in terms of gigawatt-hours per day (GWh/d). The power needed for each type of phone is in Table 1. For our purposes, a cordless phone is a landline telephone with batteries or other electronics, which draws constant power from the electrical grid. A corded phone gets all of its power from the telephone line. The energy to manufacture, ship, and dispose of a cellphone equals $180 \mathrm{MJ}$, or $50 \mathrm{kWh}$.

![](images/d618a57393ad4c6d1cb00f2e4d5f1bf4996395bd7997591fae0fb80de31c6c71.jpg)
Figure 2. Historical data with projections from the competition model.

All power quantities are listed in terms of rate of primary energy use, which accounts for the fact that for every watt-hour drawn from the grid, 3.0128 watt-hours worth of fuel were used to produce it [9]. In this way, we account for inefficiencies in the power generation and distribution systems [15].
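The fitted system above can be integrated numerically to reproduce the projection in Figure 2. A minimal sketch in Python, using SciPy's `solve_ivp` in place of the paper's Matlab run; the 1995 starting values (about 10 cellphones and 60 landlines per hundred people) are rough assumptions, since the exact initial data are not restated here:

```python
from scipy.integrate import solve_ivp

# Fitted competing-species system (units: phones per hundred people).
def phones(t, z):
    cell, land = z
    dcell = cell * (0.315 - (0.315 / 110) * cell - 4.77e-4 * land)
    dland = land * (0.21 - (0.21 / 110) * land - 2.50e-3 * cell)
    return [dcell, dland]

# Assumed 1995 starting values: ~10 cellphones, ~60 landlines per hundred people.
sol = solve_ivp(phones, (1995, 2030), [10.0, 60.0], dense_output=True)
cell_2030, land_2030 = sol.sol(2030)
```

As in Figure 2, cellphones climb toward the carrying capacity of 110 per hundred people while landlines are driven out: once cellphones saturate, the landline growth rate $0.21 - 2.50 \times 10^{-3} \cdot 110 < 0$, so landlines decay toward zero.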
The three cellphone types listed correspond to the power required for their chargers when idle. + +Table 1. +Primary power levels. + +
| Device | Idle (W) | Active (W) | Active hours (h) | Fixed (Wh) | Life (d) | Daily energy (Wh) |
|---|---|---|---|---|---|---|
| Corded phone | 0 | 0.452 | 1 | 25,000 | 7,200 | 3.92 |
| Cordless phone | 5.12 | 10.24 | 2 | 25,000 | 3,600 | 140.11 |
| Cellphone (avg) | 0.904 | 15.06 | 1 | 50,000 | 540 | 128.44 |
| Cellphone (new) | 0.301 | 15.06 | 1 | 50,000 | 540 | 114.59 |
| Cellphone (5-star) | 0.090 | 15.06 | 1 | 50,000 | 540 | 109.74 |
The number of active hours corresponds to call time for corded phones, charging time for cellphones, and charging time for cordless phones. The manufacturing energy for a landline phone is assumed to be half that of a cellular phone, due to less-complex circuitry. A cordless phone has half the life of a corded phone, because it is more likely to get lost or broken. The final column of Table 1, showing lifetime average daily energy per device in Wh/d, is calculated using

$$
P = (\text{Idle Watts})(24 - \text{Hours Active}) + (\text{Active Watts})(\text{Hours Active}) + \frac{\text{Fixed}}{\text{Life}}.
$$

The daily energy use for all phones is calculated using

$$
\text{Daily Energy} = P_{\text{cell,avg}}\, N_{\text{cell}} + \left[ 0.75\, P_{\text{cordless}} + 0.25\, P_{\text{corded}} \right] N_{\text{landline}}.
$$

There are 271,856,247 cellphones and 276,867,152 landline phones [20], meaning that the U.S. uses 64.3 GWh/d for telephones. Figure 3 shows the total power produced for telephones during the transition period from 1995 through 2030. A baseline conservatively projects what power levels would have been needed if cellphones had not become popular.

![](images/5dffc75c628380d97e0b86f39b42941a7f91b4b2b7c3554939385c83d5d85ea5.jpg)
Figure 3. Energy for phones during transition period.

The power used by landlines begins to decline as cellphone power usage grows. The net change in power production during this transition is initially positive. After the year 2021, the transition state becomes more energy-efficient than the projected baseline, as seen in Figure 4. This occurs because cellphones require less primary energy per day than landlines.

Over the course of the transition period from 1995 to 2025, an additional 84 TWh of energy must be produced for telephones.
However, starting in 2022, annual energy savings result. + +# Steady State + +The steady state occurs when the entire market for telephones is satisfied. Based on the model of the transition period, this will include only cellphones. When that occurs, and the two types of phones are in equilibrium, the limiting value is 1.1 cellphones per person. We will have $H = 126,316,181$ households [8] with $m = 2.37$ members/household. + +The total energy requirements for the steady state, based on the data in Table 1, are shown in Table 2. Energy-efficient chargers decrease the load. + +![](images/c58482de0515642e29af2c24ac49b9c7b968bd4f67cef2e0dcd30cc6f9b6c196.jpg) +Figure 4. Difference in energy generated for phones during transition. + +Table 2. Energy requirements for steady state, by charger efficiency. + +
| Device | Energy cost (GWh/d) |
|---|---|
| Cellphone (average) | 42 |
| Cellphone (new) | 38 |
| Cellphone (5-star) | 36 |
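The per-device figures in Table 1 and the steady-state totals in Table 2 can be checked directly. A short Python sketch, using the table values and the household count and size from the variables list:

```python
# Lifetime average daily primary energy per device (Wh/d): idle draw for the
# non-active hours, active draw for the active hours, plus the fixed
# (manufacturing and transport) energy spread over the life in days.
def daily_wh(idle_w, active_w, active_h, fixed_wh, life_d):
    return idle_w * (24 - active_h) + active_w * active_h + fixed_wh / life_d

corded = daily_wh(0.0, 0.452, 1, 25_000, 7_200)      # Table 1 lists 3.92
cell_avg = daily_wh(0.904, 15.06, 1, 50_000, 540)    # Table 1 lists 128.44
cell_new = daily_wh(0.301, 15.06, 1, 50_000, 540)    # Table 1 lists 114.59
cell_5star = daily_wh(0.090, 15.06, 1, 50_000, 540)  # Table 1 lists 109.74

# Steady state: 1.1 cellphones per person; population = households x members.
population = 126_316_181 * 2.37
phones = 1.1 * population
gwh_avg = phones * cell_avg / 1e9      # Table 2 lists 42 GWh/d
gwh_new = phones * cell_new / 1e9      # Table 2 lists 38 GWh/d
gwh_5star = phones * cell_5star / 1e9  # Table 2 lists 36 GWh/d
```

Small discrepancies in the last decimal place of Table 1 appear to come from rounding of the underlying inputs.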
# Part 2: No Existing Infrastructure

# Optimal State

To determine the optimal system for providing telephone service in a country roughly the same size as the U.S. but lacking existing communications infrastructure, we compare the power requirements of each type of phone. The fixed energy required to construct telephone infrastructure, averaged over the duration of the phone system, becomes negligible. The limiting values for landline and cellular phone penetration are 2.37 and 1.1 phones per person respectively, the same as in the U.S. The energy needed per day is the population multiplied by the phone penetration factor and by the energy per day per phone. Figure 5 shows the projected power requirements for the country over time.

![](images/e4837f71912a04a38a90c201248bd49055c91ebbeab0fa1d7e1d5b75c3b60744.jpg)
Figure 5. Phone energy need forecast for saturated market.

From these data, corded landline phones are the most energy-efficient, using about 3.2 GWh/d. However, universal use of corded phones is not a realistic scenario. When landline infrastructure is present, $75\%$ of landline phones are assumed to be cordless and $25\%$ corded. Also, there are three levels of cellphone chargers to consider: the current average charger in the U.S., the more-efficient chargers currently being manufactured, and the energy-conserving 5-star chargers that are not yet common [21]. Calculating the energy use of these in the saturated market results in the power requirements shown in Figure 6.

![](images/bbabc170c3776931ee5e2bed6ce63b491a8277ac5aaa9fad3cbd1974eb6268b4.jpg)
Figure 6. Realistic phone energy need forecast for saturated market.

From an energy perspective, it is most beneficial to create the infrastructure for a cellphone communication system. Legislation decreasing the amount of waste that chargers create could make this state even more energy-efficient.
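The ordering in Figures 5 and 6 already follows from the per-person daily primary energy of each scenario. A quick sketch using the Table 1 values and the penetration factors above:

```python
# Daily primary energy per person (Wh/d) for each saturated-market scenario.
corded_only = 2.37 * 3.92                            # all-corded landline service
cordless_mix = 2.37 * (0.75 * 140.11 + 0.25 * 3.92)  # realistic 75/25 landline mix
cell_average = 1.1 * 128.44                          # cellphones, average chargers
cell_5star = 1.1 * 109.74                            # cellphones, 5-star chargers

# Corded-only is cheapest but unrealistic; among the realistic options,
# cellphones beat the cordless-heavy landline mix at any charger efficiency.
assert corded_only < cell_5star < cell_average < cordless_mix
```

Multiplying each per-person figure by the population gives the country-level curves plotted in the figures.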
+ +# Additional Factors + +Outside of impacts on energy consumption, there are still numerous other factors that determine which type of phone will be favored by the general population: + +- Cellphones provide greater mobility while increasing safety, especially while travelling alone or in a small group. +- Cellphones allow older children to have increased independence without putting themselves in danger [22]. +- Impromptu scheduling changes and emergencies can be more easily handled with immediate communication available. +- Cellphones can make employees easier to reach; this could increase productivity by allowing employees to perform their jobs while not physically in the office. +- Cellphones are also used to replace watches, cameras, and alarm clocks, facts that may impact overall energy usage [23]. + +Cellphones also have negative consequences: + +- It is suspected that cellphones contribute to brain cancer and tumors due to radiation from both cellphones and cell towers [24, 25]. +- Cellphones can interrupt family life, straining relationships. Adults who use a cellphone for work sometimes let work interfere with family life, while children become attached to cellphones as a means of contacting peers, leading to more peer-based and fewer family-based activities [26]. +- The nature of a cellphone can limit the ability to contact a group, such as a family. Instead of making a single call, it may be necessary to call each member separately, wasting time and effort, since there is no universal means of communication. +- Cellphone rings often interrupt family dinners, movies, classes, sporting events, and concerts, decreasing people's enjoyment of the experience; they are also a distraction to people while at work [27]. +- Cellphones can increase response time for emergency vehicles, because a cellphone's position is much more difficult to locate than a landline's [28]. 
- Cellphones generally have higher prices, more expensive plans, and a shorter lifespan than landline phones [29].
- Cellphones are also more likely to be lost or stolen due to their transportable nature.
- Battery life is limited.
- With more cellphones in existence, there will likely be fewer pay phones or landlines in public places for use in emergencies.

# Part 3: Effects of Charger Negligence

Often, cellphones are left overnight to charge; in the morning, when they are detached, the charger is left plugged into the wall, still drawing current. This practice wastes energy, since cellphones need to be charged for only a portion of the night. To determine the maximum amount of energy wasted by cellphone users in the U.S., we take into account both of these negligent practices in the equation below, where $W_{t}$ is the total amount of energy wasted by cellphones through overcharging $(W_{v})$ and failing to unplug the charger when not in use $(W_{u})$:

$$
W_{t} = W_{v} + W_{u}.
$$

To quantify this, it is necessary to create models for both types of waste. The general format of the equation for any type of waste from a charger is

$$
W = H C B W_{p} P h L.
$$

The waste $W$ is based on the number $H$ of households, the average number $C$ of chargers per household, the average amount of power $P$ drawn during this time, and the hours $h$ per day of wasteful practice. There are also conversion factors for primary energy at the power plant $(W_{p})$ and from watt-hours to barrels of oil $(B)$. The value of $L$ is 1.1 phones/person at the steady state.

To use this equation, the time that cellphone users waste must be calculated as the difference between the time charged and the charging time needed; the power also needs to be customized for this type of waste $(P_v)$:

$$
W_{v} = H C B W_{p} P_{v} \left(h_{\mathrm{charging}} - h_{\mathrm{needed}}\right) L.
$$

The second form of waste can be modeled in a similar manner, using the number of hours that the charger is in the idle state $(h_{\mathrm{idle}})$. The power consumption also needs to be specified $(P_{u})$:

$$
W_{u} = H C B W_{p} P_{u} h_{\mathrm{idle}} L.
$$

This results in an overall model for the waste from cellphones in terms of barrels of oil:

$$
W_{c} = H C B W_{p} \left[ P_{u} h_{\mathrm{idle}} + P_{v} \left(h_{\mathrm{charging}} - h_{\mathrm{needed}}\right) \right] L.
$$

To calculate the total waste of cellphones, we use the values in Table 3; many of these values are reported data or calculated from reported data. The only values approximated are those for time per day spent charging.

Assuming that people leave their cellphones charging while they sleep, all cellphones would charge for approximately $8\mathrm{hr}$/night. As assumed earlier, each cellphone requires charging for only $1\mathrm{hr/d}$. If the charger is left plugged in all of the remaining time, $16\mathrm{hr/d}$ is idle time. Using these values and assumptions, all of the cellphones within a country the size of the U.S. would waste the equivalent of $6,254\mathrm{bbl/d}$ of oil due to careless cellphone use.

Table 3. Cellular charger waste components.
| Factor | Value |
|---|---|
| $H$ | 126,316,181 households in U.S. [8] |
| $C$ | 2.37 cellphone chargers/household (one per person) [8] |
| $B$ | 1 bbl of oil per $1.6998 \times 10^6$ Wh [30, 31] |
| $W_p$ | 3.0128 [9] |
| $P_u$ | 0.3 W [9, 17, 18] |
| $P_v$ | 0.845 W [32] |
| $h_{\mathrm{idle}}$ | 16 hr |
| $h_{\mathrm{charging}}$ | 8 hr |
| $h_{\mathrm{needed}}$ | 1 hr |
| $L$ | 1.1 (cell), 2.37 (landline) |
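Substituting the Table 3 values into the waste model reproduces the figure quoted above. A sketch in Python:

```python
# Cellphone charger waste, W_c = H*C*B*W_p*[P_u*h_idle + P_v*(h_chg - h_need)]*L,
# expressed in barrels of oil per day.
H = 126_316_181        # households
C = 2.37               # cellphone chargers per household
B = 1 / 1.6998e6       # barrels of oil per Wh
W_p = 3.0128           # primary-energy conversion factor
P_u, h_idle = 0.3, 16  # idle draw (W) and idle hours per day
P_v = 0.845            # draw while needlessly charging (W)
h_charging, h_needed = 8, 1
L = 1.1                # cellphones per person at steady state

wasted_wh = P_u * h_idle + P_v * (h_charging - h_needed)  # Wh/d per charger
W_c = H * C * L * wasted_wh * W_p * B
```

This yields the 6,254 bbl/d quoted in the text.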
# Part 4: Charger Negligence

Waste due to battery chargers applies to types of electronics beyond cellphones. We consider three types of chargers: cellphone chargers, cordless phone chargers, and other chargers, such as for laptops and MP3 players. Overall waste $W_{T}$ is modeled as the sum for cellphones $(W_{c})$, cordless phones $(W_{l})$, and other types of chargers $(W_{o})$:

$$
W_{T} = W_{c} + W_{l} + W_{o}.
$$

The waste due to cellphones can be calculated as in Part 3. The waste due to cordless phone and other chargers is calculated similarly, although only waste due to the charger left idle needs to be accounted for:

$$
W_{l} = H C_{l} B W_{p} P_{l} h_{l} L, \qquad W_{o} = H C_{o} B W_{p} P_{o} h_{o} L.
$$

The values of power usage and hours left charging differ from those in the previous equations.

Overall waste can be calculated, in terms of barrels of oil, as

$$
\begin{array}{l}
W_{T} = H C_{l} B W_{p} P_{l} h_{l} L + H C_{o} B W_{p} P_{o} h_{o} L, \\
W_{T} = H C B W_{p} \left[ P_{u} h_{\mathrm{idle}} + P_{v} \left(h_{\mathrm{charging}} - h_{\mathrm{needed}}\right) \right] L + H C_{o} B W_{p} P_{o} h_{o} L.
\end{array}
$$

The first equation corresponds to waste in a cordless-phone-dominant state, while the second models waste in a cellphone-dominant state.

Relevant values are in Table 4, where a few values are estimated by reasoning. For instance, the number of hours that a cordless phone is idle is based on the assumption that a phone charges $2\mathrm{hr/d}$ and is used $1\mathrm{hr/d}$, so it is idle $21\mathrm{hr/d}$ [2]. In addition, the amount of energy drawn by all other chargers is assumed to be constant and approximately the same as for an average cellphone charger (0.3 W) [9, 17, 18].
The number of chargers per household is the product of the average number of people in the household and an approximation for the number of chargers present and used within the household. + +Table 4. +Charger waste components. + +
| Factor | Value |
|---|---|
| $H$ | 126,316,181 households in U.S. [8] |
| $B$ | 1 bbl of oil per $1.6998 \times 10^6$ Wh [30, 31] |
| $W_p$ | 3.0128 [9] |
| $C_l$ | 2.37 cordless phone chargers/household [8] |
| $P_l$ | 1.7 W [16] |
| $h_l$ | 21 hr/d |
| $C_o$ | 3.318 chargers/household |
| $P_o$ | 0.3 W |
| $h_o$ | 16 hr |
| $C$ | 2.37 cellphone chargers/household (one per person) [8] |
| $P_u$ | 0.3 W [9, 17, 18] |
| $h_{\mathrm{idle}}$ | 16 hr |
| $P_v$ | 0.845 W [32] |
| $h_{\mathrm{charging}}$ | 8 hr |
| $h_{\mathrm{needed}}$ | 1 hr |
| $L$ | 1.1 (cell), 2.37 (landline) |
+ +These values give an average waste of 49,000 bbl/d of oil for all chargers in a cordless-phone-dominated U.S., compared to 10,000 bbl/d of oil in a cellphone-dominated U.S., as shown in Table 5. + +Table 5. +Charger waste. + +
| Charger type | bbl/d of oil $\times 10^3$ |
|---|---|
| Cellphone | 6.3 |
| Cordless phone | 44.9 |
| Other | 3.7 |
| Total (cellphone state) | 10.0 |
| Total (cordless state) | 48.6 |
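The two totals in Table 5 can be rebuilt with the same machinery. A sketch in Python; whether the penetration factor $L$ also applies to the "other" chargers is not spelled out in the model, so the per-category values here differ slightly from the table's:

```python
H, B, W_p = 126_316_181, 1 / 1.6998e6, 3.0128

def charger_waste(per_household, power_w, hours, penetration):
    # W = H * C * B * W_p * P * h * L, in barrels of oil per day.
    return H * per_household * power_w * hours * penetration * W_p * B

# Cellphones: idle waste plus overcharge waste (Table 4 values).
W_c = charger_waste(2.37, 0.3, 16, 1.1) + charger_waste(2.37, 0.845, 8 - 1, 1.1)
W_l = charger_waste(2.37, 1.7, 21, 2.37)  # cordless phones
W_o = charger_waste(3.318, 0.3, 16, 1.1)  # laptops, MP3 players, etc.

cellphone_state = W_c + W_o   # roughly 10,000 bbl/d
cordless_state = W_l + W_o    # roughly 49,000 bbl/d
```

The cordless-dominated state wastes several times as much oil per day as the cellphone-dominated state, which is the comparison the text draws.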
# Part 5: Economic and Population Growth

Based on the assumption that all $m$ members of the $H$ households within the Pseudo U.S. have phones, changes in economic status will not affect the total energy used by phones. To determine the energy usage projected over the next 50 years, we model the population of the Pseudo U.S. as equal to the population of the actual United States. Based on data from the U.S. Census Bureau [33, 34, 35, 36, 37, 38, 39, 40], we create a regression describing population, $T_{\mathrm{population}}$, as a function of time, $X_{\mathrm{year}}$, as seen in Figure 7 and expressed by

$$
T_{\mathrm{population}} = 3216980\, X_{\mathrm{year}} - 6156752732.
$$

![](images/e1fb6de6827848b27e9009cee33ecc93a0d7b0af450e40afee75912ef734af79.jpg)
Figure 7. U.S. population.

This model matches the Census Bureau's predictions and can be used in conjunction with the energy equations developed in Part 1 to determine the total energy used by the Pseudo U.S. at any given time. To find the total energy used over each 10-year period, we integrate the population function from $X_{\text{year } n}$ to $X_{\text{year }(n+10)}$ and multiply by $E_{\text{phone}}$:

$$
E_{\text{used}} = 365\, E_{\text{phone}} \int_{X_{\text{year } n}}^{X_{\text{year }(n+10)}} T_{\text{population}}\, dX_{\text{year}}.
$$

Under the optimal scenario, where cellphones with 5-star chargers saturate the market, the energy used for phone service each decade is listed in Table 6.

The total number of barrels of oil that must be provided to power plants over the next 50 years for this scenario is 503 million.

Table 6. Total phone energy per decade.
| Decade | Energy (bbl of oil $\times 10^6$) |
|---|---|
| 2010s | 84 |
| 2020s | 92 |
| 2030s | 101 |
| 2040s | 109 |
| 2050s | 117 |
| Total | 503 |
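The Table 6 entries follow from integrating the linear population fit analytically over each decade. A sketch, taking the per-person daily energy as 1.1 phones times the 109.74 Wh 5-star value from Table 1:

```python
# Linear population fit: T(X) = 3,216,980 * X - 6,156,752,732, X in calendar years.
SLOPE, INTERCEPT = 3_216_980, -6_156_752_732
E_PHONE_WH = 1.1 * 109.74   # Wh per person per day at the 5-star steady state
WH_PER_BBL = 1.6998e6

def decade_bbl(start_year):
    # Integral of the linear fit gives person-years accumulated over the decade.
    a, b = start_year, start_year + 10
    person_years = SLOPE / 2 * (b**2 - a**2) + INTERCEPT * (b - a)
    return 365 * E_PHONE_WH * person_years / WH_PER_BBL

decades = {y: decade_bbl(y) / 1e6 for y in (2010, 2020, 2030, 2040, 2050)}
```

Truncating each decade's result to whole millions of barrels reproduces Table 6: 84, 92, 101, 109, and 117, summing to 503.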
# Analysis of the Model

# Verification

To verify this model, one would have to obtain data for the next 10 years and compare the actual results to the predictions. Historical data cannot be used to verify this model, because data regarding the consumption of energy due to phone usage is not readily available. It was possible to verify the competition model against the historical data graphically. Most of the statistics used could be verified through additional research.

# Strengths

- Simplicity: This model is simple enough that it requires little mathematical skill to operate. In addition, it is easily converted into an electronic form, such as Microsoft Excel or Matlab, and can therefore be visually displayed, so that nearly no mathematical knowledge is necessary to understand the model.
- Developed from historical data: Population trends and the competition model were based on real data from the Census Bureau and the CTIA Wireless Association.
- Extendable: To include additional factors, the model could be extended by additional terms with little impact on the functionality of the energy equations.
- Flexible: The equations used in this problem could be applied to other competing products that use energy.
- Closed-form solution: With the appropriate data, this model will generate numerical and graphical solutions.
- Calculation time: Due to the simplicity of the calculations, this model can be solved in a relatively short amount of time.
- Includes variations: This model accounts for cell, corded, and cordless phone usage, as well as a combination of the three. This allows for a more complete analysis.
- Considers outside factors: This study considers the implications of mobility and convenience for a realistic approach to energy efficiency.
- Energy production costs: The costs in the model take into account the inefficiencies of power generation by looking at the total energy produced to render the energy consumed.
# Weaknesses

- Forecasting: The model does not account for any changes in technology over the time period.
- Infrastructure costs: The initial infrastructure cost was assumed to amortize to zero over time, in order to decrease the number of inputs needed for the model. In reality, these costs could have an effect, especially in the short term.
- Infrastructure maintenance costs: Infrastructure maintenance and operations costs were not accounted for, due to lack of data. For a more robust model, another term could be added to the energy equation to capture this consumption. Examples include the power used by each cellphone tower, approximately $1{-}10~\mathrm{kW}$, and the average power used to repair telephone lines damaged by storms.
- Assumptions: Simplifying assumptions had to be made in order to create a solvable model. In addition, some values used in the calculations had to be estimated.
- Inputs: The model requires a large amount of data, some of which is difficult to obtain.

# Conclusion

Landlines are the most energy-feasible option only when all phones are corded phones. Otherwise, the most efficient means of providing telecommunication is through cellphones. This conclusion is based on the following findings:

- The steady state of the country with existing infrastructure would be 36–42 GWh/d.
- With no established infrastructure, it would be more energy-beneficial to have corded phones running on landlines (3.2 GWh/d); but other factors, such as the preference for cordless technology, suggest that a cellphone infrastructure may be a safer investment.
- Cellphone and other charger negligence would cause a maximum of 10,000 bbl/d to be wasted. Cellphone charger negligence alone would waste a maximum of 6,000 bbl/d, while cordless phone negligence would waste 45,000 bbl/d.
- The analysis of the telecommunications industry's future shows that cellphones will be the most viable option, since they will require only 500 million barrels of oil over the next 50 years. Given the social benefits of cellphones, as well as their energy efficiency relative to cordless phones, a cellphone-dominant state should be accepted in the current infrastructure. Although cellphones are less efficient than corded landline phones, they are more accepted by the general public.

# References

[1] DiscountCell. 2008. Cell phone batteries. http://www.discountcell.com/cellular/cell-phone-batteries.htm.
[2] ACEEE. n.d. Energy consumption of household products. http://www.aceee.org/buildings/resappl_type/dtatbl.pdf.
[3] U.S. Environmental Protection Agency. 2002. OSWER Innovations Pilot. The effectiveness of cell phone reuse, refurbishment and recycling programs. http://www.epa.gov/oswer/docs/iwg/CellPhones.pdf.
[4] ReCellular. n.d. Environmental data. Environmental statistic—Cell Phones for Soldiers. http://www.att.com/Common/merger/files/pdf/CPFS_EarthDay/electronic_waste.fs.pdf.
[5] Broussely, M., S. Herreyre, P. Biensan, P. Kasztejna, K. Nechev, and R.J. Staniewicz. 2001. Aging mechanism in Li ion cells and calendar life predictions. Journal of Power Sources 97–98 (July 2001): 13–21.
[6] Responsible Energy Corporation. 2008. Li-ion battery FAQs. Frequently asked questions about lithium ion (Li-ion) batteries. http://www.greenbatteries.com/libafa.html.
[7] Walmart Stores, Inc. 2009. Search results for "cordless phone". http://www.walmart.com/search/search-ng.do?search_query=cordless+phone.
[8] U.S. Census Bureau. 2007. State & County QuickFacts. http://quickfacts.census.gov/qfd/states/00000.html.
[9] Singhal, Pranshu. 2005. Integrated Product Policy Pilot Report. Stage I Final Report: Life cycle environmental issues of mobile phones. http://ec.europa.eu/environment/ipp/pdf/nokiamobile_05_04.pdf.
[10] CTIA The Wireless Association. 2008. CTIA Advocacy. Wireless quick facts. http://www.ctia.org/advocacy/research/index.cfm/AID/10323.
[11] Giordano, Frank R., Maurice D. Weir, and William P. Fox. 2003. Mathematical Modeling. 3rd ed. Pacific Grove, CA: Brooks/Cole.
[12] Mesterton-Gibbons, Michael. 1995. A Concrete Approach to Mathematical Modelling. New York: John Wiley & Sons.
[13] Farlow, Jerry, James E. Hall, Jean Marie McDill, and Beverly H. West. 2007. Differential Equations and Linear Algebra. 2nd ed. Upper Saddle River, NJ: Pearson Education.
[14] MathWorks. 2007. Matlab R2007b. www.mathworks.com.
[15] De Decker, Kris. 2008. The right to 35 mobiles. Low-tech Magazine (13 Feb 2008). http://www.lowtechmagazine.com/2008/02/the-right-to-35.html.
[16] MacKay, David. 2009. Sustainable Energy—Without the Hot Air. Cambridge, UK: UIT Cambridge. http://www.inference.phy.cam.ac.uk/sustainable/book/tex/cft.pdf.
[17] Motorola, Inc. 2008. Corporate responsibility report 2007. http://www.motorola.com/mot/doc/7/7130_MotDoc.pdf.
[18] Motorola, Inc. n.d. Motorola eco facts. http://direct.motorola.com/Hellomoto/ecofacts.
[19] Maxim Integrated Products. 1998. Draw $150~\mathrm{mW}$ of isolated power from off-hook phone line. 6 July 1998. http://www.maxim-ic.com/appnotes.cfm/an_pk/1923.
[20] CTIA The Wireless Association. 2008. CTIA Advocacy. Wireless quick facts. http://www.ctia.org/advocacy/research/index.cfm/AID/10323.
[21] Begun, Daniel A. 2008. Phone makers monitor charger energy consumption. http://hothardware.com/News/Phone-Makers-Monitor-Charger-Energy-Consumption.
[22] Tobin, Declan. 2005. Teens and cell phones. http://ezinearticles.com/?Teens-And-Cell-Phones&id=24519.
[23] Kwan, Michael. 2008. Feature: Common uses for cell phones beyond voice calls. Mobile Magazine (22 October 2008). http://www.mobilemag.com/content/100/340/C16461.
[24] Levitt, Blake B. 1998. Cell-phone towers and communities: The struggle for local control.
*Orion Afield* (Autumn 1998) http://arts.envirolink.org/arts_andactivism/BlakeLevitt.html. + +[25] Quiring, Lynn. n.d. Beacons of harm: Cell phone towers and mobile phone masts. http://omega.twoday.net/stories/4908113. +[26] MSNBC.Com. 2006. Study links family problems with excessive cell phone use. http://www.cellphones.ca/news/post001611/. +[27] Gulli, Cathy. 2009. Help! My office is ring tone hell. Maclean's (26 January 2009): 52. http://www2.macleans.ca/2009/01/21/help-my-office-is-ring-tone-hell/. +[28] Channel 3000. 2008. Rock County 911 dispatchers say cell calls more difficult to pinpoint. http://www.channel3000.com/news/16124111/detail.html. +[29] Coombes, Andrea. 2004. Cutting the phone cord? Not so fast. http://www.marketwatch.com/story/abandoning-landline-telephones-not-quite-there-yet. +[30] Cooper, Alan H. 1999. Part III—Administrative, procedural, and miscellaneous. Internal Revenue Service, Dept. of the Treasury. http://www.irs.gov/pub/irs-drop/n-99-18.pdf. +[31] Energy Information Administration. 2001. Appendix H: Conversion factors. In Annual Energy Outlook 2001, 251-252. http://www.eia.doe.gov/oiaf/archive/aeo01/pdf/apph.pdf. +[32] MacKay, David. 2008. Phone chargers—the truth. http://www.inference.phy.cam.ac.uk/sustainable/charger. +[33] U.S. Census Bureau. 2008. U.S. population projections. National population predictions released 2008 (based on Census 2000). http://www.census.gov/population/www/projections/summarytables.html. +[34] U.S. Census Bureau. 2008. Table 1. Projections of the Population and Components of Change for the United States: 2010 to 2050. http://www.census.gov/population/www/projections/files/nation/summary/np2008-t1.xls. +[35] U.S. Census Bureau. 2008. Population estimates. http://www.census.gov/popest/estimates.php. +[36] U.S. Census Bureau. 2008. National and state population estimates. Annual population estimates 2000 to 2008. http://www.census.gov/popest/states/NST-ann-est.html. +[37] U.S. Census Bureau. 2008. 
Table 1: Annual Estimates of the Resident Population for the United States, Regions, States and Puerto Rico: April 1, 2000 to July 1, 2008. http://www.census.gov/popest/states/tables/NST-EST2008-01.xls.
[38] U.S. Census Bureau. 2004. Population estimates. 1980s. http://www.census.gov/popest/archives/1980s/.
[39] U.S. Census Bureau. 2008. Population estimates. 1990s. http://www.census.gov/popest/archives/1990s/.
[40] U.S. Census Bureau. 2008. Table 1. Projections of the Population and Components of Change for the United States: 2010 to 2050. http://www.census.gov/population/www/projections/files/nation/summary/np2008-t1.xls.

![](images/7d84ffbd1ed2a12b5ea7097fedf153e65d553662ae73793e4026365345449419.jpg)
Jason Altieri, Katelynn Wilton, and Nevin Brackett-Rozinsky. Photo by Dominick DeSalvatore.

# Energy Implications of Cellular Proliferation in the U.S.

Benjamin Coate
Zachary Kopplin
Nate Landis

College of Idaho
Caldwell, ID

Advisor: Michael P. Hitchman

# Summary

The U.S. has undergone a massive transformation in how it approaches telecommunications. In 30 years, it has gone from an entirely landline-based phone system to one in which $89\%$ of the population uses cellphones, with $16\%$ of households having replaced their landlines entirely. We set out to establish the key consequences and energy costs of this system.

By collecting data on the wattages of cellphone chargers and modeling likely cellphone usage, we calculate that a cellphone may waste $86\%$ of its energy intake through its charger, equivalent nationally to 754,000 bbl/yr of oil. Comparing that to the energy costs of landline phones, we model two transition scenarios as cellphones replace landlines. We conclude that the faster landlines can be phased out, the more energy will be saved.

We find that a full cell network, combined with Voice over Internet Protocol (VoIP) technology, would be the best way to provide phone service to a Pseudo U.S.
completely lacking in telecommunications. Doing this would save the cost of implementation of a landline infrastructure that would be rendered mostly redundant as cellphones became more popular. Because all the cellphone chargers in this Pseudo U.S. would be brand-new models with recent energy conservation features, cellphone waste would add up to only 234,000 bbl/yr of oil. We model the increase in cellphone energy consumption in this Pseudo U.S. for the next 50 years with two models: one accounts for the growth of the population, and another also factors in a rate of technological advance. In the first model, cellphone + +energy consumption would reach 1.53 million bbl/yr of oil by 2059, while in the second it would actually decrease to 525,000 bbl/yr by then, due to increases in battery efficiency and a reduction in standby power. + +Cellphone chargers are a small part of standby-power waste in America. Using extensive wattage and usage data on consumer electronics, we calculate that these devices waste 99 million bbl/yr of oil. + +These models show that although a single cellphone charger may waste only a small amount of energy (one author estimates leaving a charger plugged in for a day is about equal to driving a car for one second), the sheer magnitude of cellphone users means that this loss is significant. + +# Cellphone Chargers + +We first consider the energy consumption of a single cellphone: the energy to recharge the cellphone, plus the energy used by the charger when it is left plugged into the wall. David MacKay convinced two engineers to measure a standard Nokia phone charger in a calorimeter—a much more accurate technique than anything we could devise. This method reported 0.472 W drawn while only the charger is plugged into the wall, 0.845 W wasted when a fully-charged phone is left attached, and, interestingly, 4.146 W lost as heat while the phone is charging. MacKay also suspects that older phone chargers may use 1-3 W [MacKay 2009]. 
IP.com [n.d.] reports 2.77 W consumed by a phone charger while charging and 0.45 W while not. Motorola [2008] lists its chargers' standby wattage at about 0.2 W. Since MacKay's experiment shows the charger drawing about twice as much power with the phone attached as without, we assume that brand-new chargers behave the same way.

The average cellphone is replaced every 18 months [Stover 2008; ReCellular n.d.; Recycling for Charities 2008]. Fairly-new models and brand-new models are likely to be present in approximately equal numbers, and both will outnumber older chargers. If we assume that $20\%$ of phone chargers are old, $40\%$ use about $0.472~\mathrm{W}$, and $40\%$ do not leak at all, then the average cellphone charger wastes about $0.589~\mathrm{W}$ while plugged in without a phone and $0.938~\mathrm{W}$ while left plugged in and attached to a fully-charged phone.

Next, we consider how long the charger is in each of these states. We construct a model with two expressions, one for each of two practices:

- Users who unplug the charger when they detach the phone: Let $x$ be the number of recharges per year and $y$ the average number of hours that the phone remains plugged in after it has reached full charge. The annual amount of waste is $0.938xy~\mathrm{Wh/yr}$.
- Users who detach the phone from the charger but leave the charger plugged in: These users still waste energy as above. We also assume that they rarely come back to unplug the charger later (perhaps a few times a year when they need the outlet). The waste in this case is

$$
[0.938xy + 0.589(8760 - 300 - xy)]~\mathrm{Wh/yr},
$$

where $8760 = 24 \times 365$ is the number of hours in a 365-day year; we subtract $300~\mathrm{hr}$ as the charging time during a year and also subtract $xy$ for the hours that the phone is still attached to the charger after being fully charged (during which it leaks power at $0.938~\mathrm{W}$ instead of $0.589~\mathrm{W}$).
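As a minimal numerical sketch (not part of the original analysis), the two per-phone waste expressions can be evaluated directly; the wattages are the averaged charger draws derived above, and the sample values of $x$ and $y$ are illustrative:

```python
# Per-phone annual charger waste (Wh/yr) for the two user practices.
PLUGGED_W = 0.589      # average draw, charger plugged in alone
ATTACHED_W = 0.938     # average draw, fully-charged phone attached
HOURS_PER_YEAR = 8760  # 24 * 365
CHARGING_HR = 300      # assumed annual charging time

def waste_unplugger(x, y):
    """Users who unplug the charger when they detach the phone;
    x = recharges/yr, y = hours attached after full charge."""
    return ATTACHED_W * x * y

def waste_left_plugged(x, y):
    """Users who leave the charger plugged in essentially all year."""
    return ATTACHED_W * x * y + PLUGGED_W * (HOURS_PER_YEAR - CHARGING_HR - x * y)

w_unplug = waste_unplugger(175, 4.5)      # ≈ 739 Wh/yr
w_plugged = waste_left_plugged(175, 4.5)  # ≈ 5258 Wh/yr
```

Scaled by the number of subscribers and the mix of the two practices, these per-phone figures produce national totals on the TWh/yr scale.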
We weight the two quantities, assuming that $25\%$ of users unplug the charger and $75\%$ don't.

Average talk time per charge is 5 hr and average standby time is 10 d [AT&T n.d.]. Average talk time of 1 hr/d, with standby the rest of the time, would use up $60\% + 30\% = 90\%$ of battery charge in 3 d (3 hr of talk is $60\%$ of the 5-hr talk capacity, and 3 d of standby is $30\%$ of the 10-d standby capacity); hence, about 100 recharges would be required per year. On the other hand, many users attach their phones every time they come home, in effect recharging the phone 365 times a year. We try both of these numbers in our model to see their effect on total energy consumption.

We choose 2 hr and 6 hr as two likely averages for how long users leave a phone attached after it is charged, since some leave a phone plugged in all night, producing 6–9 hr of waste after 1 hr of charge, while others may unplug within minutes. We select as suitable averages

175 recharges/yr, 4.5 hr attached after charging.

Table 1 shows the resulting total energy waste for the entire country for various combinations.

Table 1.
Total energy wasted by cellphone chargers (in TWh/yr, where 1 terawatt-hour $= 10^{12}$ Wh), by recharges per year and hours before detaching the cellphone.
| Recharges/yr | 2 hr | 4.5 hr | 6 hr |
|---|---|---|---|
| 100 | 1.19 | | 1.25 |
| 175 | | 1.28 | |
| 365 | 1.27 | | 1.51 |
+ +From the five scenarios shown, 1.28 TWh/yr, or 754,000 bbl/yr of oil (where $1700\mathrm{kWh} = 1$ bbl), is a fair estimate of the average waste; changing either variable has little effect. + +How could we reduce waste? To find out, we assume that every user gets into the habit of unplugging the charger on detaching the phone. As Table 2 shows, waste would be cut by $65 - 95\%$ . The constant power drain of the charger plugged in to the wall simply outweighs the number of recharges or how long the phone is left attached. + +# Table 2. + +Total energy wasted if users unplug the charger (in TWh/yr), by number of recharges per year and hours before unplugging. + +
| Recharges/yr | 2 hr | 4.5 hr | 6 hr |
|---|---|---|---|
| 100 | 0.06 | | 0.18 |
| 175 | | 0.24 | |
| 365 | 0.22 | | 0.65 |
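The oil-equivalent conversion used above (1 bbl of oil $= 1700$ kWh) is a one-line calculation; this sketch checks the figure quoted for the 1.28 TWh/yr estimate:

```python
# Oil-equivalent conversion used in the text: 1 bbl of oil = 1700 kWh.
KWH_PER_BBL = 1700

def twh_to_bbl(twh):
    """Convert terawatt-hours to barrels of oil equivalent."""
    return twh * 1e9 / KWH_PER_BBL  # 1 TWh = 1e9 kWh

waste_bbl = twh_to_bbl(1.28)  # ≈ 753,000 bbl/yr, close to the 754,000 quoted
```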
How much energy per year is required in charging a cellphone? We estimate $300~\mathrm{hr/yr}$ of charging time (100 recharges/yr $\times$ 3 hr/recharge). The average phone charges at 3 W, an average of new phones (many in the range of 1 W) [AT&T n.d.] and older phones (with higher wattages); cf. MacKay's measurement of 4 W lost as heat during charging. Hence $3~\mathrm{W} \times 300~\mathrm{hr/yr} = 0.9~\mathrm{kWh/yr}$ is used to charge a cellphone. When combined with the $4.7~\mathrm{kWh/yr}$ of waste determined above (the 1.28 TWh/yr national total spread over 272 million subscribers), we get $5.6~\mathrm{kWh/yr}$. This means that $84\%$ of the energy used on cellphones is wasted, which nicely splits the difference between three-year-old statistics ($95\%$ lost as waste [Richard 2005]) and statistics from November 2008 ($67\%$ lost as waste [Virki 2008]).

# Transition from Landlines to Cellphones

U.S. cellphone use has grown logistically (Figure 1). As of February 7, 2009, there were 271,778,000 cellphone subscribers [CTIA...2009]. Meanwhile, the number of households using landlines is in sharp decline. Between 2004 and 2008, the percentage of cellphone-only households rose from $4.5\%$ to $16.4\%$ [Dixon 2008]. We analyze the transition to a system of exclusively cellphones, evaluate the energy costs, and discuss the most efficient route to it.

![](images/6df1d3013c6569b0233fc04f32dafea9cb6cad6663c59a5584b97c36a26a79b7.jpg)
Figure 1. Cellphone subscribers in the U.S.

In considering the total energy costs of cellphones, we consider:

- Charging (as we have seen).
- Production. Energy consumed in production of cellphones is extremely variable and poorly documented. A cellphone has an average life of 1.5 years, compared to 6 years for a cordless telephone. If the energy used in production is comparable, then over a given time period the production costs of cellphones will be four times those of cordless phones.
- Towers.
We will calculate how many additional cell towers will be needed to support the growing number of cellphone subscribers, and also analyze some of the energy costs of maintaining the landline system.

Our primary focus in the ensuing models is the relative energy cost of keeping cellphones and cordless phones charged and usable. We would like to incorporate production and infrastructure energy costs as well, but the lack of reliable data would create an error large enough to render the model unsuitable. Thus, we treat the problem as one of energy usage by the consumer.

# Basic Information and Assumptions

Approximately $2.5\%$ of U.S. households use neither landlines nor cellphones [Bavdek 2008]. So we assume that a complete transition from landlines to cellphones will result in $97.5\%$ of the population having cellphones, since variation in household size is negligible on such a large scale. Of the 111.1 million households [U.S. Census Bureau n.d.], 108.3 million require some form of phone line. In 2008, $16.4\%$ of households opted to use only cellphones. Hence, the number of households using landlines in 2008 is $H = (0.975 - 0.164) \times 111.1 = 90.1$ million. Furthermore, the average number of people per household is $m = 2.745$ [U.S. Census Bureau n.d.].

Our transition assumes that every man, woman, and child receives a cellphone. However, approximately $6.9\%$ of the population (21 million) is under the age of 5. They don't need cellphones; if we remove them from the count, the U.S. is already close to a complete transition. Having 272 million cellphone subscribers in a population of 305 million [U.S. Census Bureau] leaves 33 million people without cellphones; subtracting the 21 million young children leaves just 12 million people to supply with cellphones.

The figure of 272 million subscribers does not account for people with both a personal cellphone and a work cellphone.
There are hardly any data on the number of such people, so we assume that the number of multiple-phone users is negligible.

# Cell and Cordless Phone Energy Use

We consider three types of phones—cell, cordless home phone, and corded home phone. To establish the energy costs of each, we need five pieces of data for each type:

- $E_{\text{production}} =$ energy to produce it;
- $E_{\text{support}} =$ energy to support it, per year;
- $E_{\text{charge}} =$ energy to charge/power it, per year;
- $\mathrm{LS} =$ its average lifespan; and
- $N =$ the number of such phones in use.

The first two pieces of data are nearly impossible to find to any degree of accuracy. So, we assume that it takes the same amount of energy to produce a cellphone as a cordless phone; the number of corded phones currently being sold is negligible.

Likewise, the energy that goes into phone support—cell tower construction, tower and landline upkeep, signals, etc.—is not well documented. We find two rough estimates:

- $0.12\%$ of global primary energy use is by telecom companies [Ericsson 2007]. If this proportion holds true for the U.S., which uses 3,923 TWh/yr [Energy Information Administration 2009], then U.S. telecom companies consume 470 TWh/yr. This figure does not tell us whether the energy goes toward landlines or cellphones, but it gives an idea of the scale—much larger than the energy consumed by the cellphones themselves.
- Japanese mobile telecommunications companies use 120 Wh per user per day [Etoh 2008]. If U.S. mobile companies do the same, their usage would be $12~\mathrm{TWh/yr}$.

These numbers contradict each other—we find it hard to believe that a telecom company spends 40 times as much energy on the noncellular aspects of its business as on mobile infrastructure. However, more accurate data are simply not available.

Since the first two quantities cannot be accurately determined, we use only the remaining three variables in our models.
Table 3 shows the values that we use. We have already made the estimates for cellphones. A cordless phone uses 28 kWh/yr [Roth and McKenney 2007], a corded phone 2.2 kWh/yr ($= 0.25~\mathrm{W} \times 8760~\mathrm{hr/yr}$). (Every source we found said that corded phones used a "smidgen" or a "dab" of power, which we take to be $0.25~\mathrm{W}$.) We estimate the lifespans of those phones and assume that there are two cordless landline phones for every corded one.

To determine the number of each type of phone, we develop two models for the relative change in the numbers of cellphones and landline phones.

# Model 1: Current Trends

We assume that current trends continue until $97.5\%$ of people have cellphones and $0\%$ use landlines. Using data since 2000, we project the trends

Table 3.
Parameters of phone usage.
| Phone type | $E_{\text{charge}}$ (kWh/yr) | LS (yr) | $N$ (units) |
|---|---|---|---|
| Cellphone | 5.6 | 1.5 | $x$ |
| Cordless | 28 | 6 | $\frac{2}{3}y$ |
| Corded | 2.2 | 10 | $\frac{1}{3}y$ |
shown in Figure 2, with landline data from Nielsen [2008] and cellphone data from Infoplease [2008].

![](images/c8099b58e4de7aed74af6687347c2040124c4a5cc080f26641b1597841f2ec1a.jpg)
Figure 2. Cellphone and landline trends with projections: proportion of population vs. year.

We calculate the total energy consumption due to phone usage through 2015. To do so, we need the areas under the two curves from 2008 to 2015; we use best-fit trend lines.

We get 19.4 kWh/yr for a landline as a weighted average for cordless and corded, with twice as many of the former as the latter: $(2 \times 28 + 2.2)/3 = 19.4$.

Thus, the total amount of energy used from the beginning of 2008 through the end of 2015 is 19.3 TWh.

If each person (technically, $97.5\%$ of the population) had a cellphone and no landline phones were used, the total energy used over this period would be only 12 TWh.

# Model 2: Current Trends with Resistance to Extremes

Although cellphone usage will creep up to $97.5\%$ of the population, landline usage will in fact not drop to $0\%$: Many people feel more comfortable with the added security of a landline, in case their cellphone does not work (as during the aftermath of the September 11, 2001 attack on New York City); landlines are easier to talk on for long periods of time; some worry that cellphones may cause cancer; and senior citizens may resist technological changes. A substantial percentage of Americans may opt to stay with a landline as long as it is available. Using information from polls concerning feelings about having a landline, we offer an alternative projection in Figure 3.

![](images/366fc3152751e7d33d4b4433820faebb9102e0438d2b3f3a772408e1cdb345b2.jpg)
Figure 3. Current cellphone and landline trends adjusted: proportion of population vs. year.

We calculate as before, arriving at

$$
E_{\mathrm{cell}} = 11.3~\mathrm{TWh}, \quad E_{\mathrm{land}} = 17.6~\mathrm{TWh}, \quad E_{\mathrm{total}} = 28.9~\mathrm{TWh}.
$$

This compares with 19.3 TWh under Model 1 and 12 TWh with only cellphones.

# Bringing Phone Service to the Pseudo U.S.

We now consider a "Pseudo U.S.," a country with the same population, economic status, and infrastructure as the U.S. but with no phone system in place. Our goal is a strategy for implementing a phone system that minimizes energy usage. In addition to providing a detailed analysis of the phone system, we discuss the consequences of all current types of phone systems: landlines, cellphones, satellite phones, Voice over Internet Protocol (VoIP) technology, and combinations.

Using only landlines connected to corded phones would be energy-efficient, since corded phones use less energy than cellphones. A landline system would also have lower maintenance costs; phones would be broken or lost less often; and there would be less social incentive to replace them with a more stylish or feature-filled new model every 18 months. There would also be fewer phones per person—many families could do with one or two instead of one for each family member.

However, Americans have already proven that they favor cordless phones over corded ones and that they are willing to replace landlines entirely with cellphones. Inevitably, the same landline-to-cell process would happen in Pseudo U.S. So it would be better to build a cellphone network in the first place.

Despite the advantages of cellphones, there are some drawbacks. Businesses would have to issue employees cellphones, and it would be difficult to prevent employees from using those for personal calls. Cellphones are vulnerable to hackers, interception (even by the government), and jamming.

A simple, cheap, and energy-efficient way to solve several of these problems is VoIP; in the fourth quarter of 2008, VoIP provider Skype reported 405 million accounts worldwide [2008].
Assuming that Pseudo U.S. has the same technology level as the U.S., a network of Internet cables would be in place, connected to $37\%$ of households [Energy Information Administration n.d., Table HC2.11]. Attaching the remaining $63\%$ of homes to existing hubs would be easier than laying new phone lines, throwing up millions of phone poles, or constructing thousands of cell towers. Cordless VoIP phones might consume a fair amount of power, but the savings on construction and infrastructure costs would be enormous. VoIP would allow business to give employees phones for which they could be held accountable, and families could have a backup VoIP line. + +Much of the energy cost of production of new cellphones could be alleviated by mandatory recycling of old ones. Recycling 100 million cellphones would save 0.215 TWh, enough energy to power 19,500 households for a year [Environmental Protection Agency n.d.]. + +# Covering Pseudo U.S. with Cell Towers + +The obvious energy-efficient choice seems to be the newly-released Tower Tube cell tower, with a range of 4 mi; it has a wind turbine attached, so it uses $40\%$ less energy than conventional towers. It can be erected in days, has a small footprint, and is resistant to vandalism and the elements [Ericsson n.d.]. + +Knowing nothing about the geography of Pseudo U.S., we devise an optimal grid for cell-tower placement based on U.S. data, minimizing the number of towers while maximizing coverage. The most efficient design is a triangular lattice, as seen on the right of Figure 4, with each tower the vertex of an equilateral triangle, $6.93\mathrm{mi}$ from the next towers (determined from trigonometry). In reality, towers would be placed closer to ensure coverage despite geography. + +Since each tower covers a unique hexagonal area of $41.6\mathrm{mi}^2$ and the land area of the U.S. is approximately $3,540,000\mathrm{mi}^2$ , we would need approximately 85,000 towers. 
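The lattice geometry above can be checked in a few lines (a sketch, assuming the 4-mi Tower Tube range and the approximate 3,540,000 mi² land area given in the text):

```python
import math

RANGE_MI = 4.0           # Tower Tube coverage radius
US_AREA_MI2 = 3_540_000  # approximate U.S. land area

# Towers at the vertices of equilateral triangles: each tower exclusively
# covers a regular hexagon whose circumradius equals the tower range.
spacing = RANGE_MI * math.sqrt(3)                # distance between towers
hex_area = (3 * math.sqrt(3) / 2) * RANGE_MI**2  # area covered per tower
towers = US_AREA_MI2 / hex_area                  # towers needed

# spacing ≈ 6.93 mi, hex_area ≈ 41.6 mi², towers ≈ 85,000
```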
+ +Since a tower has a limit on how many users it can support, higher population density in cities demands higher density of towers. In the 601 U.S. cities with more than 50,000 people, the density of towers needs to be 16 times the average density of the rest of our grid, requiring 420 additional towers. Without data on the remaining cities with 10,000–50,000 people, we recommend a density of towers four times the density of the rest of the grid. + +![](images/2a2660d2c0f43822e134088547d4d3b56a0cc1f92e42f4203b8368812c8003b7.jpg) +Figure 4. Cell tower lattice with lattice detail. + +![](images/ec85790605e09d47f43964f070fd3f4bb452b99ebe1a3255280e500d422bfaf9.jpg) + +Information on cost or energy consumption of such a network is difficult to obtain. Using the Tower Tube with $10\mathrm{kW}$ of signal strength, and 86,000 towers, operating energy consumption would be at least 0.31 TWh. + +# Cellphone Chargers in Pseudo U.S. + +Energy wasted by cellphone chargers in Pseudo U.S. can be determined using the same calculations as earlier for the real U.S. We simply disregard older phones and assume that every cellphone user has a charger that wastes only $0.2\mathrm{W}$ on standby left plugged in alone and $0.4\mathrm{W}$ attached to a fully-charged phone. Table 4 shows the results. + +Table 4. Total energy wasted in Pseudo U.S. (TWh/yr) by cellphone chargers, by recharges per year and hours before detaching the cellphone. + +
| Recharges/yr | 2 hr | 4.5 hr | 6 hr |
|---|---|---|---|
| 100 | 0.36 | | 0.39 |
| 175 | | 0.40 | |
| 365 | 0.40 | | 0.49 |
Pseudo U.S. would waste $30\%$ less energy on cellphone chargers than the real U.S. The 0.40 TWh/yr estimate corresponds to 234,000 bbl/yr of oil. This is encouraging, since the real United States will essentially reach this state in a few years, as older cellphones are replaced.

If all users in Pseudo U.S. unplugged the charger whenever they detached the phone, we would get the results of Table 5.

Table 5. Total energy wasted in Pseudo U.S. if users unplug the charger (in TWh/yr), by recharges per year and hours before unplugging.
| Recharges/yr | 2 hr | 4.5 hr | 6 hr |
|---|---|---|---|
| 100 | 0.02 | | 0.07 |
| 175 | | 0.09 | |
| 365 | 0.08 | | 0.24 |
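The entries of Table 5 can be reproduced from the new-charger figures (a sketch; the 272 million users is our assumption here, taken from the current U.S. subscriber count):

```python
USERS = 272e6      # assumed number of users (current U.S. subscriber count)
ATTACHED_W = 0.4   # new-charger draw with a fully-charged phone attached

def unplug_waste_twh(recharges_per_yr, hours_attached):
    """National waste (TWh/yr) if every user unplugs the charger
    together with the phone."""
    per_phone_wh = ATTACHED_W * recharges_per_yr * hours_attached
    return per_phone_wh * USERS / 1e12  # Wh -> TWh

low = unplug_waste_twh(100, 2)    # ≈ 0.02 TWh/yr (top-left entry)
mid = unplug_waste_twh(175, 4.5)  # ≈ 0.09 TWh/yr (center entry)
high = unplug_waste_twh(365, 6)   # ≈ 0.24 TWh/yr (bottom-right entry)
```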
If users could refrain from recharging until absolutely necessary and unplug both phone and charger together within $2~\mathrm{hr}$ of full charge, standby power waste would virtually disappear: 0.02 TWh/yr, or less than $2\%$ of current U.S. waste.

# Future Energy Needs for Pseudo U.S.

We model the electricity needs for the phone system over the next 50 years. This model depends heavily on both population growth and economic growth. We establish reasonable trends for each and describe why the population projections are more significant than the economic projections.

# Population Growth

We project for Pseudo U.S. a population growth rate of $0.9\%$/yr, reflecting that of the U.S. [Central Intelligence Agency n.d.]; Figure 5 shows such a projection for the U.S. We project maximum cellphone usage ($97.5\%$) by 2015.

![](images/cea8b6b786deface142b66ff42bebf01148ebc6171fd5ab26007fc02082791f8.jpg)
Figure 5. Population projection.

# Economic Growth

Real GDP per capita for the U.S. shows a surprisingly robust trend (Figure 6). Despite the clearly visible recession of the early 1980s, the upward trend is strong. Even with the current economic instability, these data suggest that over the next 50 years the U.S. economy will continue to grow at approximately the same rate as the past trend.

![](images/4aca1a08a65a923de79d879a9a99b64e71a08228e24d6962f6581861f3c30f23.jpg)
Figure 6. Real GDP per capita (in Year 2000 dollars) vs. year. [Measuringworth.com 2008].

# Technological Advances

A cellphone network, combined with VoIP technology, would be more easily expanded and upgraded over the next 50 years than a landline network.

With improvements in satellite communications, satellites could be an attractive choice for covering the Rocky Mountain states and parts of the Midwest. We compiled a list of 15 contiguous states and Alaska that contains 14 of the 15 most sparsely populated states, excluding Maine but including Arizona.
Together, these states contain $52\%$ of the area of the United States but only $12\%$ of its population [Wikipedia 2009]; since they are contiguous, they would be the easiest to cover efficiently with satellites. If such a satellite network could be made operational now, we could simply not build any of the 44,000 cell towers needed for that area. + +# Energy Needs Over the Next 50 Years + +Taking the predictions of population and economic growth over the next 50 years, together with an assumed increase in energy efficiency of cellphone technology, we calculate the energy costs due to consumer use. + +For the first model, we assume that cellphone efficiency remains constant. + +For the second model, we anticipate that standby power waste would be nearly eliminated and batteries would last longer (so phones would need to be recharged less often), as cell technology became more advanced. We incorporate an exponential decay constant $\lambda$, projecting a $\lambda$ proportion decrease in energy use per year. Our projection becomes + +$$
\mathrm{Energy}[t] = 0.922\,(1.009)^{t}(1 - \lambda)^{t}.
$$ + +We set $\lambda = 2\%$. + +We show the results for the two models in Table 6. + +Table 6. Energy costs (bbl $\times 10^{6}$ of oil) of consumer usage, for two projections. + +
| Projection | 2009 | 2059 |
|---|---|---|
| No change in energy efficiency | 0.92 | 1.53 |
| Increasing energy efficiency | 0.92 | 0.53 |
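The projection formula above can be evaluated directly. The sketch below assumes only the constants stated in the text (a 2009 baseline of $0.922 \times 10^6$ bbl, $0.9\%$/yr population growth, and $\lambda = 2\%$/yr efficiency gain); it reproduces the baseline and the increasing-efficiency entry of Table 6.

```python
# Sketch of the consumer-usage projection:
# Energy[t] = 0.922 * (1.009)**t * (1 - lam)**t,
# with t in years after 2009, in millions of barrels of oil per year.

def energy(t, lam=0.0):
    """Projected energy cost (bbl x 10^6 of oil) t years after 2009."""
    return 0.922 * (1.009 ** t) * ((1 - lam) ** t)

# The 2009 baseline is the same under both scenarios.
print(round(energy(0), 2))             # 0.92
# With a 2%/yr efficiency gain, the 2059 figure falls to about
# the 0.53 x 10^6 bbl shown in Table 6.
print(round(energy(50, lam=0.02), 2))  # 0.53
```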
+ +# Consumer Electronics + +If cellphone chargers waste over 1 TWh/yr, how much energy is leaking out of other devices in U.S. households? + +We could not find enough reliable data on the number and usage of light fixtures to study lighting. + +We were able to study consumer electronics. The Energy Information Administration produced a lengthy study on the types of appliances and electronics in American homes [n.d., Table US-1]. Many other sources detail wattages and average kWh/yr consumed by various devices [Ames City Government 2002; Seattle City Light n.d.; ABS Alaskan n.d.; Fry 2006; DotCom Alliance n.d.; MacKay 2008; afterdawn.com 2007; Fung et al. 2002]. However, Roth and McKenney [2007] was by far the most thorough. The only major consumer electronics that they did not report on were digital TVs, and they could also provide only yearly consumption estimates for component stereos, printers, and modems. We checked all of their data against the other sources and filled in the gaps with corroborated data and estimates. We also updated the study (completed in January of 2007) as best we could, especially considering VCRs and game consoles, whose use has changed drastically in the last two years. Table 7 shows total consumption by all electronic devices by type, with relative numbers of each device taken into account. + +We offer a few notes on some kinds of devices. + +- Analog TVs: They waste 4 W when turned off. +- Digital TVs: There is huge variation in wattage, from $100\mathrm{W}$ to $500\mathrm{W}$ . CNET.com [n.d.] shows an average for new HDTVs of $250\mathrm{W}$ , with standby power $1\mathrm{W}$ . We assume that in most cases the digital TV being + +Table 7. Annual electricity consumption of consumer electronics (TWh). + +
| Device | Active | Idle | Off | Total |
|---|---|---|---|---|
| Analog TVs | 43.6 | n/a | 6.4 | 50.0 |
| Digital TVs | 37.8 | n/a | 0.4 | 38.2 |
| Desktop computers | 20.2 | 0.1 | 1.0 | 21.3 |
| Set-top boxes | 6.4 | n/a | 13.3 | 19.7 |
| Compact audio | 1.4 | 0.9 | 3.8 | 6.2 |
| Component stereo | 1.5 | 0.9 | 2.9 | 5.3 |
| Game consoles | 1.7 | 2.4 | 0.7 | 4.8 |
| DVD players | 0.5 | 1.2 | 2.6 | 4.3 |
| VCRs | 0.2 | 0.6 | 2.5 | 3.3 |
| Laptop computers | 2.3 | 0.1 | 0.4 | 2.8 |
| Modems | 0.7 | n/a | 1.8 | 2.5 |
| Home theaters | 1.5 | 0.6 | 0.1 | 2.2 |
| Printers | 0.3 | 0.5 | 0.2 | 1.0 |
+ +the newest TV, would be used the most; so we apply usage data from Roth and McKenney [2007] for the most-used TV. About 19.25 million flat-screen TVs have been sold since the study was done, meaning that there are now 59.25 million digital TVs in the country [Burritt 2009]. + +- Desktop computers: We combine CRT and LCD monitors into a weighted total, taking into account time spent in screensaver and standby modes. +- Set-top boxes (cable, satellite, and other TV boxes): These waste more energy than any other type of electronics device—a surprising fact, since there are almost a million fewer set-top boxes than analog TVs—because they still use 15 W when off, presumably to stay in contact with the service provider and in some cases to perform services (e.g., to turn on at a certain time to record a show). +- Compact audio: We use data from Roth and McKenney [2007]. +- Component stereo: Roth and McKenney [2007] estimated that a component stereo uses $115\mathrm{kWh} / \mathrm{yr}$ , with an installed base of 50 million units. We decided that a stereo would have a usage pattern similar to a compact audio system, with wattage more like a home theater; we calculate that a stereo uses about $105\mathrm{kWh} / \mathrm{yr}$ . +- Game consoles: Roth and McKenney [2007] reported only 2.6 TWh used by game consoles, with 1.0 TWh in active state, 1.3 in idle, and 0.4 while off. But game consoles have not only become more popular since January 2007, but the proportion of older-generation consoles to new ones has also gone down. This is important because newer ones are considerably more power-hungry. Roth and McKenney reported an average of 36 W for consoles, but multiple sources cite the new Xbox 360 at 173 W, the Playstation 3 at 190 W, and the Nintendo Wii at 18-19 W. 
Roth and McKenney reported 64 million consoles, but since then Wiis have jumped from 1.5 million to 13.5 million, Xbox 360s from 4.8 million to 11.9 million, and PS3s from 0.8 million to 5.9 million [Brightman 2008]. With these 24 million new game consoles, we estimate that 12 million older ones have been removed from use. So, there are now about 52 million older consoles averaging 36 W, plus 12 million new Wiis at 19 W, 7 million new Xboxes at 173 W, and 5 million new PS3s at 190 W. Weighting appropriately gives a current average wattage of 56 W. + +- DVD players: They waste a staggering $87\%$ of the energy that they use. +- VCRs: This was another area where we felt we had to correct for the two years since Roth and McKenney [2007]. They cited 5.0 TWh used by VCRs, but the number of VCRs in use has dropped since. Data from previous studies cited by them indicate that the number of VCRs decreased by $11.25\%$ per year from 2001 to 2005. We extend this trend to the end of 2008, for an estimate of 71 million VCRs operational today. We also adjust their usage numbers downward by $15\%$ to account for more families preferring to use a DVD player. + +VCRs turn out to be the energy-wasting champion by percentage, wasting over $95\%$ of the energy that they consume. + +- Laptop computers: Surprisingly efficient, laptops as a whole used one-seventh as much electricity as desktop computers, even though there are only twice as many desktops. A laptop uses only $25\mathrm{W}$ while active instead of the $75\mathrm{W}$ that a desktop uses, even though the laptop wastes $18\%$ of its energy compared to only $5\%$ for the desktop. +- Modems: Left on all the time, they have a low wattage (7 W), for 55 kWh/yr, close to the 53 kWh/yr of Roth and McKenney [2007].
Assuming that modems are used 6 hr/d (about $25\%$ less than computers), only 0.7 TWh/yr is used by modems while people are actually connected to the Internet; the other 1.8 TWh lost as waste could be saved if people would unplug modems not in use. +- Home theater: This was one of the most efficient devices when off, probably due to Energy Star standby-power guidelines, since they are relatively new devices. +- Printers: Printers on average are in use for only a few minutes per day, but their idling wattage is quite high. A reasonable average is 300 W active and 12 W idle [Dot-Com Alliance n.d.]. Assuming that a printer is used $5\mathrm{min} / \mathrm{d}$ and left on $4\mathrm{hr} / \mathrm{d}$ , a printer would use $34\mathrm{kWh} / \mathrm{yr}$ , close to the estimate of $30\mathrm{kWh} / \mathrm{yr}$ of Roth and McKenney [2007]. + +This analysis reveals that it is the TV complex, not the computer complex, that is responsible for the bulk of waste: 4.7 TWh idling and 26.1 TWh off for TV-associated devices vs. 0.7 TWh idle and 3.5 TWh off for computer-associated devices. + +If the TV and related devices were plugged into a power strip that was turned off when the electronics are not in use, households would use $18\%$ less energy on electronics. Since the average household uses $11\%$ of its energy on household electronics, this would represent a $2\%$ reduction in overall residential electricity usage. A power strip could even be fitted with a remote-control switch—the strip would consume slight standby power waiting for the remote signal, but the devices plugged into it would not. This would be a convenient way to turn off electronics that would also save electricity. + +In all, this selection of household electronics consumes 169 TWh/yr of electricity, or the equivalent of 99 million bbl/yr of oil—considerably more than the 1.3 TWh/yr wasted by cellphone chargers. Of the 169 TWh, 125 TWh is for devices in use, 7 TWh idle, and 37 TWh waste. 
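The aggregate figures above can be cross-checked with a few lines of arithmetic. The only assumption beyond the text is that barrels of oil scale linearly with TWh (169 TWh corresponding to $99 \times 10^6$ bbl).

```python
# Totals from the text: 169 TWh/yr consumed by household electronics,
# of which 125 TWh active, 7 TWh idle, and 37 TWh off (standby waste).
total, active, idle, off = 169.0, 125.0, 7.0, 37.0

bbl_per_twh = 99e6 / total          # assumed linear TWh-to-barrel conversion
waste_share = (idle + off) / total  # fraction of electronics energy wasted

print(round(waste_share * 100))         # 26 (percent wasted)
print(round(off * bbl_per_twh / 1e6))   # 22 (million bbl/yr from standby)
print(round(idle * bbl_per_twh / 1e6))  # 4  (million bbl/yr from idling)
print(round(376 / 365, 1))              # 1.0 (kWh/day, from 376 kWh/yr)
```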
By percentage, $26\%$ of a house's energy spent on electronics is wasted: 22 million bbl/yr of oil wasted by standby power and 4 million by electronics on but idle. David MacKay attempted to minimize his standby power waste by unplugging everything he could, finding that he could save 1.1 kWh per day [2009]. Our data suggest that the average American could save just about as much (376 kWh/yr, or 1.0 kWh/d) by doing the same. + +# References + +ABS Alaskan. n.d. Power consumption table. http://www.absak.com/library/powerconsumption-table. +afterdawn.com. 2007. Power consumption of common home electronics devices. http://my.afterdawn.com/ketola/blog_entry.cfm/2423/home_electronics_power Consumption. +Ames City Government. 2002. Common household appliance energy use. http://www.cityofames.org/electricweb/Energyguy/appliances.htm. +AT&T. n.d. Cellphones and devices. http://www.wirelessAtt.com/ cell-phone-service/cell-phones/index.jsp. +Bavdek, Maureen. 2008. Nearly 1 in 5 U.S. households have no phone. Reuters News Service (17 December 2008). http://www.pcmag.com/article2/0,2817,2337125,00.asp. +Brightman, James. 2008. Wii U.S. installed base now leads Xbox 360 by almost 2 million. http://www.gamedaily.com/articles/news/wii-us-installed-base-now-leads-xbox-360-by-almost-2-million. +Burritt, Chris. 2009. Super Bowl may spur fewer TV sales as retailers fight slump. http://www.bloomberg.com/apps/news?pid=20601103& sid=a40nW0I6j7.c&refer=news. + +Central Intelligence Agency. n.d. United States. In CIA World Factbook. https://www.cia.gov/library/publications/the-world-factbook/print/us.html. +CNET. n.d. The chart: 139 HDTV's power consumption compared. http://reviews.cnet.com/4520-6475_7-6400401-3.html?tag=rb_content;rb_mtx. +CTIA The Wireless Association. 2009. Estimated current US wireless subscribers. http://ctia.org/. +Dixon, Kim. 2008. U.S. cell-only households keep climbing. http://www.reuters.com/article/technologyNews/idUSTRE4BG5GH20081217. +Dot-Com Alliance. n.d. 
ICT power consumption reference tables. http://www.dot-com-alliance.org/POWERINGICT/Text/hotwords/Data_on_Power_Consumption.html#Table6. +Energy Information Administration. 2009. Electricity basic statistics. http://www.eia.doe.gov/basics/quickelectric.html. +______ . n.d. Table HC2.11 Home electronics characteristics by type of housing unit, 2005. http://www.eia.doe.gov/emeu/recs/recs2005/hc2005_tables/hc11homeelectronics/pdf/alltables.pdf. +______ . n.d. Table US-1. Electricity consumption by end use in U.S. households, 2001. http://www.eia.doe.gov/emeu/recs/recs2001/enduse2001/enduse2001.html. +Environmental Protection Agency. n.d. Recycle your cellphone. It's an easy call. http://www.epa.gov/epawaste/partnerships/plugins/cellphone/index.htm. +Ericsson. 2007. Sustainable energy use in mobile communications. http://www.ericsson.com/technology/whitepapers/sustainable_energy.pdf. +Ericsson. n.d. Ericsson Tower Tube. http://www.ericsson.com/campaign/towertube/. +Etoh, Minoru. 2008. A power consumption ratio, 1:150. http://micketoh.blogspot.com/2008/03/power-consumption-ratio-1150.html. +Fry, Jason. 2006. A hunt for energy hogs. Wall Street Journal (18 December 2006). http://online.wsj.com/public/article/SB116603460189049162-04zk0DLUxdchdcCNf4I_Q8jDZU_20071218.html?mod=blogs. +Fung, Alan S., Adam Aulenback, Alex Ferguson, and V. Ismet Ugursal. 2002. Standby power requirements of household appliances in Canada. Energy and Buildings 35 (2): 217-228. http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V2V-45X2RF8-D&_user=6046335&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000069202&_version=1&_urlVersion=0&_userid=6046335&md5=1c63a4b640ba73c78ed66a358c89df32. +Infoplease. 2009. Cellphone subscribers in the U.S., 1985-2005. http://www.infoplease.com/ipa/A0933563.html. +IP.com. n.d. Zero power consumption from battery chargers when not in use. http://www.priorartdatabase.com/IPCOM/000146735/. +MacKay, David J.C. 2009. Sustainable Energy—without the Hot Air.
Cambridge, UK: UIT Cambridge. http://www.withouthotair.com/. +MeasuringWorth. 2008. US real GDP per capita. http://www.measuringworth.org/datasets/usgdp/graph.php. +Motorola. 2008. Charger energy efficiency. http://www.motorola.com/content.jsp?globalObjectId=9392. +Nielsen Company. 2008. Call my cell: Wireless substitution in the United States. http://www.nielsenmobile.com/documents/WirelessSubstitution.pdf. +ReCellular. n.d. Wireless recycling. Frequently asked questions. http: //www.recellular.com/recycling/faqs.asp. +Recycling for Charities. 2008. Cellphone recycling facts. http://www.recyclingforcharities.com/blog/?p=9. +Richard, Michael Graham. 2005. Treehugger homework: Unplug your cellphone charger. http://www.treehugger.com/files/2005/11/treehugger_home_2.php. +Roth, Kurt W., and Kurtis McKenney. 2007. Energy consumption by consumer electronics in U.S. residences. http://www.ce.org/pdf/Energy%20Consumption%20by%20CE%20in%20U.S.%20Residences%20(January%202007).pdf. +Seattle City Light. n.d. Stretch your energy dollar. http://www.ci.seattle.wa.us/light/accounts/stretchyourdollar/ac5_appl.htm#Entertainment. +Skype. 2008. Skype fast facts Q4 2008. http://ebayinkblog.com/wpcontent/uploads/2009/01/skype-fast-facts-q4-08.pdf. +Stover, Dawn. 2008. Cellphone recycling for cash a win-win, or is it? MSNBC (23 January 2008). http://www.msnbc.msn.com/id/22671798/. +U.S. Census Bureau. n.d. Population clocks. http://www.census.gov/. +Virki, Tarmo. 2008. Cellphone industry eyes charger power savings. Reuters (19 Nov 2008). http://www.reuters.com/article/technologyNews/idUSTRE4AI2T520081119. + +Wikipedia. 2009. U.S. states. http://en.wikipedia.org/wiki/U.S._states. + +![](images/5ae1271e3bfe0e7e4dba773f3a9bcd61a6ef1d9a0700158761b00f098e2fc1eb.jpg) + +Advisor Michael Hitchman, with team members Benjamin Coate, Nathaniel Landis, and Zachary Kopplin. 
+ +# Modeling Telephony Energy Consumption + +Amrish Deshmukh +Rudolf Nikolaus Stahl +Matthew Guay + +Cornell University, Ithaca, NY + +Advisor: Alexander Vladimirsky + +# Summary + +The energy consequences of rapidly changing telecommunications technology are a significant concern. While interpersonal communication is ever more important in the modern world, the need to conserve energy has also entered the social consciousness as energy prices rise and the threat of global climate change grows. Only 20 years after being introduced, cellphones have become a ubiquitous part of the modern world. Meanwhile, the infrastructure for traditional telephones is already in place, and the energy costs of such phones may well be lower. As a superior technology, cellphones have gradually begun to replace the landline, but consumer habits and perceptions have kept this decline from becoming outright abandonment. + +To evaluate the energy consequences of continued growth in cellphone use and a decline in landline use, we present a model that describes three processes—landline consumption, cellphone consumption, and landline abandonment—as economic diffusion processes. In addition, our model describes the changing energy demands of the two technologies and considers the use of companion electronics and consumer habits. Finally, we use these models to determine the energy consequences of the future uses of the two technologies, an optimal mode of delivering phone service, and the costs of wasteful consumer habits. + +# Introduction + +The telephone has become a fundamental part of our social fabric. In the past couple of decades, we have seen a shift from fixed landline telephones, generally one per household, to individual ownership of cellphones. We attempt to determine the impact of this change on American energy consumption. + +The factors that go into accurately modeling telephony energy consumption are complex.
We also need to take into account the energy consumption of peripheral devices, such as answering machines for landline phones and chargers for cellphones. Moreover, landline phones are not a uniform product. Cordless phones consume considerably more energy than their corded counterparts. Likewise, the total energy cost of cellphone usage is complicated by such factors as recharging, replacement, and battery recycling. Our model takes all of these factors into account, and additionally attempts to use the limited real-world data available to chart the changes in each of these factors over time. + +Perhaps the most complex factor to model is adoption of technological innovations in a population. This is relevant not only to landline and cellphone adoption; de-adoption of landline phones in the face of cellphone usage can also be considered an independent innovation and modeled accordingly. Research into the phenomenon indicates that it can be modeled globally by the differential equation + +$$
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),
$$ + +where $P$ is the proportion of the population that has adopted the innovation at time $t$, $r$ is the adoption rate, and $K$ is the saturation point for the innovation. + +Using such a model, we arrive at an accurate fit to available data and can predict future demand for cellphones and landlines. Determining the costs of these respective technologies, we arrive at the total energy burden. Briefly, we explore how this question relates to the energy consumption of other household electronics, and how much waste is generated therein. Additionally, we explore the caveat that technological development has been and continues to be wildly unpredictable, and the consequences of this reality. + +A separate question is how best to distribute landlines and cellphones throughout a population committed to neither, so as to minimize energy consumption while not violating social preference.
This problem is explored through an optimization with respect to energy usage, in which we find that a country (here a "Pseudo-U.S.") that supports a cellphone-only communications infrastructure minimizes its total energy consumption without violating social demand for novel technologies. Finally, we estimate the total energy consumption by such a nation over the next 50 years. + +# Model Overview + +We examine two approaches to modeling technology diffusion through a population. The first attempts to gauge technology adoption at the household level and aggregate these results to model global trends. However, this approach is unsuccessful, and we explain why. The second approach models technology adoption at the global level; it + +- accurately models past and present telephony energy consumption, +- makes future predictions of cellphone saturation and landline de-adoption consistent with previous technological replacement paradigms, and +- encompasses a broad range of pertinent factors in telephony energy consumption. + +# Model Derivation + +# Adoption of Innovations + +Our model describes U.S. usage rates for landlines and cellphones as three diffusive innovation curves. Consider the adoption of an innovation $Y$. Shortly after the development of this innovation, adoption of $Y$ throughout a population is minimal. As the innovation spreads, demand increases until a saturation point is reached. Thus, the spread of $Y$ throughout a population is proportional to its current prevalence, but is checked from exponential growth by an upper bound on its saturation in the population. At its simplest, we can model this as + +$$
\frac{dY}{dt} = Y(1 - Y).
$$ + +Of course, adoption is not uniform between different technologies, and saturation rates likewise vary.
By introducing constants $r$ for the adoption rate and $K$ for the saturation level, we can refine our model to + +$$
\frac{dY}{dt} = rY\left(1 - \frac{Y}{K}\right),
$$ + +which has a solution in the form of the logistic function. Therefore, for each of the processes we assume a model of the form + +$$
Y(t) = \frac{A}{1 + Be^{-Ct}}.
$$ + +The sigmoidal form of adoption processes is well-known and has been observed in the specific case of cellphone adoption and wireless-only lifestyle adoption. + +Proceeding globally, we initially model the consumption of telephones from their inception by the equation + +$$
p_{l}(t) = A\left(\frac{1}{1 + Be^{-C(t-D)}} + \frac{1}{1 + Ee^{-F(t-G)}} - 1\right), \tag{1}
$$ + +where the $D$ and $G$ parameters are chosen so that time is shifted relative to the onset of cellphone adoption. This expression is essentially the addition of two sigmoid curves. The first models the adoption of the landline phone as a new innovation; the second models the de-adoption of landlines as an independent innovation of a "wireless-only" lifestyle, which has a subtractive effect on total landline usage. + +Likewise, the consumption by cellphones is given by + +$$
p_{c}(t) = \frac{J}{1 + e^{-K(t-L)}}, \tag{2}
$$ + +where again $L$ is a time shift chosen to make the model coincide with cellphone adoption. + +We tried to model this at the microscopic level, but that proved to be an intractable approach. From census data, the number of households with $m$ members over the course of history is readily available [U.S. Census Bureau 2007]. Equally accessible are the penetration rates and average costs of cellular and landline communications [U.S. Census Bureau 2001; Eisner 2008]. With this abundance of data, one may be tempted to propose an econometric forecast of telephony usage that is driven by the marginal cost-benefit analysis that a household performs.
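That the logistic form solves the refined adoption ODE can be verified numerically. The parameter values below ($r = 0.15$, $K = 0.9$, $B = 20$) are illustrative only, not fitted to any data in the paper.

```python
import math

# Y(t) = K / (1 + B*exp(-r*t)) should satisfy dY/dt = r*Y*(1 - Y/K).
# Check with a centered finite difference at several times,
# using illustrative (not fitted) parameters.
r, K, B = 0.15, 0.9, 20.0

def Y(t):
    return K / (1.0 + B * math.exp(-r * t))

h = 1e-5
for t in (0.0, 10.0, 25.0, 40.0):
    dY_numeric = (Y(t + h) - Y(t - h)) / (2 * h)
    dY_logistic = r * Y(t) * (1 - Y(t) / K)
    assert abs(dY_numeric - dY_logistic) < 1e-6
print("logistic form satisfies the ODE at all sampled times")
```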
However, determining the functional form that defines behaviors muddled by habit and irrationality is troublesome. When reduced to a first-order approximation, such a model still requires the calibration of numerous parameters [Koyck 1954]. After attempting such an approach several times, we abandoned it. We believe the above model captures the data equally well without making undue assumptions. + +# Energy Cost of Landlines + +Together these two functions model three processes: landline adoption, wireless adoption, and wireless-only adoption. Additionally, they describe the long-term behavior of these processes as they reach a steady state. To approach the question of annual energy consumption by telephony products, we combine these functions with models for energy expenditure by landline phones and their peripherals, as well as cellphones and their peripherals. The formula for energy consumption by landline phones and peripherals is + +$$
E_{l}(t) = P p_{l} h (\pi_{a} e_{a} + \pi_{b} e_{b} + \pi_{c} e_{c} + \pi_{d} e_{d}).
$$ + +Table 1 delineates the variables and their explanations. The time variable $t$ is normalized so that $t = 0$ denotes 1960. + +Table 1. Variables and their meanings.
| Variable | Description |
|---|---|
| $P(t)$ | Population of U.S. in year $t$ |
| $p_l(t)$ | Landlines per person in U.S. |
| $h(t)$ | Handsets per landline |
| $\pi_a(t)$ | Percentage of landline owners with corded phones |
| $e_a(t)$ | Yearly Energy Consumption (YEC) (kWh) by corded phones |
| $\pi_b(t)$ | Percentage of landline owners with cordless phones |
| $e_b(t)$ | YEC by cordless phones |
| $\pi_c(t)$ | Percentage of landline owners with combination cordless phone/answering machines |
| $e_c(t)$ | YEC by combination cordless phone/answering machines |
| $\pi_d(t)$ | Percentage of landline owners with separate answering machines |
| $e_d(t)$ | YEC by separate answering machines |
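As a rough sketch of the per-landline term of $E_l$, the steady-state ($t > 40$) ownership shares and per-device consumption figures reported later in Table 4 can be plugged in. Two caveats: the linear form of $h(t)$ for $t > 40$ is assumed to be $-1.20\mathrm{E}\text{-}3\,t + 1.152$ (Table 4's entry appears to omit the $t$), and the year $t = 45$ is an arbitrary illustrative choice.

```python
# Per-landline yearly energy (kWh) at t = 45 (2005, with t = 0 at 1960):
# h(t) * (pi_a*e_a + pi_b*e_b + pi_c*e_c + pi_d*e_d).
# Shares and kWh values follow Table 4's steady-state (t > 40) entries.
t = 45
h = -1.20e-3 * t + 1.152        # handsets per landline (assumed linear form)
pi_b, pi_c, pi_d = 0.44, 0.32, 0.69
pi_a = 1 - pi_b - pi_c          # corded phones take the remaining share
e_a, e_b, e_c, e_d = 20, 28, 36, 36   # kWh/yr per device type

kwh_per_landline = h * (pi_a*e_a + pi_b*e_b + pi_c*e_c + pi_d*e_d)
print(round(kwh_per_landline, 1))  # about 58.7 kWh/yr per landline
```

Multiplying by $P \cdot p_l$ then gives the national total $E_l(t)$.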
+ +Due to a lack of relevant data, we make several assumptions: + +- All yearly energy consumption functions are constant over time. Because corded phones draw their energy solely from phone lines, there is little room for variation in their power draws, so this at least seems reasonable. However, answering machines, cordless phones, and combinations of the two do not have this restriction, and it seems likely that they are becoming more energy efficient with time. However, no data were available to support this hypothesis, so we fixed YEC based on available sources. +- The adoption of cordless vs. corded phones and answering machines no doubt follows its own sigmoidal curve, but again no data are available. So the variables $h, \pi_{a}, \pi_{b}, \pi_{c}, \pi_{d}$ are all modeled as first-order linear approximations. + +Regardless, results produced by the model agree well with available data for energy consumption. + +# Energy Cost of Cellphones + +The energy cost for cellphones can be modeled as + +$$ +E _ {C} (t) = P p _ {c} (E _ {c 1} + E _ {c 2}), +$$ + +where + +$$ +E _ {c 1} (t) = f _ {C} \left(C _ {\text {c h a r g e}} t _ {\text {c h a r g e}} + C _ {\text {s t a n d b y}} t _ {\text {s t a n d b y}}\right) +$$ + +and + +$$ +E _ {c 2} (t) = R _ {\mathrm {c e l l}} R (t). +$$ + +Table 2 describes each relevant variable. + +Table 2. Variables and their meanings. + +
| Variable | Description |
|---|---|
| $P(t)$ | Population of U.S. in year $t$ |
| $p_c(t)$ | Number of cellphones per person |
| $E_{c1}$ | YEC by cellphones and chargers |
| $E_{c2}$ | YEC by cellphone recyclers |
| $f_C$ | Frequency of cellphone charging |
| $C_{\text{charge}}$ | Charger wattage during charging |
| $t_{\text{charge}}$ | Daily charger time spent charging |
| $C_{\text{standby}}$ | Charger wattage during standby |
| $t_{\text{standby}}$ | Daily charger time spent in standby |
| $R_{\text{cell}}$ | Energy needed to recycle one cellphone battery |
| $R(t)$ | Percentage of cellphones recycled in year $t$ |
+ +The immediate contributions to cellphone energy consumption are charging the phone and leaving the charger plugged in with no phone attached. It is difficult to find data on cellphone charging frequency. Rosen et al. [2001] argue that people charge their phone 50 times each year at their residence (noting that many people charge the phone in their car); but this figure seems very low. Newer phones with a multitude of features require more-frequent charging. Since charging the cellphone has developed into a habit for most people, we assume that people charge the phone every night and keep the charger attached to an outlet all the time. + +Rosen et al. [2001] observe that the average time to charge a cellphone is 2 hrs, which seems low in comparison to other data, which suggest 3-4 hrs to charge to $80\%$ and an additional 8 hrs to charge to $100\%$ . However, a phone charged every night is unlikely to have a nearly-empty battery. We assume that the overnight charging does not affect the 2-hr charging time. That $50\%$ of cellphone batteries are lithium-ion batteries, which do not allow for overcharging, justifies this assumption [Fishbein 2002; Rosen et al. 2001]. Once a lithium-ion battery is charged, the power drawn differs negligibly from that when no phone is connected to the charger [Rosen et al. 2001]. Therefore, we feel justified in adopting Rosen et al.'s statistic. + +To model the energy cost of recycling used cellphone batteries, we consider the batteries to be recycled by the Rechargeable Battery Recycling Corporation, justified by its significant market share and the fact that it recycles batteries in the U.S. [Office Depot 2004]. + +# Energy Optimization + +Given the above functions for energy costs for cellular and landline telephone usage, we can optimize energy consumption. A Pseudo U.S. with the approximate size of the U.S. would likely have a similar distribution of household size. 
+ +Let $H_{m}$ be the number of households with $m$ members and $l_{m}$ the fraction of households with $m$ members that have landline service. If we assume that the communication needs of every family are satisfied by either having a landline or by each member possessing a cellphone, the numbers of required cellphones $T_{c}$ and landline phones $T_{l}$ can be calculated as + +$$ +T _ {l} = \sum_ {m = 1} ^ {7} l _ {m} H _ {m}, \qquad T _ {c} = \sum_ {m = 1} ^ {7} m (1 - l _ {m}) H _ {m}. +$$ + +We believe that in the absence of a landline, members of a household will not share cellphones. + +The total telephony energy demand of the proposed plan for Pseudo U.S. is + +$$ +E (t) = E _ {l} (t) + E _ {c} (t). +$$ + +Using only landlines would minimize the number of telephone units required; however, landline phones and their companion technologies are much less energy-efficient than cellphones. Using only cellphones would maximize the number of telephone units required; and though the energy cost per unit is reduced, the overall increase in units may have deleterious consequences. Therefore, we optimize the variables $l_{m}$ to yield the best communications strategy from an energy perspective. + +We could modify the above summations to consider roles played by cellphones that are not achievable by a landline. For example, suppose that a single landline cannot serve a large family. If $n$ is the number of people a single landline can serve in a household, we may assume that a family of $m$ with one landline will need to purchase $(m - n)$ cellphones. Then we have + +$$ +T _ {c} = \sum_ {m = 1} ^ {7} m (1 - l _ {m}) H _ {m} + \sum_ {m = n + 1} ^ {7} l _ {m} H _ {m} (m - n), +$$ + +where the second term gives the fraction of families too large to be served by a single landline. Implicit in this formula is an assumption that no family obtains a second landline. This is reasonable, since the average number of landlines per household in the U.S. is only 1.118 [Eisner 2008]. 
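The phone-count formulas can be sketched directly. The household counts $H_m$ below are hypothetical placeholders (in millions), not census figures; the two extreme landline allocations bracket the design space the optimization searches over.

```python
# Required phone counts for a hypothetical household-size distribution.
# H[m] = number of households with m members (hypothetical, millions);
# l[m] = fraction of m-member households keeping a landline.
H = {1: 30, 2: 34, 3: 16, 4: 14, 5: 4, 6: 1, 7: 1}

def phone_counts(l):
    T_l = sum(l[m] * H[m] for m in H)             # landlines needed
    T_c = sum(m * (1 - l[m]) * H[m] for m in H)   # cellphones needed
    return T_l, T_c

all_landline = {m: 1.0 for m in H}
all_cell = {m: 0.0 for m in H}

print(phone_counts(all_landline))  # one line per household, no cellphones
print(phone_counts(all_cell))      # no landlines, one phone per person
```

An optimizer would then choose the $l_m$ minimizing $E_l + E_c$ evaluated at these counts.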
+ +Likewise, we could further complicate the cost function by asserting that not every family member requires a cellphone if a landline is absent. + +However, we find that such a modification does not enrich the conclusions of our optimization. + +# Results + +# Energy Consumption + +Using the above information, we create an energy consumption function: + +$$ +E (t) = E _ {c} (t) + E _ {l} (t). +$$ + +To make this specific, we must estimate parameter values for $A, \dots, G$ in (1) and (2). Using an optimization algorithm described in the methods section below, we arrive at the conclusions in Table 3. + +Table 3. Values of parameters, as fitted from data in Eisner [2008]. + +
| Parameter | Value |
|---|---|
| $A$ | 1.1263 |
| $B$ | 1.0924 |
| $C$ | 0.0423 |
| $D$ | 27 |
| $E$ | 0.0109 |
| $F$ | 0.1587 |
| $G$ | 30 |
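With these fitted constants, Equation (1) can be evaluated directly. The only assumption here is the paper's time normalization ($t = 0$ at 1960, so $t = 49$ is 2009); the printed value simply reads the fitted curve, with no claim about its accuracy.

```python
import math

# p_l(t) = A * (1/(1 + B*e^{-C(t-D)}) + 1/(1 + E*e^{-F(t-G)}) - 1),
# using the fitted values from Table 3.
A, B, C, D = 1.1263, 1.0924, 0.0423, 27
E, F, G = 0.0109, 0.1587, 30

def p_l(t):
    s1 = 1 / (1 + B * math.exp(-C * (t - D)))
    s2 = 1 / (1 + E * math.exp(-F * (t - G)))
    return A * (s1 + s2 - 1)

print(round(p_l(49), 2))  # about 0.79 landlines per person in 2009 under the fit
```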
+ +Moreover, functions can be described for parameters for $E_{l}$ and $E_{c}$ . Tables 4 and 5 give values for the variables and parameters in Tables 1 and 2. + +Table 4. Values for variables in Table 1. Source: Rosen et al. [2001]. + +
| Variable | Value |
|---|---|
| $P(t)$ | Population growth as predicted by the Census Bureau |
| $p_l(t)$ | As defined in (1) |
| $h(t)$ | 1.89E-3 t + 1.076, t ≤ 40; −1.20E-3 t + 1.152, t > 40 |
| $\pi_a(t)$ | $1 - \pi_b(t) - \pi_c(t)$ |
| $e_a(t)$ | 20 kWh |
| $\pi_b(t)$ | max(0, 1.45E-2 t − 1.45E-1), t ≤ 40; 0.44, t > 40 |
| $e_b(t)$ | 28 kWh |
| $\pi_c(t)$ | max(0, 1.07E-2 t − 1.07E-1), t ≤ 40; 0.32, t > 40 |
| $e_c(t)$ | 36 kWh |
| $\pi_d(t)$ | max(0, 2.31E-2 t − 2.31E-1), t ≤ 40; 0.69, t > 40 |
| $e_d(t)$ | 36 kWh |
+ +Table 5. +Values for variables and parameters in Table 2. Source: Rosen et al. [2001]. + +
| Variable | Value |
|---|---|
| $p_c(t)$ | As defined in (2) |
| $E_{c1}$ | 0.365(4·2 + 0.6·24) kWh |
| $E_{c2}$ | $-0.0283e^{-(t-1993)/17.1573} + 0.00037$ kWh |
| $f_C$ | 365 |
| $C_{\text{charge}}$ | 4 W |
| $t_{\text{charge}}$ | 2 hr |
| $C_{\text{standby}}$ | 0.6 W |
| $t_{\text{standby}}$ | 24 hr |
| $R_{\text{cell}}$ | 0.0037 kWh |
| $R(t)$ | $-7.639e^{-(t-1993)/17.1573} + 0.0999$ |
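The charger-energy entry $E_{c1}$ in Table 5 follows directly from the charging parameters; this sketch just reproduces that arithmetic (watt-hours converted to kWh).

```python
# E_c1 = f_C * (C_charge * t_charge + C_standby * t_standby), in Wh/yr,
# using Table 5's values; divide by 1000 to get kWh/yr.
f_C = 365                        # charges per year (one per night)
C_charge, t_charge = 4, 2        # W, and hr/day actually charging
C_standby, t_standby = 0.6, 24   # W, and hr/day the charger stays plugged in

E_c1_kwh = f_C * (C_charge * t_charge + C_standby * t_standby) / 1000
print(round(E_c1_kwh, 2))  # 8.18 kWh/yr per cellphone
```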
+ +From our model, we expect that by 2050 cellphones will have completely replaced landlines in the U.S. Thus, we estimate steady-state energy consumption as $E(90) = 2.99$ TWh/yr, equivalent to 1.7 million bbl/yr of oil. + +# Energy Optimization Results + +From our optimization results for the distribution of telephone types in the Pseudo U.S., we find that it is almost always preferable, in terms of energy efficiency, to have a cellphone-only state. Even assuming that a landline can service an unlimited number of people in a household, our optimization finds that only for families of size 7 or larger is it energy-efficient to own a single landline and peripherals in place of a cellphone for each family member. + +The cost of leaving cellphone chargers on standby when not active would amount to approximately $62\%$ of the total YEC, or 862,000 bbl/yr of oil. + +# Energy Waste by Other Household Electronics + +We also discuss the impact of leaving devices plugged in when the device is not in use. From Rosen et al. [2001], we adopt the following approach. First, we investigate the average wattage used in standby mode by the devices under consideration and the time spent in standby mode. Then we find saturation and penetration values to find the total energy expenditure in the U.S. We consider computers, TVs, set-top boxes (digital and analog), wireless set-top boxes, and video-game consoles. + +We take the data for the three types of set-top boxes and the video-game console from Rosen et al. [2001]. Furthermore, the average American spends 4.66 hours watching television and 4.4 hours using a computer every day [Bureau of Labor Statistics 2009]. Average power drawn by computers and television sets turns out to be 4 and $5.1\mathrm{W}$ [Rosen et al. 2001]. The first two columns of Table 6 give our data set.
We use that information, along with saturation rates and household penetration rates [Eisner 2008], to arrive at the consumption figures in the final column.

Table 6. Data used for power consumption of household electronics. Source: Thorne and Suozzo [1998].
| Device | Standby time (proportion) | Standby power drawn (W) | Standby consumption (TWh/yr) |
|---|---|---|---|
| Set-top box, analog | 0.78 | 10.5 | 3.2 |
| Set-top box, digital | 0.78 | 22.3 | 0.6 |
| Wireless receiver | 0.78 | 10.2 | 1.4 |
| Video-game console | 0.98 | 1.0 | 0.5 |
| TV | 0.80 | 5.1 | 10.3 |
| Computer | 0.81 | 4.0 | 3.3 |
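The national standby totals in the last column of Table 6 can be cross-checked against the oil-equivalent figure quoted in the text. The sketch below is our own arithmetic; the conversion of roughly 1,700 kWh per barrel-of-oil-equivalent (about 5.8 MMBtu/bbl) is our assumption, not a value given by the authors.

```python
# Sum the national standby consumption column of Table 6 (TWh/yr)
# and convert to barrels-of-oil-equivalent.
standby_twh = {
    "set-top box, analog": 3.2,
    "set-top box, digital": 0.6,
    "wireless receiver": 1.4,
    "video-game console": 0.5,
    "TV": 10.3,
    "computer": 3.3,
}

total_twh = sum(standby_twh.values())       # 19.3 TWh/yr
kwh_per_boe = 1_700                         # assumed conversion factor
total_boe = total_twh * 1e9 / kwh_per_boe   # TWh -> kWh -> bbl

print(f"{total_twh:.1f} TWh/yr ≈ {total_boe / 1e6:.1f} million bbl/yr")
# prints: 19.3 TWh/yr ≈ 11.4 million bbl/yr
```

The result agrees with the paper's conclusion of approximately 11.4 million bbl/yr.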
We conclude that standby energy waste by these appliances in the U.S. amounts to approximately 11.4 million bbl/yr of oil.

# Future Predictions

Assuming moderate economic and population growth, Table 7 shows results for the Pseudo U.S., using population projections from the U.S. Census Bureau [1996a; 1996b].

Table 7. Projected energy use in Pseudo U.S.
| Year | Energy ($\times 10^6$ bbl oil) |
|---|---|
| 2010 | 1.14 |
| 2020 | 1.24 |
| 2030 | 1.66 |
| 2040 | 1.77 |
| 2050 | 1.89 |
However, we believe that such an analysis is of limited use. Predicting the future of so many variables for a 50-year period is extremely difficult, especially in the realm of technology, where it is commonplace for innovations to change social paradigms. For example, consider an attempt in the 1950s to model the growth of computer usage. Any such attempt would have been unlikely to foresee personal computers, the Internet, or cellphones (which today are rapidly replacing many of the functions of personal computers). Likewise, the energy cost of cellphones may vary greatly due to changes in technology: Social awareness about energy efficiency may drive them to ever-lower energy consumption, but they may also gain additional features or be replaced by miniaturized computers that result in more energy consumption.

# Conclusions

# Recommendations

From an energy perspective, we find that it is more efficient to abandon landlines in favor of cellphones. This recommendation is reinforced by our model's prediction that consumers will eliminate landlines in the near future by adopting a wireless-only lifestyle.

Finally, we find that chargers left on standby (i.e., plugged in but not charging a device) are a significant source of energy waste. We therefore advocate that efforts be made to forgo convenience and unplug devices when in standby.

# Model Strengths and Weaknesses

# Strengths

- The model reproduces sigmoidal innovation-adoption behavior without making undue assumptions about the underlying processes.
- The model incorporates a broad span of indirect sources of energy consumption: battery recycling, commuters with cellphones, landline companion technologies.

# Weaknesses

- Our model captures only global adoption behavior.
Excluding underlying behavior hampers the capture of deviations from standard behavior, as exemplified by our underestimate for the 1990s, when economic expansion may have driven telephone adoption.
- Due to lack of data, the model relies on interpolation of data related to cellphone and landline energy costs.
- For simplicity, the model excludes other possible communications technologies. As noted earlier, paradigm shifts in technology are commonplace yet hard to predict.
- The model fails to capture any benefit of landlines not provided by cellphones. It may be that landlines are associated with a certain degree of security, which tempers our prediction that landlines will be completely abandoned.

# Future Work

- We believe that a model at the microscopic level that takes into consideration consumer perceptions and habits, in addition to economic data, would perform the best.
- We also believe with Bagchi [2008] that modeling cellphones and landlines as more directly competing products with reference to economic data would provide better data fits and predictions.
- The analysis is limited to the household level. Landline phones will persist in many businesses, and we believe that this persistence will be a significant factor in energy consumption.

# References

Bagchi, Kallol. 2008. The impact of price decreases on telephone and cell phone diffusion. Information and Management 45: 183-193.
Eisner, James. 2008. Table 16.2: Household Telephone Subscribership in the United States. In Trends in Telephone Service (March 2008), Federal Communications Commission Industry Analysis and Technology Division, Wireline Competition Bureau. http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-284932A1.pdf.
Fishbein, Bette. 2002. Waste in the Wireless World: The Challenge of Cell Phones. New York: INFORM, Inc. http://www.p2pays.org/ref/19/18713.htm.
Koyck, L.M. 1954. Distributed Lags and Investment Analysis. Amsterdam: North-Holland.
Office Depot. 2004. Office Depot launches free cell phone and rechargeable battery recycling program. http://mediarelations.Officedepot.com/phoenix.zhtml?c=140162&p=irol-newsArticle&ID=674008.
Rosen, Karen, Alan K. Meier, and Stephan Zandelin. 2001. Energy use of set-top boxes and telephony products in the U.S. http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1950&context=1bnl.
Thorne, Jennifer, and Margaret Suozzo. 1998. Leaking electricity: Standby and off-mode power consumption in consumer electronics and household appliances. http://www.accee.org/pubs/a981.htm.
U.S. Bureau of Labor Statistics. 2009. American time use survey summary—2008 results. http://www.bls.gov/news.release/atus.nr0.htm.
U.S. Census Bureau. 1996a. Projections of the number of households and families in the United States: 1995 to 2010. Current Population Reports P25-1129. http://www.census.gov/prod/1/pop/p25-1129.pdf.
_______. 1996b. Population projections of the United States by age, sex, race, and Hispanic origin: 1995 to 2050. Current Population Reports P25-1130. http://www.census.gov/prod/1/pop/p25-1130.pdf.
_______. 2001. Construction and housing. Chapter 25 in Statistical Abstract of the United States: 2000. http://www.census.gov/prod/2001pubs/statab/sec25.pdf.
_______. 2007. U.S. households by size, 1790-2006. Reproduced at http://www.infoplease.com/ipa/A0884238.html.

![](images/4766dc2e9a7b302c7be2f045ea1e9226c960293f6da386a408f8dcf223c13be7.jpg)
Amrish Deshmukh, Matt Guay, Niko Stahl, and advisor Alexander Vladimirsky.

# America's New Calling

Stephen R. Foster

J. Thomas Rogers

Robert S. Potter

Southwestern University

Georgetown, TX

Advisor: Richard Denman

# Summary

The ongoing cellphone revolution warrants an examination of its energy impacts—past, present, and future.
Thus, our model adheres to two requirements: It can evaluate energy use since 1990, and it is flexible enough to predict future energy needs.

Mathematically speaking, our model treats households as state machines and uses actual demographic data to guide state transitions. We produce national projections by simulating multiple households. Our bottom-up approach remains flexible, allowing us to

- model energy consumption for the current U.S.,
- determine efficient phone adoption schemes in emerging nations,
- assess the impact of wasteful practices, and
- predict future energy needs.

We show that the exclusive adoption of landlines by an emerging nation would be more than twice as efficient as the exclusive adoption of cellphones. However, we also show that the elimination of certain wasteful practices can make cellphone adoption $175\%$ more efficient at the national level. Furthermore, we give two forecasts for the current U.S., revealing that a collaboration between cellphone users and manufacturers can result in savings of more than 3.9 billion barrels-of-oil-equivalent (BOE) over the next 50 years.

# Problem Background

In 1990, less than $3\%$ of Americans owned cellphones [International Telecommunication Union n.d.]. Since then, a growing number of households have ditched their landline in favor of cellphones for each household member. We develop a model for analyzing how the cellphone revolution impacts electricity consumption at the national level. In particular, we

- assess the energy cost of the cellphone revolution in the U.S.,
- determine an efficient way of introducing phone service to a nation like the U.S.,
- examine the effects of wasteful cellphone habits, and
- predict future energy needs of a nation (based on multiple growth scenarios).

# Assumptions

- The population of the U.S. is increasing at roughly 3.3 million people per year [U.S. Census Bureau 2009].
- The relatively stable energy needs of business and government landlines, payphones, etc. have a negligible impact on energy consumption dynamics during the household transition from landlines to cellphones.
- No household member old enough to need phone service is ever without it.
- Citizens with more than one cellphone are rare enough to have a negligible energy impact.
- The energy consumption of the average cellphone remains constant. Future changes in cellphone energy requirements depend largely on changes in user habits and in manufacturing efficiency, so they are difficult to predict. However, we drop this assumption in our final section.

# Energy Consumption Model

Our approach involves three steps:

- We model households as state machines with various phones and appliances.
- We use demographic data to determine the probability of households changing state.
- By simulating multiple households, we extrapolate national energy impacts.

# Households

The basic component of our model is the household. Each household has the following attributes:

$m$: number of members old enough to need a telephone.

$t$: number of landline telephones.

$c$: number of members with cellphones.

The state of each household can be described in terms of the above values. We generate $m$ from available demographic data and hold it constant.

A household can exist in only one of four disjoint states at a time; each state has two associated conditions:

- Initial State: the household uses only landline telephones: $t > 0$, $c = 0$.
- Acquisition State: the household has acquired its first cellphone: $t > 0$, $0 < c < m$.
- Transition State: all household members have their own cellphone, but the landline is retained: $t > 0$, $c = m$.
- Final State: the household has abandoned landline telephones: $t = 0$, $c = m$.

We do not assume that all states are reached during the timeline of a household.
We assume that cellphones, once acquired, are never lost, and that landlines, once dropped, are never re-adopted. Thus, a household never re-enters a state that it has left and reaches one or more of the above states in the order listed.

Consider a household with three members ($m = 3$), one landline telephone ($t = 1$), and no cellphones yet ($c = 0$). Figure 1 shows the timeline of such a household as it moves through the four states.

![](images/ee4a8fa2bb49f56c76eca16f6bc83f6fac3f1d684f03eeb91dfc4642eabaa44e.jpg)
Figure 1. Power consumption timeline for a hypothetical household.

Our model generates household state-transition probabilities from demographic data. However, this process is simulation-dependent, as we discuss later.

# Nations

Households are only part of the story. We model the national timeline during the country-wide transition from landlines to cellphones as a composition of multiple overlapping household timelines. Furthermore, the decisions that households make regarding when to acquire cellphones and when to abandon landlines depend on the larger national context. For example, a household would be much more likely to acquire its second or third cellphone in 2008 than it would have been in 1990.

A hypothetical nation with only three households might have the timeline composition of Figure 2. That the three household power usages converge is a result of there being three members in each household.

We proceed to construct such a timeline for the U.S. We average the power consumption over all households in the U.S. to generate a national timeline like that in Figure 3.

# The Current U.S.

# Using Technological Data

To use our model in conjunction with relevant data, we have to calculate:

$C_{\mathrm{wattage}}$: the average power consumption of a cellphone over its lifetime.

$L_{\mathrm{wattage}}$: the average power consumption of a landline phone over its lifetime.
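The four household states are determined entirely by $(t, c, m)$. As a minimal sketch (our own illustration, not the authors' code), the classification can be written as:

```python
def household_state(t: int, c: int, m: int) -> str:
    """Classify a household by landline count t, cellphone count c,
    and number of members m, per the four disjoint states in the text."""
    if t > 0 and c == 0:
        return "initial"       # landlines only
    if t > 0 and 0 < c < m:
        return "acquisition"   # first cellphones acquired, landline kept
    if t > 0 and c == m:
        return "transition"    # every member has a cellphone, landline kept
    if t == 0 and c == m:
        return "final"         # landline abandoned
    raise ValueError("configuration outside the model's assumptions")

# A three-member household moving through the timeline of Figure 1:
print([household_state(*s) for s in [(1, 0, 3), (1, 2, 3), (1, 3, 3), (0, 3, 3)]])
# prints: ['initial', 'acquisition', 'transition', 'final']
```

Because cellphones are never lost and landlines are never re-adopted, the only reachable transitions move down this list, which is why a household visits its states in order.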
![](images/04bd9a9227be279205d2aca0be63bd611ab9d002a7bc9faef167437faae442c0.jpg)
Figure 2. Power consumption timeline for three hypothetical households.

![](images/f9a7d8066952e70b9ef1987e13db146bef479ce6b3795be107b3a7e2cb784644.jpg)
Figure 3. Power consumption timeline for a hypothetical nation consisting of the three households shown previously.

We deal with only cordless landline phones because corded phones use minimal levels of energy and are ignored in the literature that we consulted [Frey et al. 2006; Rosen et al. 2001].

We derive $C_{\mathrm{wattage}}$ as follows:

$$
C_{\mathrm{wattage}} = \mathrm{Charger}_{\mathrm{wattage}} + \frac{C_{\mathrm{upfront}}}{C_{\mathrm{lifetime}}}.
$$

We add to the usage wattage of the charger the upfront energy cost (in joules) of manufacturing a cellphone, amortized over the lifetime of a cellphone (in seconds). The bulk of energy consumption occurs in manufacturing and use [Frey et al. 2006], so we ignore the rest of a phone's life cycle (e.g., shipping).

For cellphones, Frey et al. [2006] give:

$$
C_{\mathrm{upfront}} = 148~\mathrm{MJ}, \qquad C_{\mathrm{lifetime}} = 2~\mathrm{yr}, \qquad \mathrm{Charger}_{\mathrm{wattage}} = 1.835~\mathrm{W}.
$$

Analogously, for cordless phones we have:

$$
L_{\mathrm{wattage}} = \mathrm{Cordless}_{\mathrm{wattage}} + \frac{L_{\mathrm{upfront}}}{L_{\mathrm{lifetime}}}.
$$

Though there are many different kinds of cordless phones, we use the values for cordless phones with integrated answering machines, as determined by Rosen et al. [2001]:

$$
L_{\mathrm{upfront}} = 167~\mathrm{MJ}, \qquad L_{\mathrm{lifetime}} = 3~\mathrm{yr}, \qquad \mathrm{Cordless}_{\mathrm{wattage}} = 3.539~\mathrm{W}.
$$

Thus, our simulation uses the following values:

$$
C_{\mathrm{wattage}} = 4.182~\mathrm{W}, \qquad L_{\mathrm{wattage}} = 5.304~\mathrm{W}.
$$

# Demographic Data

We need demographic data to guide the transition of household states over the course of a simulation. We could allow households to decide randomly when and whether to adopt new cellphones, as well as when and whether to drop their landline. However, we prefer to use actual penetration data to probabilistically weight household decisions.

Consider the household decision of whether to purchase a cellphone in month $M$. We use a three-step process to produce the cellphone acquisition probability function $a(M)$ employed in our simulation:

1. Find historic data about the number of cellphone owners over time.
2. Interpolate between data points.
3. Define $a(M)$, the probability of a simulated household acquiring a cellphone in month $M$.

For step 1, we use data from the International Telecommunication Union [n.d.]. In step 2, we use linear interpolation between available data points to make a continuous function from 1990 (the start of our simulation) to 2009, as shown in Figure 4.

![](images/8c40cd5c89bb54795bbf6c8c2ba42fee90df8e80d62426f0a6ae7785a38d4846.jpg)
Figure 4. Cellphone penetration demographics.

Then we use a linear regression to extrapolate between 2009 and 2040. Call this function $f$. Then, for step 3, we have

$$
a(M) = f(M) - \frac{\sum_{H \in \mathrm{Houses}} c(H, M)}{\sum_{H \in \mathrm{Houses}} m(H, M)}, \tag{1}
$$

where

$c(H, M)$ is the number of cellphones owned by members of simulated household $H$ in month $M$,

$m(H, M)$ is the number of members in simulated household $H$ in month $M$, and

the summations are over all households in the simulation.

In essence, (1) subtracts the current simulated cellphone penetration during month $M$ from the approximated market penetration, $f(M)$, which is derived from available data.
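The amortized wattages above are easy to reproduce. The sketch below is our own check; the 365-day year used to convert lifetimes to seconds is our assumption:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # simple 365-day year, our assumption

def lifetime_wattage(standby_w: float, upfront_mj: float, lifetime_yr: float) -> float:
    """Average power: steady charger/base draw plus manufacturing energy
    amortized over the device lifetime (MJ -> J, years -> seconds)."""
    return standby_w + upfront_mj * 1e6 / (lifetime_yr * SECONDS_PER_YEAR)

C_wattage = lifetime_wattage(1.835, 148, 2)   # cellphone
L_wattage = lifetime_wattage(3.539, 167, 3)   # cordless landline
print(round(C_wattage, 3), round(L_wattage, 3))
# prints: 4.182 5.304
```

Both values match the paper's $C_{\mathrm{wattage}} = 4.182$ W and $L_{\mathrm{wattage}} = 5.304$ W to three decimals.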
Using $a(M)$, the households in our simulation make decisions that approximate historical data. As the second term in (1) approaches the historical value returned by $f(M)$, the chance of a simulated household buying a cellphone decreases to zero.

We perform an almost identical process with historic landline ownership data to determine the probability of a household dropping its landline in month $M$; we omit the details. Mnemonically, however: $a(M)$ is the probability of acquiring a cellphone, and $d(M)$ is the probability of dropping a landline.

# Simulating the Current U.S.

The historical demographic data help guide our simulation, and technological data help us calculate power consumption at any point during the simulation.

We algorithmically generate household timelines as follows:

```
while month M is before the end date:
    for every house H:
        if H is in the 'initial' or 'acquisition' state:
            get a new cellphone with probability a(M)
        if H is in the 'transition' state:
            drop the landline with probability d(M)
    calculate power consumption
    M = M + 1 month
```

The power consumption is calculated from $C_{\mathrm{wattage}}$, $L_{\mathrm{wattage}}$, and current phone ownership.

Figure 5 shows the timeline of power consumption for the U.S. over the past 19 years, with future projections. Interesting features of this graph are:

- The steep rise in energy consumption as Americans acquire cellphones yet retain their landlines.
- The drop after cellphone penetration slows and landlines are abandoned.
- The rising slope after households have dropped their landlines and the population grows.

At first, most households tend to be in the Acquisition State, having both landlines and an increasing number of cellphones. Next, households begin to progress to the Transition State, slowly dropping landlines while retaining cellphones—hence, overall consumption drops.
The final upward slope represents the steady state, in which population growth (and associated cellphone acquisition) is the only factor affecting energy consumption.

![](images/d87667172a1b372adff17a4b64e80894170e963f075955eca12ea7f6010d5b73.jpg)
Figure 5. Energy consumption timeline for the U.S. over the past 19 years, with future projections.

# Optimal Telephone Adoption

For an emerging nation without phone service but with an economic status roughly similar to the current U.S., we examine two hypothetical scenarios for introducing phone service:

- Cellphones Only, or
- Landlines Only.

Because it took Russia roughly 6 years for cellphone penetration to go from $2\%$ to $105\%$ [International Telecommunication Union n.d.], we assume a similar timescale for introducing cellphones to our hypothetical nation. Furthermore, a country with the same economic status as the U.S. should be capable of making a similarly quick adoption of either cellphones or landline phones, even though landline infrastructure involves the extra complexity of laying cables.

# Cellphones Only

For our cellphone introduction plan, we assume that $0\%$ of the population in 2009 have cellphones and that $100\%$ of the population in 2015 have cellphones. If we interpolate linearly between these two dates, we can derive the number of people with a cellphone in any month during the 6-year period. If we assume that the rate at which cellphones consume energy remains roughly the same between 2009 and 2015, then we have all the information we need to run our simulation.

The only major change that we make to our model is that the Initial State of a household now involves having no phones at all, and the Final State involves each household member owning a cellphone.

The steep slope levels off when cellphone market penetration reaches $100\%$, and the only relevant factor after that is population growth (Figure 6, top curve).
+ +# Landlines Only + +We alter our model so that the Initial State of a household still involves having no phones and the Final State involves having one landline (Figure 6, bottom curve). + +![](images/b15161cafde35d8119b931f64368ffd9a5dbdc490353f4329eaf4677bbd0709c.jpg) +Figure 6. Power consumption comparison of adoption plans: cellphone only (top curve), "cellphone light" use (middle curve), and landline only (bottom curve). + +The Landlines Only plan requires less than half the power of the Cellphones Only plan. However, we prefer to delay our recommendation. First, we examine a way to make cellphone adoption more energy efficient. + +# Waste and "Vampire" Chargers + +Although the above comparison shows Landlines Only to be a clear winner, we should take into account that the rate at which cellphones consume energy varies depending on the practices of users. Until now, we have assumed that the energy consumption of a cellphone is equal to the consumption of its charger—even though many people do not use their charger as conservatively as they could. We now relax this assumption and assess the total cost of certain wasteful practices by supposing that our hypothetical nation's citizens + +- never charge a cellphone after it is finished charging and +- never leave their charger plugged in when not charging the phone. + +The value for $C_{\mathrm{wattage}}$ that we calculated earlier was based on the assumption of Frey et al. [2006] that cellphone chargers spend their lifetimes plugged in—mostly in standby ("vampire") mode. Figure 7 shows, in barrels-of-oil-equivalent (BOE), the amount of energy wasted each month by vampire charging. + +![](images/8b1eb54fd5b713bdfa314cb5362c88b3cf4ebe784e373748116adf04fd434c63.jpg) +Figure 7. Barrels-of-oil-equivalent (BOE) wasted due to vampire charging. + +We now derive a new value for $C_{\mathrm{wattage}}$ based on Roth and McKenney [2007], which shows that the average cellphone needs to spend only 256 hr/yr charging. 
In short, we make $C_{\mathrm{wattage}}$ depend strictly on its minimum battery requirements and assume that users charge their phones only enough to keep them charged for the entire day. Roth and McKenney also suggest that chargers require 3.7 W when charging.

$$
C_{\mathrm{wattage}}^{\prime} = \mathrm{Battery}_{\mathrm{wattage}} + \frac{C_{\mathrm{upfront}}}{C_{\mathrm{lifetime}}}, \tag{2}
$$

$$
\mathrm{Battery}_{\mathrm{wattage}} = \frac{\text{Time spent charging}}{\text{Lifetime}} \times \text{Wattage when charging}.
$$

Thus, $\mathrm{Battery}_{\mathrm{wattage}} = \frac{256~\text{hr}}{8760~\text{hr}} \times 3.7~\mathrm{W} = 0.108~\mathrm{W}$. The second term in (2) is the same amortized manufacturing cost as in our original formula for $C_{\mathrm{wattage}}$. So,

$$
C_{\mathrm{wattage}}^{\prime} = 2.455~\mathrm{W}.
$$

Recall that our previous value was

$$
C_{\mathrm{wattage}} = 4.182~\mathrm{W}.
$$

The middle curve of Figure 6 shows the lower energy expenditure of this "cellphone light" use.

# Other Household Appliances

Generalizing our previous analysis, we now assume that households do not simply use cellphones and/or landlines. They also each have the following common appliances:

- 0 or 1 computer ($50\%$ have 1) [Newburger 2001],
- 0 or 1 DVD player ($84\%$ have 1) [Nielsen Media Research 2007], and
- 2 or 3 TVs [Nielsen Media Research 2007].

We select these appliances because they are responsible for a significant amount of household energy consumption [Floyd n.d.]. The "vampire" energy leakage from these appliances is:

- Computer: 2.63 W [Roth and McKenney 2007]
- DVD player: 3.64 W [Roth and McKenney 2007]
- TV: 6.53 W [Floyd n.d.]

The graph of a single household might look like Figure 8. Figure 9 shows our hypothetical nation's wasted power, interpreted in barrels-of-oil-equivalent (BOE).
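A quick check of these numbers (our own arithmetic, reusing the 365-day-year amortization from the earlier wattage calculation):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 365-day year, our assumption

# Battery-only draw: 256 hr/yr of charging at 3.7 W, averaged over 8760 hr.
battery_wattage = 256 / 8760 * 3.7

# Amortized manufacturing energy (148 MJ over a 2-yr lifetime), as before.
upfront_wattage = 148e6 / (2 * SECONDS_PER_YEAR)

C_prime = battery_wattage + upfront_wattage
print(round(battery_wattage, 3), round(C_prime, 3))
# prints: 0.108 2.455
```

This reproduces $\mathrm{Battery}_{\mathrm{wattage}} = 0.108$ W and $C_{\mathrm{wattage}}^{\prime} = 2.455$ W.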
Clearly, telephone-related energy loss is a significant contributor to the overall energy consumed by the U.S. However, other electrical appliances have a larger impact.

![](images/d40ef0b67f30945c42ea8db9adffa90631eda29b92c652e5efc6d5700dfbf433.jpg)
Figure 8. Energy consumption timeline for a household with various appliances, transitioning from landlines to cellphones.

![](images/f3e9d11d7af17dcc12c7489f9a7d4ea2dd86e7f96ea1244a3eb4b4865de099c3.jpg)
Figure 9. The hypothetical nation's wasted power, in barrels-of-oil-equivalent (BOE).

# Predictions

Here we tie our previous work together into a predictive simulation that investigates the energy impact of the following eventualities:

- Cellphone efficiency stays the same.
- Cellphone efficiency decreases (e.g., with the introduction of smartphones).
- People save $50\%$ of energy currently lost to "vampire" charging.
- People do not stop "vampire" charging.

In all cases, the population of the nation is assumed to grow at 3 million people per year—a rate comparable to that of the current U.S.

# Optimistic Prediction

For our optimistic prediction, we assume that cellphone energy requirements remain constant with each successive generation of cellphones and that the population eliminates $50\%$ of energy consumption due to "vampire" charging.

Recall that our best-case value for the use-phase power consumption of a cellphone (no vampire charging) is

$$
\mathrm{Battery}_{\mathrm{wattage}} = 0.108~\mathrm{W},
$$

and our worst-case scenario (charger always plugged in) is

$$
\mathrm{Charger}_{\mathrm{wattage}} = 1.835~\mathrm{W}.
$$

We choose a use-phase value half-way between the two:

$$
\mathrm{Realistic}_{\mathrm{wattage}} = 0.9715~\mathrm{W}.
$$

As in (2), we add this to the manufacturing-phase energy cost to obtain an optimistic (but not too optimistic) average cellphone wattage. With this value, we graph in Figure 10 the power consumption over the next 50 years.

Landline telephone usage still contributes significantly to the total power consumption of the nation until the year 2030. The cellphone power-consumption trend may not be meaningful until viewed alongside the pessimistic prediction.

# Pessimistic Prediction

We assume that cellphone energy requirements increase with each successive generation of cellphones at a rate comparable to the increase from regular cellphones to smartphones. In short, we are modeling the transition from landlines to cellphones to smartphones. We also assume that the population does not manage to avoid "vampire" energy loss.

Because smartphone technology exists in a state of relative infancy, technical information about it is scarce. Thus, we estimate the average wattage of a smartphone based on the fact that for all tasks (emailing, text messaging, idling, etc.) a smartphone requires more than twice as much power as a regular cellphone [Mayo and Ranganathan 2005]. Endeavoring to be conservative, we assume that smartphone manufacturing costs are the same as for cellphones, even though they are likely much higher. Thus, we borrow most values from our original formula for $C_{\mathrm{wattage}}$ to calculate average smartphone wattage:

$$
S_{\mathrm{wattage}} = 2 \times \mathrm{Charger}_{\mathrm{wattage}} + \frac{C_{\mathrm{upfront}}}{C_{\mathrm{lifetime}}}.
$$

With $S_{\mathrm{wattage}} = 6.017~\mathrm{W}$, and smartphones becoming widespread at around 2025, we are ready to make our comparison.

# Comparison

The two predictive scenarios above are represented together in Figure 10, which graphs the nation's total power consumption.
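Both scenario wattages follow directly from the earlier values. This is our own arithmetic check, again assuming a 365-day year for the amortized manufacturing term:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600          # 365-day year, our assumption

charger_w = 1.835                           # charger always plugged in (worst case)
battery_w = 0.108                           # charging only as needed (best case)
upfront_w = 148e6 / (2 * SECONDS_PER_YEAR)  # amortized manufacturing energy

realistic_w = (battery_w + charger_w) / 2   # optimistic use-phase value
S_wattage = 2 * charger_w + upfront_w       # pessimistic smartphone wattage
print(round(realistic_w, 4), round(S_wattage, 3))
# prints: 0.9715 6.017
```

This recovers $\mathrm{Realistic}_{\mathrm{wattage}} = 0.9715$ W and $S_{\mathrm{wattage}} = 6.017$ W.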
![](images/2ecd1c03d92e2b7f8434d603f2acec023b394e93dc5c2cb84393c5a5f26b.jpg)
Figure 10. Comparison between optimistic prediction and pessimistic prediction.

Our model leads us to recommend the adoption of conservative practices (on the part of cellphone users) and research into greater phone efficiency (on the part of cellphone manufacturers). A $50\%$ reduction in vampire phone charging and a dedication to energy-efficient phones, according to our simulation, would result in the conservation of 3.9 billion BOE over the next 50 years. Even our pessimistic scenario is not as pessimistic as it could be, since we chose a deliberately low value for the energy cost of smartphones; our optimistic scenario is not as optimistic as it could be, since we assumed only a $50\%$ reduction in vampire energy losses.

# Conclusion

Modeling the cellphone revolution can benefit from a bottom-up approach. The basic components of this approach are households undergoing a series of transitions such that each member acquires a cellphone and, eventually, the household abandons its landline.

For an emerging nation adopting a new telephone system, landline adoption would be twice as efficient as cellphone adoption. However, if the nation enforces conservative cellphone energy use, the cellphone plan can be almost comparable to the landline plan.

Also, our model is capable of showing a vast divergence between an optimistic future scenario and a pessimistic one. This being the case, we must recommend a concerted energy-conservation effort on the part of cellphone makers and cellphone consumers. Doing so would result in savings of over 3.9 billion barrels-of-oil-equivalent (BOE) over the next 50 years.

# Strengths and Weaknesses

# Strengths

- Uses demographics. Our model simulates the decisions of households based on historic data, making it a good model for assessing the energy consumed to date.
- Incorporates manufacturing.
We combine the energy cost of a phone's manufacturing phase with its use-phase wattage, thereby keeping our model simple without ignoring the significant energy consumed during manufacturing.
- Retains flexibility. Because our model is a bottom-up approach, various details at the household level can easily be incorporated into national simulations. We did this, for example, to assess the cost of "vampire" chargers and to assess the cost of non-telephonic appliances.

# Weaknesses

- Ignores infrastructure. We do not examine the energy cost of cellular infrastructure (towers, base stations, servers, etc.) as compared to the energy cost of landline infrastructure (i.e., telephone lines and switches).
- Extrapolates naively. Though we use demographic data to guide household decisions before 2009, we use simple regression techniques to forecast future demographic information. Using better forecasts would make predictions more accurate. Data that we extrapolated are: cellphone energy-use changes, cellphone penetration dynamics, and landline abandonment rates.
- Simplifies households. Our model doesn't examine all household-member dynamics—e.g., members being born, growing old enough to need cellphones, moving out, starting households of their own, etc.

# References

Floyd, David B. n.d. Leaking electricity: Individual field measurement of consumer electronics. enduse.lbl.gov/info/ACEEE-Leaking.pdf.
Frey, S.D., David J. Harrison, and H. Eric Billet. 2006. Ecological footprint analysis applied to mobile phones. Journal of Industrial Ecology 10 (1-2): 199-216. http://www3.interscience.wiley.com/journal/120128219/abstract.
International Telecommunication Union. n.d. ITU World Telecommunication/ICT Indicators Database. http://www.itu.int/ITU-D/ICTEYE/Reports.aspx.
Mayo, Robert N., and Parthasarathy Ranganathan. 2005. Energy consumption in mobile devices: Why future systems need requirements-aware energy scale-down.
In Power-Aware Computer Systems, edited by Babak Falsafi and T.N. Vijaykumar, 26-40. New York: Springer. www.hpl.hp.com/personal/Partha_Ranganathan/papers/2003/2003_lncs_escalate.pdf.
Newburger, Eric C. 2001. Home computers and Internet use in the United States: August 2000. www.census.gov/prod/2001pubs/p23-207.pdf.
Nielsen Media Research. 2007. Average U.S. home now receives a record 104.2 TV channels, according to Nielsen. http://www.nielsenmedia.com/nc/portal/site/Public/menuitem.55dc65b4a7d5adff3f65936147a062a0/?vgnnextoid=48839bc66a961110VgnVCM100000ac0a260aRCRD.
Rosen, Karen B., Alan Meier, and Stephan Zandelin. 2001. Energy use of set-top boxes and telephony products in the U.S. http://eetd.lbl.gov/ea/reports/45305/45305.pdf.
Roth, Kurt W., and Kurtis McKenney. 2007. Energy consumption by consumer electronics in U.S. residences. http://www.ce.org/pdf/Energy%20Consumption%20by%20CE%20in%20U.S.%20Residences%20(January%202007).pdf.
Singhal, Pranshu. 2005. Integrated Product Policy Pilot Project Stage I Report. Finland: Nokia Corporation. http://www.esm.ucsb.edu/academics/courses/282/Readings/Singhal-Nokia-2005a.pdf.
U.S. Census Bureau. 2009. State & county QuickFacts. http://quickfacts.census.gov/qfd/states/06000.html.

![](images/c4f7fe0ed70716aeaa569668601c9224087b4f16fcc97275e150e18302355519.jpg)
Tommy Rogers, Stephen Foster, and Bob Potter.

# Wireless Networks: An Easy Cell

Jeff Bosco

Zachary Ulissi

Bob Liu

University of Delaware

Newark, DE

Advisor: Louis Rossi

# Summary

The number of cellphones worldwide raises concerns about their energy usage, even though individual usage is low ($<10$ kWh/yr). We first model the change in population and population density until 2050, with an emphasis on trends in the urbanization of America. We analyze the current cellular infrastructure and distribution of cell-site locations in the U.S.
By relating infrastructure back to population density, we identify the number and distribution of cell sites through 2050. We then calculate the energy usage of individual cellphones based on average usage patterns.

Phone-charging behavior greatly affects power consumption. The power usage of phones consumes a large part of the overall idle energy consumption of electronic devices in the U.S.

Finally, we calculate the power usage of the U.S. cellular network to the year 2050. If poor charging behavior continues, the system will require $400\mathrm{MW / yr}$, or 5.6 million bbl/yr of oil; if ideal charging behavior is adopted, this number will fall to $200\mathrm{MW / yr}$, or 2.8 million bbl/yr of oil.

# Introduction

As energy becomes a growing issue, we are evaluating current infrastructure to locate inefficiencies in power consumption. The increase in cellphone usage in the past decade raises concern about greater energy consumption compared to landline phone networks.

By modeling subscriber growth and trends, we can get a clearer picture of the energy consequences of our mobile network. By correlating the growth of mobile subscribers with changes in our mobile infrastructure, we can strategically develop our current communications network to meet energy-efficient guidelines.

# Current Cellular Network Model

# Assumptions

- The FCC database contains all relevant and major cell sites in the U.S.
- Cell sites serve areas of homogeneous population density, characterized by the population density at the exact location of the site.
- All cell sites can communicate to $50\mathrm{km}$ (approximately the limit of modern technologies).
- The strength of a cell tower depends primarily on the number of antennas (we lack information about transmission power).

# Communication Standards

CDMA and GSM, the two primary standards for mobile phones in the U.S., require different antennas, so different cell sites exist for each standard.
However, to simplify our models, we assume that all mobile phones use one generic standard.

# Network Model and Component Power Usage

A simplified cellular network model and corresponding energy usage requirements are shown in Figure 1. Cellphones connect directly to cell sites, which may or may not be mounted on antenna towers. We consider each antenna mounted on a tower as a separate cell site. A tower can handle a range of calls at once (about 200-500 users, using 600-1000 W [Ericsson 2007]) and pass them along to Mobile Switching Centers (MSCs). Communication between MSCs and cell sites can be accomplished through fiber-optic networks or microwave connections. Each MSC can handle approximately 1.5 million subscribers and consumes about $200\mathrm{kW}$. MSCs connect directly into the communications backbone of the country. Since a fiber-optic backbone is necessary in any scenario (or in any Pseudo U.S.), we do not consider it in energy estimates.

![](images/0e9b4db6fd4cb33eb08e9c82d1892c3eac1ac0e94868a92e3d50b7e0dac597d9.jpg)
Figure 1. Simplified network model for infrastructure calculations. Each component of each type (cell sites and MSCs) is assumed to be identical for all carriers and geographies.

# Cell Site Registration Databases

All cellular radio transmitters greater than $200\mathrm{m}$ in height are required to be registered in the FCC Universal Licensing System Database [Federal Communications Commission 2009], ensuring that a majority (but not all) cell sites are included. The database contains approximately 20,000 cell site locations comprising about 130,000 individual cell sites.

# Tower Location

We show cell-site location and population density in Figure 2. Interestingly, several cell towers seem to be in the Gulf of Mexico and in the Atlantic Ocean (either due to errors in registrations or to the use of ships and/or oil rigs).
Also interesting is the single tower at the center of Dallas (northern Texas), which contains 25 antennas and suggests a series of smaller sites spread throughout the city.

# Antennas per Cell Site

Many cell sites in urban areas use more antennas and higher transmission power. Although some Effective Radiated Power (ERP) data is included in the FCC database, many sites have no published information and several have a negative ERP (impossible). Many sites have similar transmission power, likely due to FCC regulations. To quantify the power of a cell site, we use the number of antennas. While most sites have only a single antenna, many have several, and a few have as many as nine (Figure 3).

![](images/e0e03983c706af469e9b72a6a2b0f9bc5a43e542806e7566065eeb03c12f6e25.jpg)
Figure 2. Cellphone towers (red) and population density (grays).

![](images/8b00dc2b5ee3aec096484a6dc23be68e0c7082cf214020d05dd27359dafa5fd5.jpg)
Figure 3. Distribution of the number of antennas per tower.

# Tower-Antenna-Population Density Relations

To calculate how many cell sites are used on average in regions of varying population density, we use the site locations to interpolate densities from the maps. Binning the data for population density, we get in Figure 4 the relationship between antenna density and population density. The initial portion of the graph shows an approximately steady increase in the number of towers, with one antenna per tower. However, above 150 people/km², the number of towers levels off and the number of antennas per tower rises to compensate for the increased population.

![](images/c2e4b225315c021b38d6d7833f708a3d3345a80e44dbc10ab30e4557f419280b.jpg)
Figure 4. Antenna density vs. population density.

# Coverage Overlap

We investigated overlapping coverage by determining the number of nearby cell sites at a range of locations; the method is illustrated in Figure 5.
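The overlap test used in this section reduces to a great-circle distance check against the 50 km tower range. A minimal sketch in Python (the Earth radius and tower range come from the text; the sample tower coordinates are hypothetical):

```python
from math import radians, sin, cos, acos

EARTH_RADIUS_KM = 6378  # radius used in the paper's distance formula
TOWER_RANGE_KM = 50     # assumed maximum range of a cell site

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    d1, d2 = radians(lat1), radians(lat2)
    dlon = radians(lon1 - lon2)
    # Clamp to [-1, 1] to guard acos against floating-point drift.
    arg = max(-1.0, min(1.0, cos(d1) * cos(d2) * cos(dlon) + sin(d1) * sin(d2)))
    return EARTH_RADIUS_KM * acos(arg)

def towers_in_range(lat, lon, towers):
    """Count towers whose great-circle distance is within plausible range."""
    return sum(
        great_circle_km(lat, lon, tlat, tlon) <= TOWER_RANGE_KM
        for tlat, tlon in towers
    )

# Hypothetical towers near Newark, DE, plus one far away
towers = [(39.68, -75.75), (39.70, -75.60), (40.50, -74.00)]
print(towers_in_range(39.68, -75.75, towers))
```

In the full model this count is taken over every cell of the population density grid, after pre-filtering the tower list by the 1° latitude / 3° longitude window.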
![](images/375f0b3ded652f61c02015c837ff31bc82bd67e853713e964fd4a8f571e0a229.jpg)
Figure 5. Illustration of algorithm to determine number of overlapping cell sites. The figure does not represent the eccentricities of the grid due to changing longitudinal lengths.

For each cell in the population density grid, we construct a trial list of all towers within a reasonable range (towers within $1^{\circ}$ latitude, $3^{\circ}$ longitude, or approximately $100 - 200\mathrm{km}$ in each direction). For each candidate tower, we calculate the great-circle distance (in km) between the location (latitude $\delta_{1}$, longitude $\lambda_{1}$) and the tower (latitude $\delta_{2}$, longitude $\lambda_{2}$) [Weisstein n.d.]:

$$
d = 6378 \cos^{-1}\left[ \cos\delta_{1}\cos\delta_{2}\cos\left(\lambda_{1} - \lambda_{2}\right) + \sin\delta_{1}\sin\delta_{2} \right].
$$

If the great-circle distance is less than the maximum range of a tower (approximately $50\mathrm{km}$), the region is considered to be in the tower's plausible range. We thus calculate for each location the number of cell sites within range (Figure 6). While some cities have a large degree of overlap, others accomplish full connectivity by using many smaller rooftop sites or higher-power antennas. Also noticeable are several regions in the Western U.S. with no current connectivity.

![](images/e0bc7f36cdaa96a644cc1a2cac82e9921393f981d503eaa713e4ace8b06b43c7.jpg)
Figure 6. Results of overlap calculations for the known grid of cell sites as reported by the FCC. Most urban regions have higher overlap of cell towers to cope with an increased population load.

# Model for Cellphone Usage

# Basic Assumptions

Our investigations uncover three main components of electricity consumption by cellphones:

- powering the phone during talking and standby,
- powering the charger with a phone attached, and
- powering the charger without a phone attached.
Therefore, we model the cellphone usage of an average person as a function of three different characteristics:

- At what remaining battery level (0–100%) does the user recharge the cellphone?
- How long does the cellphone remain connected to the charger after the battery is completely charged?
- Does the user unplug the charger from the outlet upon completion of battery charging?

The possible power consumption states of a phone adapter are displayed in Table 1.

Table 1. Telephone charger states and energy consumption.

| State | Consumption (W) |
|---|---|
| Unplugged | 0 |
| Plugged in, no phone | 0.5 |
| Phone attached, not charging | 0.9 |
| Phone attached, charging | 4.0 |

# Cellphone Information and Usage Behavior

# Battery Capacity

Table 2 displays the average battery capacity, power consumption during talking, and standby power consumption for batteries of the nine largest mobile phone manufacturers in the U.S. We determined averages using manufacturer information about more than 150 popular cellphones, approximately 15 phones per provider [IDC 2008]. Power consumption is calculated using battery capacity and estimates of talk time and standby time for individual phones, assuming each phone has a $3.7\mathrm{V}$ lithium-ion battery. The last line shows an overall average weighted by 2008 U.S. market share.

Table 2. Average capacity and energy consumption for popular U.S. cellphones.
| Rank | Manufacturer | Market share (%) | Battery capacity (mAh) | Talk power (W) | Standby power (W) |
|---|---|---|---|---|---|
| 1 | Samsung | 22.0 | 980 ± 228 | 0.0138 ± 0.0051 | 0.875 ± 0.293 |
| 2 | Motorola | 21.6 | 826 ± 122 | 0.0108 ± 0.0023 | 0.655 ± 0.292 |
| 3 | LG | 20.7 | 890 ± 106 | 0.0116 ± 0.0036 | 0.923 ± 0.242 |
| 4 | RIM | 9.0 | 1216 ± 276 | 0.0145 ± 0.0060 | 1.065 ± 0.348 |
| 5 | Nokia | 8.5 | 1066 ± 192 | 0.0122 ± 0.0032 | 0.735 ± 0.334 |
| 6 | Sony Ericsson | 7.0 | 1015 ± 214 | 0.0085 ± 0.0039 | 0.431 ± 0.110 |
| 7 | Kyocera | 5.0 | 900 ± 000 | 0.0200 ± 0.0030 | 0.970 ± 0.080 |
| 8 | Sanyo | 4.0 | 810 ± 89 | 0.0161 ± 0.0037 | 0.908 ± 0.152 |
| 9 | Palm | 2.2 | 1500 ± 346 | 0.0167 ± 0.0042 | 1.402 ± 0.353 |
| | Weighted average | | 960 ± 166 | 0.0127 ± 0.0039 | 0.829 ± 0.263 |
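As a quick check, the weighted averages in the last row of Table 2 can be reproduced directly from the per-manufacturer values and 2008 market shares:

```python
# (manufacturer, market share %, battery capacity mAh, talk power W) from Table 2
phones = [
    ("Samsung",       22.0,  980, 0.0138),
    ("Motorola",      21.6,  826, 0.0108),
    ("LG",            20.7,  890, 0.0116),
    ("RIM",            9.0, 1216, 0.0145),
    ("Nokia",          8.5, 1066, 0.0122),
    ("Sony Ericsson",  7.0, 1015, 0.0085),
    ("Kyocera",        5.0,  900, 0.0200),
    ("Sanyo",          4.0,  810, 0.0161),
    ("Palm",           2.2, 1500, 0.0167),
]

total_share = sum(share for _, share, _, _ in phones)  # shares sum to 100.0
avg_capacity = sum(share * cap for _, share, cap, _ in phones) / total_share
avg_talk = sum(share * talk for _, share, _, talk in phones) / total_share

print(f"weighted capacity: {avg_capacity:.0f} mAh")  # ~960 mAh, as in the table
print(f"weighted talk power: {avg_talk:.4f} W")      # ~0.0127 W, as in the table
```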
+ +# Cellphones Per Person + +The average number of cellphones owned per person is determined using historical population and mobile phone data and extrapolated to the year 2050 [Federal Communications Commission 2008; U.S. Census Bureau 2008]. Figure 7a displays the total number of cellphone subscribers normalized by the population of the U.S. The historical data fit a sigmoidal curve, assuming that the ratio will eventually reach a value of 1 cellphone per person (complete saturation). Figure 7b compares the yearly increase in U.S. population to that of cellphone users. By 2015, the predicted number of cellphone owners reaches the total number of people and continues to grow with the population. + +![](images/27624acc4022b7c1abc5ce72d3e257ff497d8f4449f6d023e04250a5b0061d55.jpg) +Figure 7a. Sigmoidal fit for the average number of cellphones per person in the U.S. + +![](images/11adb2e522864436370658c6acde117e6d88c0069671cdeb18dccbcf78386e2c.jpg) +Figure 7b. Predicted growth and saturation of cellphone owners in the U.S. + +# Average Talk-Time per Person + +The average talk time of an individual user between 1991 and 2050 is determined in a fashion similar to the average number of cellphones per owner. Figure 8 displays the trends in landline and cellphone usage in terms of total minutes used per year between 1991 and 2007 [CTIA 2008; Federal Communications Commission 2008], together with our extrapolation. We assume that average usage will eventually saturate to some value, and a first-order exponential growth function is employed to model this behavior. Figure 9 displays the predicted growth of cellphone usage assuming saturation at 15, 20, 25, and 30 minutes per person per day. + +![](images/47530c359bec708a75a5e81cfd2e5c491b4bffb9a8a1878ac1f82322422e9560.jpg) +Figure 8. Historical behavior of landline and cellphone usage in the U.S. + +![](images/2df57b4e97b6beca8df1b9474b12799456e3cde6df6b7fe0d71da0dcb797e645.jpg) +Figure 9. 
Predicted saturation behavior of average daily mobile cellphone usage. + +# Recharge Probability and Duration + +We model the battery level at which a person is likely to charge their phone as a Gaussian distribution (Figure 10), based on cellphone behavior data [Banerjee et al. 2007]. Users tend to recharge their phone batteries at between $25\%$ and $75\%$ of full capacity. + +![](images/e45f56977c04ce6653c39a931a5376479744c291a3f4ab9e7bd1f0fcfa637eac.jpg) +Figure 10. Fitted Gaussian distribution for recharging behavior of users. + +The time to charge a lithium-ion battery is typically not proportional to the remaining charge to be added. Therefore, we assume that the battery charge increases exponentially as a function of charge time, as depicted in Figure 11. + +![](images/ffdf1c59563c6608e5aa662f739bd94d3fbdd31659f6fa425b7dd2b82f7351ed.jpg) +Figure 11. Typical charge profile for lithium-ion battery. + +# Calculation of Average Energy Consumption + +We calculate the energy consumed by the average cellphone user over the course of a year by employing the battery and usage behavior extrapolations discussed earlier. We assume that the full range of remaining battery charge $(0 - 100\%)$ can occur before charging is initiated, depending on the type of user ("regular" or "ideal"). The total energy consumption is calculated from battery capacity and different power states of a charge-adapter. The duration that the adapter stays in a particular power state is determined by the frequency of charging (number of charge cycles per year), which is approximated by the power consumption during periods of cellphone talking and standby. Furthermore, the power consumption during talking/standby is weighted by the average number of minutes a person talks on the phone per day (see Figure 8). 
Finally, the average energy consumption across the entire population of cellphone users is determined using a weighted sum of energy at each remaining battery level and the probability distribution that charging starts at that battery level.

We assume that there are only two types of users:

- the "regular" user, who charges for 8 hr at a time (at the probability given by the fitted Gaussian distribution) and always leaves the charge-adapter plugged in; and
- the "ideal" user, who charges for only the time needed to reach $100\%$ charge (at the probability distribution centered at $15 - 20\%$ battery levels) and never leaves the charger plugged in when not charging.

# Energy Usage of Cellphones

The yearly energy consumed by cellphone charging between 1991 and 2050 for the "regular" user and for the "ideal" user is displayed in Figure 12. The yearly consumption of the "ideal" user is less than one-fifth that of the "regular" user. This drastic difference is primarily a consequence of unplugging the charger after charge completion. Because the "ideal" behavior wastes so little charger energy, its consumption is more sensitive to where cellular usage saturates (in minutes per person per day); these trends are harder to see with the "regular" behavior, since the majority of its energy consumption is wasted by the charger.

![](images/c799dfa65a9b66cc3cd4c266faae9d6f9f3d2720aae4a53d9a427f1b499a401f.jpg)
a. "Regular" user.

![](images/787f0d3f19ffc474cdcdf6ae0066729c014fd12ef558f945ef4c610227a098a3.jpg)
b. "Ideal" user.
Figure 12. Yearly energy consumption of "regular" user and "ideal" user, assuming different usage saturation values (15, 20, 25, and $30\mathrm{min}$/person/d).

# Pseudo-U.S. Model

# Assumptions

- No communication infrastructure exists.
- A power grid already exists.
- Each household must have television and Internet service.
- Each household has either one landline phone per person or one cellphone per person.

# Comparison of Fiber-optics to Wireless Networks

We compare the energy usage per person of an entirely wireless network with that of running a competitive fiber-optic network. Since the choice of wireless vs. fiber optic affects the energy usage of TVs, computers, and phones in a household, we consider all three of these communication methods. The estimated power usage for each system is summarized in Table 3. Based on current estimates for each electronic device [Rosen and Meier 1999], a completely wireless approach could be energy competitive against a fiber-optic solution, due to the energy-inefficient link necessary in every household.

Table 3. Electricity usage for fiber-optic and wireless approaches, per household of 2.5 members with one computer, one TV, and one phone per person.
| Category | Fiber-optic usage | Wireless usage |
|---|---|---|
| General TV | Fiber-optic link (16 W) | DTV converter (5 W) |
| Internet | | 2.5 × WIMAX card (1 W) |
| | | 2.5 × transmission (0.75 W) |
| Phone | | 2.5 × cellphone (0.75 W) |
| | | 2.5 × transmission (0.75 W) |
| Total | 16 W | 13 W |
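The totals in Table 3 follow from summing the per-household loads; a minimal check in Python (wattages from the table, household size 2.5 from the caption):

```python
MEMBERS = 2.5  # average household size used in the table

# Wireless loads per household, in watts
wireless_w = (
    5.0              # DTV converter
    + MEMBERS * 1.0  # WIMAX card per member
    + MEMBERS * 0.75 # Internet transmission per member
    + MEMBERS * 0.75 # cellphone per member
    + MEMBERS * 0.75 # phone transmission per member
)
fiber_w = 16.0       # single fiber-optic link per household

print(round(wireless_w))     # rounds to the 13 W total in the table
print(wireless_w < fiber_w)  # the wireless approach draws less power
```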
# Energy to Oil Conversion

We determine the amount of electrical energy available per barrel of oil using historical data [Energy Information Administration 2008; Taylor et al. 2008]. Figure 13a shows the heat content per barrel of oil from 1949 to 2007 with linear extrapolation to 2050. Heat content is decreasing, possibly due to a decreasing proportion of energy-rich oil in the global market. The thermoelectric efficiency (i.e., the efficiency of converting heat created by burning fuel into electricity) is displayed in Figure 13b with extrapolation. Using the heat content and thermoelectric efficiency data, the total electricity produced per barrel of oil is obtained and displayed in Figure 14. From the extrapolation, we find that one barrel of oil will produce approximately $628\mathrm{kWh}$ of electricity in 2050.

While a considerable amount of oil is needed to create 1 TWh or more of electricity, it is very unlikely that oil would be used to create this electricity. In Figure 15, we see that oil at its peak use (1977) accounted for only $17\%$ of the electricity produced in the U.S. Today, oil accounts for less than $4\%$ of electricity and this value appears to be decreasing slowly.

![](images/8f09835ce137107ebeb1bef4452585ff3a441a2a36c87b59c1d693061a4ef612.jpg)
a. Heat content.

![](images/8c12a39569a83bf0aa335cad8125bacce9fc95b2105c509c333d55c99a3314e8.jpg)
b. Thermoelectric efficiency.

Figure 13. Heat content and thermoelectric efficiency data for oil, with extrapolations.

![](images/12f79e45641064f45d05f78fd1778bca3fbb246fdf2feacc960eec806c4a36c1.jpg)
Figure 14. Electricity per barrel of oil, over time.

![](images/9334ed741a89b81324823bd5d359368646f7c9f64f182860fac16881e019a882.jpg)
Figure 15. Trends in U.S. electricity production from oil.

# Overall Charger Power Usage

To gauge the inefficiency of cellphones compared to other electronics, we compare results of our analysis with Rosen and Meier [1999].
With updating to reflect 2008 cellphone usage, the results are shown in Figure 16. Although the energy usage of cellphone chargers is significant (2 TWh/yr), it is only a small portion of the overall energy wasted by idle electronics (34 TWh/yr), or 54 million barrels per year using the conversions established above.

![](images/4117d8c6acf4a378514285f9141c13944614f454eb4e5bb85504e9f894103694.jpg)
Figure 16. Usage of various electronics according to Rosen and Meier [1999], with cellphone energy usage updated to 2008 per our model.

# Cellular Network Growth Through 2050

# Assumptions

- No new (radically disruptive) technologies will be introduced past 3G (third generation of cellphones). Current technology will improve until a minimum necessary energy usage is achieved.
- Population density growth will follow similar trends through 2050.
- The number of towers necessary for a given population density will remain constant through 2050.

# Technology Improvements

The power requirements of cellular networks have fallen drastically since the 1980s. Similar reductions in power usage are likely through 2050, either through improvements in the electronics of cell sites (computers and such) or more-efficient communication strategies (antenna transmissions). To characterize this reduction in energy, we use information on energy usage of past technologies [Ericsson 2007], as shown in Figure 17. Technologies following the primary upgrade path (1G to 2G and beyond) are leveling out in their minimum energy usage. Although the introduction of 3G initially caused a large increase in power consumption, it seems to have a greater potential for reducing energy consumption. Since future technologies cannot be accurately quantified, we assume that all future networks will be based on a variation of 3G architecture. Calculated from Figure 17, the relevant efficiencies for each decade are shown in Table 4.
![](images/b4b86002d67c453a4b7728dd43181a21ee889cad1b3b3d1a6f311c351af8d811.jpg)
Figure 17. Effect of technological improvements in cellular infrastructure on energy usage, for two different sets of technology, with corresponding exponential fits of the form $a\exp (-bx) + c$ projecting to 2050 [Ericsson 2007].
Table 4. Network technology efficiency.

| Year | Relative power usage |
|---|---|
| 2005 | 1 |
| 2010 | 0.85 |
| 2020 | 0.66 |
| 2030 | 0.63 |
| 2040 | 0.62 |
| 2050 | 0.62 |
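The leveling-off in Table 4 is consistent with the exponential form $a\exp(-bx)+c$ used in Figure 17. A sketch with illustrative parameters chosen to roughly match the table (these are not the paper's fitted values):

```python
from math import exp

# Illustrative parameters for r(t) = a*exp(-b*(t - 2005)) + c,
# chosen so that r(2005) = 1 and the long-run floor is near 0.62.
A, B, C = 0.38, 0.15, 0.62

def relative_power(year):
    """Relative network power usage vs. the 2005 baseline."""
    return A * exp(-B * (year - 2005)) + C

for year in (2005, 2010, 2020, 2030, 2040, 2050):
    print(year, round(relative_power(year), 2))
```

The key qualitative feature is the asymptote: past about 2030, efficiency gains flatten near the floor $c$, which is why the table's last three entries barely change.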
# Infrastructure Improvements

As the population grows and the use of cellphones increases, more cell sites and related infrastructure will be necessary. To model the increasing number of towers, we combine the tower density/population density relations with population density predictions. The resulting increase in towers is seen in Figure 18. These predictions assume that tower capacity will not grow directly but will instead improve through energy efficiency.

![](images/3ebfd608f38fe3b5822cacb254d0ef7c8001daa2e11315042fe888b0643427b8.jpg)
Figure 18. Predicted number of cellphone towers from 2007 to 2050.

# Overall Energy Usage

We calculate total energy usage of the U.S. cellular network using the predicted increase in cell sites, observed trends in technology, predicted usage patterns, and recent energy statistics. Final predictions are shown for two usage scenarios in Figure 19. If chargers are used inefficiently, power consumption will grow to approximately $400\mathrm{MW}$, or 5.6 million bbl/yr. However, if consumers utilize chargers efficiently, consumption by 2050 will be approximately $200\mathrm{MW}$ (2.8 million bbl/yr of oil).

# Conclusion

We estimate power consumption of the U.S. cellular network, based on

- models of usage trends,
- current infrastructure,
- population projections, and
- technology improvements.

![](images/e0626494f67751f35e92b635997f84e49057391decc8629c1ec23e53796ae34d.jpg)
a. Inefficient charger usage.

![](images/7ba5f3cecf13576beae7258c57435b38677827d6d90acb7511ca404518d65de8.jpg)
b. Ideal charger usage.
Figure 19. Predictions for the energy usage of the U.S. cellphone network for two different charge scenarios.

Technological developments will cause energy usage to decrease until 2015, after which increasing population will demand more power usage.

We assess the optimal communications network for a country similar to the U.S.
A wireless network (to houses) comprising voice, data, and TV service would draw less electricity than a fiber-optic approach and hence be optimal, as long as wireless communication can provide sufficient bandwidth (likely). + +We compare energy consumption for "regular" users and "ideal" users in terms of charging practices. A "regular" user today wastes $4.8\mathrm{kWh} / \mathrm{yr}$ through inefficient charging. + +We model energy wasted by various idle household electronics, including cellular network usage. A person today wastes $125\mathrm{kWh} / \mathrm{yr}$ through idle electronics. + +We model energy needs for phone service through 2050 and calculate the number of new cell towers and other infrastructure necessary. + +If inefficient charging strategies are used, cellular networks in 2050 will require $400\mathrm{MW}$ /yr of electricity (5.6 million bbl/yr of oil). If more-efficient chargers are introduced or people change their habits, only $200\mathrm{MW}$ of power (2.8 million bbl/yr of oil) will be required. + +# References + +Banerjee, Nilanjan, Ahmad Rahmati, Mark D. Corner, Sami Rollins, and Lin Zhong. 2007. Users and batteries: Interactions and adaptive power management in mobile systems. In UbiComp 2007: Ubiquitous Computing, edited by J. Krumm et al., 217-234. Lecture Notes in Computer Science, vol. 4717/2007. Berlin / Heidelberg, Germany: Springer. http://www.springerlink.com/content/t2m30643713220k6/. +Center for International Earth Science Information Network (CIESIN), Socioeconomic Data and Applications Center, Columbia University. 2005. Gridded population of the world, version 3 (GPWv3): Population density grids. http://sedac.ciesin.columbia.edu/gpw/global.jsp. +CTIA: The Wireless Association. 2008. 2008 CTIA semi-annual wireless industry survey. http://files.ctia.org/pdf/CTIA_Survey_Mid_Year_2008_Graphics.pdf. +Energy Information Administration. 2008. Annual energy review (AER) 2007. Technical Report DOE/EIA-0384(2007). 
http://www.eia.doe.gov/aer/.
Ericsson. 2007. Sustainable energy use in mobile communications. White paper, August 2007. http://www.ericsson.com/campaign/sustainable/mobile_communications/downloads/sustainable_energy.pdf.

Federal Communications Commission, Industry Analysis and Technology Division, Wireline Competition Bureau. 2008. Trends in telephone service. http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-284932A1.pdf.
_____. Database downloads: Antenna structure registration: Cellular—47 CFR Part 22: Licenses. http://wireless.fcc.gov/antenna/index.htm?job=uls_transaction&page=weekly.
Interactive Data Corp. 2008. Worldwide quarterly mobile phone tracker. April 2008. http://www.idc.com/getdoc.jsp?containerId=IDC_P8397.
Rosen, Karen, and Alan Meier. 1999. Energy use of U.S. consumer electronics at the end of the 20th century. Technical report, Lawrence Berkeley National Laboratory. http://eetd.lbl.gov/EA/Reports/46212/.
Taylor, Peter, with Olivier Lavagne d'Ortigue, Nathalie Trudeau, and Michel Francoeur. 2008. Energy efficiency indicators for public electricity production from fossil fuels. International Energy Agency. http://www.iea.org/textbase/papers/2008/En_Efficiency_Indicators.pdf.
U.S. Census Bureau. 2004. Table HH-6. Average population per household and family: 1940 to present. www.census.gov/population/socdemo/hh-fam/tabHH-6.pdf.
_____. 2008. Population projections: U.S. interim projections by age, sex, race, and Hispanic origin: 2000 to 2050. http://www.census.gov/population/www/projections/usinterimproj/.
Weisstein, Eric W. n.d. Great circle. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/GreatCircle.html.

![](images/0d413c0ac2b1ae0dbd74ceb856a184bfc03bfd9eedaea4717a48bf05901962d7.jpg)
Advisor Louis Rossi with team members Bob Liu, Jeff Bosco, and Zachary Ulissi.

# Judges' Commentary: The Outstanding Cellphone Energy Papers

Marie Vanisko

Dept.
of Mathematics, Engineering, and Computer Science + +Carroll College + +Helena, MT 59625 + +mvanisko@carroll.edu + +# General Remarks + +As in past years, the diverse backgrounds of the undergraduate participants yielded many interesting modeling approaches to the stated problem. The judges assessed the papers on the breadth and depth of analysis for all major issues raised, on the validity of proposed models, and on the overall clarity and presentation of solutions. + +The Executive Summary is often still below par; it should motivate the reader to read the paper. It must not merely restate the problem, but indicate how it was modeled and what conclusions were reached, without being unduly technical. + +Assumptions must be clearly stated and justified where appropriate. Some teams overlook important factors and/or make unrealistic assumptions with no rationale. It should be made clear in the model precisely where those assumptions are used. + +Graphs need labels and/or legends and should provide information about what is referred to in the paper. Tables and graphs that are taken from other sources need to have specific references. Failure to use reliable resources and to properly document those resources kept some papers from rising to the top. The best papers not only list trustworthy resources but also document their use throughout the paper. + +# Requirements and Selected Modeling Approaches + +The Cellphone Problem involved the "energy" consequences of the cellphone revolution, and five Requirements were delineated. To receive an Outstanding or Meritorious designation, teams had to address issues raised in each of these Requirements. Additionally, Outstanding papers considered both wireless and wired landlines and the infrastructure to support cellphones and landlines. + +# Requirement 1 + +Teams were first asked to estimate the number of U.S. households in the past that were served by landlines and also to estimate the average size of those households. 
They were then to consider the energy consequences, in terms of electricity use, of a complete transition from landline phones to cellphones, with the understanding that each member of each household would get a cellphone. + +To address this problem, the energy used by current landlines had to be considered. Whereas corded landline phones use relatively little electricity, the same cannot be assumed about cordless landline phones. The top papers researched this issue and arrived at documented estimates of the number of corded vs. cordless landline phones and the average number of each per household. This led to a more realistic appraisal of the energy used by the landline phone system. + +With regard to cellphones, teams that rose to the top considered the infrastructure necessary—for example, the building of numerous additional communication towers if cellphones are to replace landline phones completely. This was of special importance in the analysis of the transitional phase. Also, the varying amount of electricity required by different types of cellphones was a consideration in the transitional and steady-state phases. + +Interesting models were constructed for the transitional phase of the cellphone "takeover." Some teams considered the spread of cellphones as the spread of a disease and used the Verhulst model for logistic growth, using the population of the U.S. as the carrying capacity and estimating the rate of growth of cellphones from published reports on the growth of cellphone use. Other teams generalized this to an SIR model or used the Lotka-Volterra predator-prey model, with cellphones as the predators and landline phones as the prey. A few used the competing-species model. The judges looked very favorably upon models for which sufficient rationale was given as to why that model might be appropriate in this circumstance. Interpretation of the parameters and solutions as they applied to the problem at hand was essential. 
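The Verhulst approach described above can be sketched in a few lines (the growth rate and initial share here are hypothetical; the carrying capacity is normalized to 1, i.e., one cellphone per person):

```python
from math import exp

def logistic_share(t, r=0.35, p0=0.01):
    """Fraction of the population owning cellphones after t years,
    under logistic (Verhulst) growth with carrying capacity 1.
    r (growth rate) and p0 (initial share) are illustrative values."""
    return 1.0 / (1.0 + (1.0 / p0 - 1.0) * exp(-r * t))

# Adoption rises along an S-curve and saturates at the carrying capacity.
for t in (0, 10, 20, 30):
    print(t, round(logistic_share(t), 3))
```

In the papers the judges describe, the carrying capacity would be the U.S. population and the growth rate would be estimated from published reports on cellphone adoption; the SIR and predator-prey variants replace this single equation with coupled ones for landline and cellphone populations.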
Many papers ignored the transition phase and considered only the energy comparison for the steady state in order to determine the energy consequence. Some teams merely talked their way through the issues and did not construct a mathematical model. After estimating energy costs associated with landline phones and cellphones, many teams used linear equations to model the total costs associated with the numbers of phones.

# Requirement 2

Teams were asked to consider a "Pseudo U.S."—a country similar to the current U.S. in population and economic status, but with neither landlines nor cellphones. They were to determine the optimal way, from an energy perspective, to provide phone service to this country. The teams were also to take into account the social advantages that cellphones offer and the broad consequences of having only landlines or only cellphones.

Once again, consideration of the infrastructure for phones was important. In addition to landline phones and cellphones, many teams considered the VoIP (Voice over Internet Protocol) communication option. Not every team that considered VoIP took into account the costs for laying the cables; some assumed that such cables were already in place (a questionable assumption). However, failure to consider the VoIP method of phone service may have kept a Meritorious paper from becoming an Outstanding paper. If one were to assume that households would already have one or more computers with Internet access, the energy costs associated with VoIP would be quite small.

In terms of finding the optimal way to provide phone service from an energy perspective, some teams used linear programming, using the costs determined in Requirement 1 and quantifying in various ways the social advantages of cellphones, as well as the preference for landlines in certain circumstances.
Other teams used AHP (Analytic Hierarchy Process), which worked well to get parameters used in the optimization routine but did not work as an optimization tool in itself. Teams that considered the advantages and disadvantages of various phone types not just for individuals, but for businesses also, demonstrated a thoroughness that was commendable. Another factor that some teams considered was the number of children under 5 who would have no need for cellphones.

# Requirement 3

This was an extension of Requirement 2, asking teams to consider the electricity wasted when cellphones are plugged in that do not need charging and when chargers are left plugged in after the phone is removed. Teams were to continue to assume that they were in the Pseudo U.S. and were to interpret energy wasted in terms of barrels of oil used.

To determine the amount of energy wasted, teams first had to estimate the number of hours that a "typical" cellphone charger is in the state of charging a phone, left plugged into a phone not in need of charging, and left plugged in when the phone is not connected to it. Some teams did their own informal surveys, but better estimates were arrived at from publications and surveys.

In some papers, probability distributions were used to describe this behavior, but use of such distributions was not always justified.

Teams that were more comprehensive took into account the fact that some cellphones and chargers use less power than do others, based on brands, age, and capabilities of the phones and chargers. Assuming that all electrical energy is generated by oil, translating kilowatt-hours of energy into barrels of oil used was a straightforward transformation.

# Requirement 4

This requirement extended the concepts in Requirement 3 and asked teams to estimate the amount of energy wasted by all electronic chargers.
Since this question was very open-ended, contest papers showed a wide variety of estimates for the energy wasted in terms of barrels of oil. The top teams estimated the average hours that appliances are left plugged in, charging and not charging, and also the number of hours that chargers are plugged in without the appliance. + +More-comprehensive papers considered many other kinds of electronic devices and by comparison showed that the amount of energy wasted by cellphones is relatively small. + +# Requirement 5 + +For this part, students were to consider the population and economic growth of the Pseudo U.S. for the next 50 years and predict energy needs for providing phone service based on their analysis in the first three Requirements. Predictions were to be interpreted in terms of barrels of oil used. + +Papers needed to consider both economic growth and population growth in order to estimate energy needs in the future. Various types of regression fits were applied to historical population data and economic data such as GDP. Using earlier estimates of energy requirements, coupled with the regression equations from historical data, predictions were made for the amount of energy used every decade for the next 50 years. Some teams predicted greater efficiency in future phones and the development of chargers that would use less electricity. + +Papers showed estimates for the number of barrels of oil used on a per-day basis or perhaps on a per-year basis. There was no one right answer, and answers given depended on assumptions made at the start. Some papers contained graphs displaying future values but did not give tables. A table together with a graph is a better way to display information. + +# Concluding Remarks + +Mathematical modeling is an art that requires considerable skill and practice in order to develop proficiency. 
The big problems that we face now and in the future will be solved in large part by those with the talent, the insight, and the will to model these real-world problems and continuously refine those models.

The judges are very proud of all participants in this Mathematical Contest in Modeling, and we commend you for your hard work and dedication.

# About the Author

Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she has taught for more than 30 years. She was also a visiting professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. While in California, she co-directed MAA Tensor Foundation grants on Preparing Women for Mathematical Modeling, a program encouraging more high school girls to select careers involving mathematics, and was also active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project. Marie serves as a member of the Engineering Advisory Board at Carroll College, is on the advisory board for the Montana Learning Center for mathematics and science education, and is a judge for both the MCM and HiMCM COMAP contests.

# Judges' Commentary:

# The Fusaro Award for the Cellphone Energy Problem

Marie Vanisko

Dept. of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mvanisko@carroll.edu

Peter Anspach

National Security Agency

Ft. Meade, MD

anspach@aol.com

# Introduction

MCM Founding Director Fusaro attributes the competition's popularity in part to the challenge of working on practical problems. "Students generally like a challenge and probably are attracted by the opportunity, for perhaps the first time in their mathematical lives, to work as a team on a realistic applied problem," he says. The most important aspect of the MCM is the impact that it has on its participants, and, as Fusaro puts it, "the confidence that this experience engenders."
The Ben Fusaro Award for the 2009 Cellphone Energy problem went to a team from Lawrence Technological University in Southfield, MI. This solution paper was among the top Meritorious papers and exemplified some outstanding characteristics:

- It presented a high-quality application of the complete modeling process.
- It demonstrated noteworthy originality and creativity in the modeling effort to solve the problem as given.
- It was well written, in a clear expository style, making it a pleasure to read.

# The Problem: Energy Consequences of the Cellphone Revolution

The Cellphone Energy Problem involved many facets of the "energy" consequences of replacing landlines with cellphones, and five Requirements were delineated. Teams had to address issues raised in each of the five Requirements. Additionally, the best papers considered both wireless and wired landlines and the infrastructure to support cellphones and landlines.

# The Lawrence Technological University Paper

# Assumptions

The team began with a page of assumptions, most of which were well-founded and enabled them to determine parameters in their models. However, certain assumptions were unrealistic, and these led to results that did not reflect the real-world situation. In particular, in the eyes of the judges, assuming that all landline phones are cordless was a serious shortcoming that greatly impacted the issue of energy use. Furthermore, while the team did address the issue of infrastructure, the assumption that infrastructure for cellphones is equal to that for landline phones seemed to ignore the need for the large number of additional communication towers if cellphones were to replace landlines.
# Requirement 1: Mathematical Formulation for the Transition

In Requirement 1, teams were to consider the energy consequences, in terms of electricity utilization, of a complete transition from landline phones to cellphones, with the understanding that each member of each household would get a cellphone. The Lawrence Tech team shone in mathematically modeling this transition! For their first model representing the transition from landlines to cellphones, the team used the basic logistic differential equation to model the rate of change in the number of cellphones over time. They used the total population as the carrying capacity and determined the intrinsic rate of growth of cellphones from published results. This was very well done, though references for the tables and graphs should have been included. The second model introduced was a predator-prey system of differential equations, and the team is to be commended on their clear statement of rationale for using this model, with cellphones causing the demise of landlines. However, this model quickly became complicated, so they headed "down a different route." And, once again, their rationale for the equations used and parameters arrived at was commendable.

In modeling the energy used by cellphones, the team considered three distinct models of cellphones and did a good job of researching the habits of individuals of different ages regarding talking times. The assumption they made that the average number of calls is directly related to the talk time per call might be questionable, but this was not considered a serious deficiency and it enabled them to estimate needed parameters in their model.

# Requirements 2, 3, 4, and 5

Documented sources were used to estimate energy used for charging batteries, and these were translated into barrels of oil used. Energy usage comparisons were demonstrated for landline cordless phones and cellphones.
This was taken forward into Requirement 2, and they seemed to conclude that the optimal mix of landline phones and cellphones would be the state where the same amount of energy was used by cordless landline phones as by cellphones.

For Requirement 3, after gathering data on energy consumption by phone chargers, the team demonstrated an interesting comparison of the energy consumed by daily vs. weekly charging, with the charger left plugged in or not, and from this they estimated the long-term consequences of avoiding wasteful practices in the charging of cellphones. The team introduced a percentage comparison of energy wasted by various charging methods.

Requirement 4 extended the concepts in Requirement 3 and asked teams to estimate the amount of energy wasted by all idle electronic appliances. Since this question was very open-ended, contest papers showed a wide variety of estimates for the energy wasted. The Lawrence Tech team limited themselves to the average hours that computers, televisions, DVD players/VCRs, and game consoles are left plugged in and the resulting annual oil consumption from such wasteful practices. A linear pattern of growth was projected up to 2059. More-comprehensive papers considered many more electronics and, by comparison, showed that the amount of energy wasted by cellphones is relatively small compared to many other electronic devices. Thus, when the team referred to cellphones as the "most energy consuming devices" in the Executive Summary, judges questioned the credibility of the paper.

For Requirement 5, students were to consider the population and economic growth of a Pseudo U.S. for the next 50 years and predict energy needs for providing phone service based on their analysis in the first three Requirements. Predictions were to be interpreted in terms of barrels of oil used. To their credit, the Lawrence Tech team had numerous appendices with data tables (but again without references).
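The "barrels of oil" interpretation that the Requirements call for rests on a unit conversion like the one sketched below. The energy content, plant efficiency, and usage figures are all order-of-magnitude assumptions, not values from the team's paper:

```python
# Converting electricity use into "barrels of oil," as the Requirements ask.
# The constants below are order-of-magnitude assumptions, not contest data.
BTU_PER_BARREL = 5.8e6    # approximate energy content of a barrel of crude oil
BTU_PER_KWH = 3412.0      # thermal equivalent of one kilowatt-hour
PLANT_EFFICIENCY = 0.35   # assumed oil-to-electricity conversion efficiency

def barrels_for_kwh(kwh):
    """Barrels of oil burned at the plant to deliver `kwh` of electricity."""
    thermal_btu = kwh * BTU_PER_KWH / PLANT_EFFICIENCY
    return thermal_btu / BTU_PER_BARREL

# Example: 300 million phones drawing an average of 0.5 W around the clock.
kwh_per_year = 300e6 * 0.5 / 1000 * 24 * 365
print(round(barrels_for_kwh(kwh_per_year)))  # on the order of 2 million barrels/year
```

The conversion itself is straightforward; the modeling effort lies in justifying the wattage and usage-hour estimates that feed into it.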
+ +# Recognizing Limitations of the Model + +Recognizing the limitations of a model is an important last step in the completion of the modeling process. The students recognized that their model failed to look at technological changes, including advances in battery and cellphone technology. They also acknowledged that assuming that every member of a population has a cellphone puts cellphones into the hands of infants and ignores the fact that some individuals have more than one cellphone. + +# Conclusion + +Although there were some deficiencies in Requirements 2-5, the quality of the mathematical modeling done in Requirement 1, coupled with the excellent use of resources to answer the questions posed throughout, made the Lawrence Technological University paper one that the judges felt was worthy of the Meritorious designation. The team is to be congratulated on their analysis, their clarity, and their use of the mathematics that they knew to create and justify their own model for the cellphone revolution problem. + +# About the Authors + +Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she has taught for more than 30 years. She was also a visiting professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. While in California, she co-directed MAA Tensor Foundation grants on Preparing Women for Mathematical Modeling, a program encouraging more high school girls to select careers involving mathematics, and was also active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project. Marie serves as a member of the Engineering Advisory Board at Carroll College, is on the advisory board for the Montana Learning Center for mathematics and science education, and is a judge for both the MCM and HiMCM COMAP contests. + +Peter Anspach was born and raised in the Chicago area. He graduated from Amherst College, then went on to get a Ph.D. 
in Mathematics from the University of Chicago. After a post-doc at the University of Oklahoma, he joined the National Security Agency to work as a mathematician. + +# Statement of Ownership, Management, and Circulation (All Periodicals Publications Except Requester Publications) + +
1. Publication Title: The UMAP Journal
2. Publication Number: 0197-3622
3. Filing Date: 9/5/2009
4. Issue Frequency: Quarterly
5. Number of Issues Published Annually: Four (4)
6. Annual Subscription Price: $125
7. Complete Mailing Address of Known Office of Publication (Not printer): COMAP, Inc., 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730. Contact Person: John Tomicek. Telephone: 781/862-7878 x130
8. Complete Mailing Address of Headquarters or General Business Office of Publisher (Not printer): Same as above
9. Full Names and Complete Mailing Addresses of Publisher, Editor, and Managing Editor:
   - Publisher: Solomon Garfunkel, 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730
   - Editor: Paul Campbell, 700 College St., Beloit, WI 53511
   - Managing Editor: Joyce Barnes, 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730
10. Owner: Consortium for Mathematics and Its Applications, Inc. (COMAP, Inc.), 175 Middlesex Tpk., Suite 3B, Bedford, MA 01730
11. Known Bondholders, Mortgagees, and Other Security Holders Owning or Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities: None
12. Tax Status (For completion by nonprofit organizations authorized to mail at nonprofit rates): The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes:
    - □ Has Not Changed During Preceding 12 Months
    - □ Has Changed During Preceding 12 Months (Publisher must submit explanation of change with this statement)
13. Publication Title: The UMAP Journal
14. Issue Date for Circulation Data Below: Sept 5, 2009

| 15. Extent and Nature of Circulation | Average No. Copies Each Issue During Preceding 12 Months | No. Copies of Single Issue Published Nearest to Filing Date |
| --- | --- | --- |
| a. Total Number of Copies (Net press run) | 600 | 600 |
| b. Paid Circulation (By Mail and Outside the Mail): (1) Mailed Outside-County Paid Subscriptions Stated on PS Form 3541 | 507 | 474 |
| (2) Mailed In-County Paid Subscriptions Stated on PS Form 3541 | 0 | 0 |
| (3) Paid Distribution Outside the Mails Including Sales Through Dealers and Carriers, Street Vendors, Counter Sales, and Other Paid Distribution Outside USPS | 50 | 50 |
| (4) Paid Distribution by Other Classes of Mail Through the USPS (e.g., First-Class Mail) | 0 | 0 |
| c. Total Paid Distribution (Sum of 15b (1), (2), (3), and (4)) | 557 | 524 |
| d. Free or Nominal Rate Distribution (By Mail and Outside the Mail): (1) Free or Nominal Rate Outside-County Copies Included on PS Form 3541 | 0 | 0 |
| (2) Free or Nominal Rate In-County Copies Included on PS Form 3541 | 20 | 38 |
| (3) Free or Nominal Rate Copies Mailed at Other Classes Through the USPS (e.g., First-Class Mail) | 0 | 0 |
| (4) Free or Nominal Rate Distribution Outside the Mail (Carriers or other means) | 0 | 0 |
| e. Total Free or Nominal Rate Distribution (Sum of 15d (1), (2), (3), and (4)) | 20 | 38 |
| f. Total Distribution (Sum of 15c and 15e) | 577 | 562 |
| g. Copies not Distributed (See Instructions to Publishers #4, page #3) | 23 | 38 |
| h. Total (Sum of 15f and g) | 600 | 600 |
| i. Percent Paid (15c divided by 15f times 100) | 97 | 93 |
16. Publication of Statement of Ownership: If the publication is a general publication, publication of this statement is required. Will be printed in the Third issue of this publication. (□ Publication not required.)
17. Signature and Title of Editor, Publisher, Business Manager, or Owner: [signature]. Date: Sept 5, 2009
+ +I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subject to criminal sanctions (including fines and imprisonment) and/or civil sanctions (including civil penalties). \ No newline at end of file diff --git a/MCM/2009/A/2009-MCM-A-Com/2009-MCM-A-Com.md b/MCM/2009/A/2009-MCM-A-Com/2009-MCM-A-Com.md new file mode 100644 index 0000000000000000000000000000000000000000..b852cc208c7351abba1b512f3071c6636f0134c1 --- /dev/null +++ b/MCM/2009/A/2009-MCM-A-Com/2009-MCM-A-Com.md @@ -0,0 +1,149 @@ +# Judge's Commentary: The Outstanding Traffic Circle Papers + +Kelly Black + +Dept. of Mathematics + +Clarkson University + +P.O.Box 5815 + +Potsdam, NY 13699-5815 + +kjblack@gmail.com + +# Overview of the Problem + +Teams who decided to explore the "A" problem in this year's Mathematical Contest in Modeling examined ways to control the movement of vehicles in a traffic circle. A broad overview of the criteria developed by the judges and the experiences of the judges is given. + +In the following section, a brief overview of the problem statement is explored. Next, an overview of the judging itself is given. In the subsequent section, a list of some of the common approaches adopted by the teams is given. Finally, a list of some of the common themes and more detailed points that emerged as the judging proceeded is given. + +# Traffic Circles + +The focus on the "A" problem is to control the movement of vehicles in a traffic circle. A number of controls are explicitly given in the problem statement. The teams who submitted papers for this problem mainly focused on the given controls and very few examined other types of controls. + +The problem statement includes two requirements. First, the teams were asked to find a way to control the flow of traffic in an optimal way. 
Second, the teams were asked to write a summary of their findings. These two aspects are explored individually in the subsections that follow.

# The Goal

The goal for this problem is to find a way to move vehicles through a traffic circle in an optimal way. This was stated in the second paragraph of the problem statement:

The goal of this problem is to use a model to determine how best to control traffic flow in, around, and out of a circle.

It is not clear what "best" means. It was left open for the teams to decide what "best" means. The teams were required to make it clear in their report how they interpreted this part of the problem:

State clearly the objective(s) you use in your model for making the optimal choice as well as the factors that affect this choice.

The judges expected the teams to clearly describe the objectives, and we expected that the subsequent evaluation of the model be consistent with the stated objectives. This can be difficult for the teams to achieve given the dynamic of writing as a team, the nature of how approaches evolve as the problem is explored, and the intense time pressure. Teams that managed to maintain a high level of consistency tended to elicit a more-positive response from the judges.

# Technical Summary

An essential requirement was to write a technical summary. The requirements for the technical summary were given in the problem statement. This was a difficult aspect of the problem. The teams were expected to provide a broad set of guidelines for a traffic engineer in a brief note.

The traffic engineer should be able to read the summary and have a strong sense of the different methods available. Additionally, the different circumstances that impact the decision should also be included. Examples of important parameters are the radius or geometry of the circle, the rate of flow of traffic coming into the circle, and the density of traffic coming into the circle.
Very few teams considered the traffic capacity of roads leaving the circle, and most assumed that the incoming traffic was a primary limiting factor. + +The traffic engineer is also expected to obtain a broad understanding of the conditions for which the model is applicable. This implies that the engineer should be able to read the summary and obtain a basic understanding of how the model was developed and an understanding of the potential pitfalls. + +Writing the summary was a difficult task for the teams. The teams had a diverse amount of information to convey in two pages. The teams that managed to convey a sense of the basic models, the underlying assumptions, and the limitations of their models tended to make a stronger impression. + +# Grading Process + +First, a brief overview of the evaluation process is given. The papers are evaluated in three stages. There is an initial round where the focus is on which papers to remove from the pool. The second, or screening round, focuses on which papers meet the minimal requirements for an advanced score. In the final round, the judges focus on which papers meet the highest standards. + +# Initial Grading + +The initial round is designed to remove papers from the pool that are not likely to meet the standards in the following round. Each paper is read by at least two people. Papers that receive consistent low scores are not passed on to the next round. Papers with mixed reviews are read by more people. When the reviewers are unsure, they try to err on the side of caution and pass the paper on to the next round. + +It is absolutely essential that a paper be well-written and have a clear, concise summary to make it past the initial round. A paper that does not provide a clear overview including results and a synopsis of the techniques used will not make a strong impression on the judges. The summary and the rest of the paper must also be consistent. 
Differences between the summary and the following pages can be immediately apparent and do not leave a positive impression of the paper.

# Screening Rounds

As the judges examine papers in the next set of rounds, they try to decide if the paper meets the minimal requirements to do well in the following rounds. The number of times that a paper is read in these rounds varies from year to year. Again the judges try to err on the side of caution; but as the rounds proceed, the criteria for doing well become increasingly stringent.

It is still important to have a strong summary, but the need for consistency across the whole paper is more important. The need for proper citations and correct grammar is also important. This year, a large body of literature was available for the teams. It was even more important than usual to include proper citations and make clear what work was done by the team and what work was found in the literature search.

# Final Rounds

In the final rounds of judging, the focus is on finding the best submission. At this point, each paper is read many times, and more time is available for each reading. The judges are able to focus more on each individual step and focus on consistency across the whole paper. The papers that remain in these final stages must maintain high scores to move forward.

# Approaches

The flow of traffic in roundabouts is an active research area. The available literature influenced many of the teams. Most teams used either a deterministic approach or a stochastic approach. Here we examine each of these approaches separately.

# Deterministic

The teams that adopted a deterministic approach tended to make greater use of models based on partial differential equations. There are a variety of different conservation laws that have been derived to model traffic flow. Such models tend to focus on relatively simple traffic geometries and require considerable adaptations to model a traffic circle.
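A minimal instance of this approach, under invented constants, is the Lighthill-Whitham-Richards model rho_t + (rho*v(rho))_x = 0 with the Greenshields speed v = v_max(1 - rho/rho_max), advanced on a closed ring by a Lax-Friedrichs step. Real entries had to graft entrances and exits onto this bare picture:

```python
import math

# LWR traffic model on a closed ring road, Greenshields flux, advanced by
# a Lax-Friedrichs step. All constants are illustrative assumptions; this
# is a sketch of the approach, not any team's implementation.
RHO_MAX, V_MAX = 1.0, 1.0    # normalized jam density and free-flow speed
N, LENGTH = 200, 1.0         # grid cells and ring circumference
DX = LENGTH / N
DT = 0.4 * DX / V_MAX        # CFL-limited time step

def flux(rho):
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def lax_friedrichs_step(rho):
    # Periodic indexing encodes the ring geometry (no boundary conditions).
    n = len(rho)
    return [0.5 * (rho[(i + 1) % n] + rho[i - 1])
            - DT / (2 * DX) * (flux(rho[(i + 1) % n]) - flux(rho[i - 1]))
            for i in range(n)]

# Initial condition: light traffic with one dense platoon of cars.
rho = [0.3 + 0.4 * math.exp(-200.0 * (i * DX - 0.5) ** 2) for i in range(N)]
cars_initially = sum(rho) * DX
for _ in range(500):
    rho = lax_friedrichs_step(rho)
# On the closed ring the total number of cars is conserved.
print(abs(sum(rho) * DX - cars_initially) < 1e-9)
```

Even this toy version shows why the approach is delicate: the scheme must be conservative and respect the CFL condition, and any entrance or exit breaks the clean periodic structure.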
At first glance, a conservation law for a traffic circle seems to avoid the issues associated with boundary conditions because it is a periodic geometry. Unfortunately, the exits and entrances of the feeder roads create other difficulties. Adapting models to include the exits and entrances occupied the majority of the modeling efforts.

The second difficulty with this approach is to find an approximation to the solution. The equilibrium solutions to the equations are piecewise-constant functions, and the conservation law gives rise to shocks. Given the complex boundaries, the method of characteristics is complicated, and the numerical approximations can be daunting since the techniques must account for upwinding.

# Stochastic

The majority of teams used a stochastic approach. In general, they examined either queues or networks, and a common approach was to use a hybrid model combining the two. A typical paper included an overview of the model, some theoretical results for a simple situation, and results for a computational model.

Teams adopting this approach were expected to use proper citations because of the wide body of work available. The judges also paid more attention to the consistency across the whole paper. The summary, model, results, and discussion had to be consistent.

Another issue that emerged with some papers is the disconnect between the section in the paper discussing the theory and that with the numerical simulations. Many of the top-rated papers provided some theoretical results for simplistic geometries or simulations. The majority of these went on to include the results of numerical simulations for the more complicated cases. The few teams that provided a confirmation of the numerical model on a simple geometry made an immediate positive impression.

The other issue is how to report the results of simulations in a coherent manner. The development of the model requires a probabilistic approach.
The analysis of the numerical trials requires a shift to a statistical approach. The majority of teams simply reported means and sometimes standard deviations. Few teams reported results using qualitative methods such as boxplots or histograms, and even fewer teams made use of appropriate quantitative statistical methods.

Finally, when designing the numerical trials, few teams examined a range of values for the parameters in their models. Every year, the judges rate this aspect of the problem as a crucial part of the problem. We expect to see an exploration of the results given small changes in parameters or assumptions. The few teams that did examine this aspect immediately caught the judges' attention.

# Common Themes

In the previous section, some observations specific to this year's competition are given. Some general observations that come up every year are explored here.

# Summary

The summary is an important part of the team's entry. It is the first thing that a judge will read. The summary is the first impression. It is vital that a paper have a complete and well-written summary to make it past the initial rounds. It is also vital that the details in the summary be consistent with the rest of the paper.

Writing a one-page summary of the team's efforts is a difficult task. The teams are expected to provide a brief overview of the problem. They are then expected to let the reader know their specific conclusions and recommendations. Finally, the teams are expected to provide the reader with an overview of the approach that they used.

It is difficult to include all three of these parts within the one-page summary. Many teams find it tempting to include a large amount of background information or provide clever narratives motivating the problem. Unfortunately, such material in the summary can drastically reduce the amount of space available to discuss the team's results and discussion of the approach that they adopted.
# Grammar, Punctuation, and Equations

The presentation of the team's model and results cannot be separated from the model itself. A team must have a reasonable model including a basic analysis of the model. The teams are expected to then share their results in a clear and concise discussion.

Teams that do not make use of proper grammar and punctuation are not likely to make it past the initial rounds of the competition. Teams must know how to include equations in their writing and use proper punctuation. Advisers should not take it for granted that their students know how to do these things.

# Proper Citations

The judges expect every entry to include proper citations. Many teams are comfortable exploring the resources available to them, and it is unusual to come across an entry with a unique approach. The different types of approaches can be easily categorized, and the judges quickly figure out the sources available for each approach.

# Sensitivity and Stability

Sensitivity and stability are always important. The few teams that make a concerted effort to explore this aspect of their model will almost always stand out. Exploration of the sensitivity of a model can be as simple as testing what happens for a different range of values in a parameter, and it can include the use of more sophisticated methodologies such as an exploration of a sensitivity matrix.

Every year, teams are able to implement nontrivial numerical simulations. The teams must make decisions about what numerical trials to examine. It is extremely rare for teams to scale a problem as a way to decide the combination of parameters that are important.

# Figures and Tables

The integration of graphs and tables into a paper is a challenge for many teams. It is not uncommon to see entries in which figures and tables are included with no detailed discussion of them. The teams need to integrate the figures and tables into their discussion.
Given the increased use of simulations and numerical results, it is vital that the teams find a way to incorporate descriptions of their figures and tables into their narrative. The teams need to make sure to let their readers know the key aspects of their figures and tables and inform their readers how to look at the figures and tables.

# Consistency Across the Paper

The teams have a limited time to understand the problem, derive a mathematical description of the problem, perform the requisite analysis of their model, and then come back and interpret their work with respect to the original context. Over the course of the weekend, teams make decisions and explore a variety of different approaches. The time constraints make it extremely difficult to complete a paper in which the wide array of assumptions and analyses are consistent across the whole paper.

# Conclusions

A team's submission must satisfy a wide array of criteria to be successful and proceed through each stage of the judging. The presentation and grammar are vital aspects of a submission. The team's results are given through the filter of the team's writing.

The team must provide a strong analysis. The teams only have four days, and the judges do not expect extensive and sophisticated models. A careful analysis of the resulting model is required, though.

Each year, the expectations are different, but there are a few constants. For example, a clear discussion of the basic assumptions—with some justification, citations, and a discussion of the implications—is necessary. Additionally, judges always expect a focused discussion on stability and sensitivity.

In this year's competition, the use of simulation was a part of the majority of entries. Incorporating an analysis of simulations is a difficult task, and the top entries did a remarkable job of integrating the development and analysis of their model with the discussion of the results of their numerical trials.
Teams that were able to tie the theoretical analysis of their model together with their numerical trials received immediate positive recognition. The best entries developed multiple models of varying complexity and verified their numerical models against the theoretical results of the simpler models.

# About the Author

Kelly Black is a faculty member in the Dept. of Mathematics and Computer Science at Clarkson University. He received his undergraduate degree in Mathematics and Computer Science from Rose-Hulman Institute of Technology and his Master's and a Ph.D. from the Applied Mathematics program at Brown University. He has wide-ranging research interests, including laser simulations, ecology, and spectral methods for the approximation of partial differential equations.

# Team Control Number

For office use only

T1
T2
T3
T4

# 4339

Problem Chosen

A

For office use only
F1
F2
F3
F4

# 2009 Mathematical Contest in Modeling (MCM) Summary Sheet

(Attach a copy of this page to each copy of your solution paper.)

# Three steps to make the traffic circle go round

Among all the solutions for maneuvering vehicles at intersections is the traffic circle, designed by a French architect as early as 1877. Nowadays, with growing populations, reconfigurations of the traffic control devices at traffic circles are urgently needed. Setting physical reconstruction aside, we model how to best control the traffic with add-ons: signal lights, stop/yield signs and orientation signs – a special sign designed in this article.
First of all, we create two models - a macroscopic one and a microscopic one - to simulate transportation at traffic circles, assuming the configuration of add-ons is given. The former treats the problem as a Markov chain, solved through expected values rather than the full transition matrix, while the latter simulates the traffic vehicle by vehicle - a "cellular-automata-like" model.

Secondly, we introduce a multi-objective function to evaluate the control. Saturated capacity, average delay, equity degree, accident rate and device cost are taken into account as five aspects and combined into a single cost measured in US dollars per hour.

Thirdly, we analyze the optimization problem of how to best choose the add-ons. Three steps are used to make the traffic circle go round:

1) basic devices like lights and signs are optimally placed;
2) orientation signs to lead vehicles into proper lanes are optimally set up;
3) the system is endowed with self-adaptivity, allowing the control to adjust automatically to different traffic demands.

Throughout this article the 6-arm, 3-lane Sheriffhall Roundabout in Scotland is examined, and detailed suggestions for the traffic control in this circle are given. Lights are assigned a 68-second cycle, and a sample orientation sign is given in the figure to the right.

![](images/f53d3c4e2d0a4e013f6640254559375030c6680924036537c0282803d31962be.jpg)

![](images/8d75122a972812ef4f62c82db6c632ae872d23b00a875284b31c88389a7d3b9a.jpg)

![](images/e751adb2774a818e528880b95603e67b77edefeffa4fd775bb7150bc04aca058.jpg)

Some smaller and larger dummy circles have been tested to verify the model's strength and sensitivity. Emergency cases have also been examined to judge its flexibility.
# Three Steps to Make the Traffic Circle Go Round

Team #4339

February $10^{\text{th}}$ , 2009

# Contents

Introduction

Assumption

Terminology & Basic Analysis

Terminology

A Glance at Sheriffhall Roundabout

Simulation Model

Model I - The Macroscopic Simulation

Model II - The Microscopic Simulation

Comparison of Two Simulation Models and Sensitivity Analysis

The Multi-Objective Function

Basic Standards

How are the Objectives Affected

The Combined Objective: The Money We Lost

Application: Evaluate Typical Arrangements

Optimization Model

The All-Purpose Solution

Step 1 - Basic Device Placement & Timestamp Chosen

Step 2 - Orientation Sign Placement

Step 3 - Time Variance & Self Adaptivity

Verification of Optimization Model

The Circle is In Work

Accuracy

Sensitivity

Emergency Case

The Technical Summary

Conclusion

Reference

# Introduction

Traffic control assures the safety and efficiency of transportation. The traffic circle has proven to be a decent solution for traffic flow passing through a node - either a busy interchange or a small-scale crossroads. Should a traffic circle be accompanied by appropriate auxiliary devices, the whole control system could perform even better.

To derive traffic factors from data, we develop two different models to simulate the flow. The macro-model uses a Markov process to move vehicles between junctions, while the micro-model concentrates on the behavior of each vehicle with a modified cellular automata algorithm. The outcomes of these two approaches show great consistency when applied to a real scenario in Scotland.

At the same time, we must make concrete the abstract notion of a "good" control method.
The annoyance of delay, the threat of accidents and other relevant aspects serve as separate objectives within the total effectiveness and make a thorough criterion complicated. In the end we choose five main objectives and combine them into an overall measure called the combined expense.

In our pursuit of optimization, a genetic algorithm is employed to generate the final control method. It not only determines the green light periods; orientation signs are also employed to direct vehicles into suitable lanes in order to save time. Moreover, the complexity of the problem itself leaves some uncertainty in the results. We also consider the ability to deal with unexpected events like accidents or car breakdowns, which offers further insight into the quality of the result.

We eventually come up with a concise technical summary explaining how our model works and giving the answers under typical conditions.

# Assumption

- The geometric design of the traffic circle cannot be changed.
- The traffic circle is a standard one (at grade) with all lanes on the ground, that is, no grade separation structure.
- The incoming vehicle flow in the period we study (e.g. 24 hours) is known.
- People drive on the left (since the example analyzed later is from the UK).
- Since we are considering the traffic flow, pedestrians are ignored. In fact, the model can be easily modified to take this factor into account.
- Motorcycles move freely even in a traffic jam and are not taken into consideration.

# Terminology & Basic Analysis

# Terminology

As we define the basic terms below, the relevant discussion is presented alongside where necessary:

- Junction: the intersection where vehicles flow in and out of the traffic circle.
- Lane: part of the road for the movement of a single line of vehicles.

The number of lanes directly affects the flux through a traffic circle by limiting the entry and exit of vehicles.
However, since both the conventional design and real-time photos suggest that vehicles usually exit a traffic circle easily, our model neglects the restriction on outward flow.

- $l_0$ : the number of lanes of the traffic circle.
- Section: part of the traffic circle between two adjacent arms.
- Yield/Stop sign: a yield sign asks drivers to slow down and give the right of way to vehicles in the other stream. A stop sign asks drivers to come to a full stop before merging into the flow.
- Orientation sign: a signal indicating the lane for vehicles to take according to their destination.
- Traffic light: a signaling device using different colors of light to indicate the moment to stop or move.

It is claimed that a traffic light with direction arrows performs much better [1], so we are inclined to use this kind of traffic light in our model.

Moreover, the function of traffic lights remains a controversial issue. Compared with yield/stop signs, traffic lights slow down vehicle movement. At the same time, however, even at a remote motorway traffic circle with few pedestrians, a malfunctioning traffic light will probably lead to an accident [2]. The use of traffic lights will be discussed in detail later.

- Cycle period: the time in which a traffic light passes through all three colors exactly once.

An optimal cycle period is critical whenever traffic lights are employed. The method we use is Webster's equation [3]; the value calculated for our model is 68 seconds.

- Green light period: the time that a traffic light stays green in one cycle.
- Timestamps: a sequence of times denoting the starts/ends of the red/green phases.

# A Glance at Sheriffhall Roundabout

We here take an early look at the Sheriffhall Roundabout, the traffic circle to which our model is applied later. This traffic circle is relatively large among those for which sufficient data are available.
![](images/6aa75996ebcc55ac7b63e80f006a9a78adba0cb2e3ea19f2a76c3dca75a4b77e.jpg)
Figure 1: The Sheriffhall Roundabout

One characteristic of this traffic circle is that the arms in the southwest (6)-northeast (3) direction carry larger flows than the others. The north (2) and south (5) arms have two lanes, while the other four arms and the circle itself have three lanes. The traffic circle is modeled as a ring with an inner radius of $38.79\mathrm{m}$ and an outer radius of $50.43\mathrm{m}$ .

The discussion in the following sections will use the origin-destination flows (Table 1) given by Xiaoguang Yang, et al. [4]. Since the current traffic demand is far from saturation (discussed later), we will also experiment with scaled versions of this inflow matrix, namely its multiples of 1.2, 1.4, 1.6 and 1.8.
| From\To | 1   | 2   | 3   | 4   | 5  | 6    |
| ------- | --- | --- | --- | --- | -- | ---- |
| 1       | -   | 0   | 0   | 188 | 77 | 67   |
| 2       | 0   | -   | 83  | 74  | 19 | 5    |
| 3       | 2   | 0   | -   | 119 | 79 | 1007 |
| 4       | 338 | 129 | 63  | -   | 0  | 208  |
| 5       | 116 | 124 | 142 | 0   | -  | 16   |
| 6       | 90  | 172 | 988 | 236 | 10 | -    |
Table 1: Origin-destination flows (vehicles/hour)

# Simulation Model

The simulation model is the first step in controlling the traffic circle. Here we provide two different approaches. The macroscopic simulation treats vehicles in the aggregate and is deterministic; by contrast, the microscopic simulation traces each individual vehicle and includes randomness.

# Model I - The Macroscopic Simulation

To find the best way to control the traffic circle, we must acquire some information about it. Usually, we do not know how each vehicle behaves - in particular, which way it enters the circle and which way it leaves. When all we know is the number of vehicles coming in and out of the circle through each arm, we adopt the macroscopic simulation.

For simplicity of presentation, we first merge the lanes within the sections and arms and regard them as one-lane roads; we explain afterwards how the multi-lane simulation works.

# Assumptions

- Vehicles in the same section of the circle are located uniformly in the section.
- The arrival rate at each arm is constant over the time period we simulate.
- For simplicity of presentation, we consider an ideal round traffic circle (Figure 2). The macroscopic simulation itself does not depend on the shape of the circle.

![](images/0b9158565b488aba735c63edf838dcf9ac706296347fe5f18355363771f6fe68.jpg)
Figure 2: Sample traffic circle

# Sections and arms

We divide the whole traffic area into sections and treat the vehicles in the same section as a whole. Label the sections and the arms as the figure above shows. Sections carry the following quantities:

- Number of vehicles contained in one section at a specific time: $\mathrm{num}_{\mathrm{i}}^{\mathrm{t}}$ (the superscript denotes time).
- Number of vehicles waiting to enter through one arm at a specific time: $\mathrm{arm}_{\mathrm{i}}^{\mathrm{t}}$ .
- The maximum number of vehicles that can enter the circle through one arm per unit time: $\mathrm{cap}_{\mathrm{i}}$ .

# A Markov Process

The traffic state at time $t + 1$ depends only on the traffic state at time $t$ , so the traffic is a Markov process. To describe the traffic state of the whole system, only the quantities $\mathbf{num}_i^t$ and $\mathbf{arm}_i^t$ are needed. Our task in realizing the simulation is to determine $\mathbf{num}_i^{t+1}$ and $\mathbf{arm}_i^{t+1}$ for $i = 1,2,\ldots,n$ .

Our initial idea was to adopt the classical Markov method: in principle we can calculate the transition probability matrix under the assumptions listed above, and a final steady-state distribution can be obtained in the standard way.

This method does not work in our problem, as can be seen by estimating the dimension of the transition probability matrix: for a traffic circle with four arms/sections, each holding at most ten vehicles, the number of traffic states is already on the order of $10^8$ .

For this reason, we use the expectations $\overline{\mathrm{num}}_i^{\mathrm{t}}$ and $\overline{\mathrm{arm}}_i^{\mathrm{t}}$ instead of the full distribution to denote a state.

# The Simulation Process

![](images/19d18a01650ba4720f8e0b82cd96e702e5336b7038ab4954c1827baa84941cff.jpg)
Figure 3: Flows at a junction

- $\overline{\mathrm{num}}_{\mathrm{i}}^{\mathrm{t}} \times \mathrm{out}_{\mathrm{i}}^{\mathrm{t}}$ vehicles run out of the circle from section i. The ratio $\text{out}_i^t$ drops as $\overline{\text{num}}_i^t$ approaches the section's capacity.

- At the junction, there are two streams, $\overline{\mathrm{num}}_{\mathrm{i}}^{\mathrm{t}} \cdot (1 - \mathrm{out}_{\mathrm{i}})$ and $\mathrm{cap}_{\mathrm{i}}^{\mathrm{t}}$ , trying to flow into the successive section.
If a traffic light is installed then, depending on the current time $t$ , only one of the two streams is allowed through. If a stop/yield sign is used (at the arm side, for example), then only a small fraction of $\mathrm{cap}_{\mathrm{i}}^{\mathrm{t}}$ can flow in; this fraction is the disobey rate $\alpha_{\mathrm{stop}}$ or $\alpha_{\mathrm{yield}}$ .

- An inflow of $\mathrm{in_i}$ newly-arrived vehicles joins arm i.

# Multi-lane traffic circle

By separating the sections into lanes, the simulation is expected to be more accurate. We assume that vehicles do not change lanes within arms and sections, i.e. they can only change lanes at junctions. This is reasonable because most lane transfers happen at junctions.

In order to treat the lanes differently, we need to know what proportion of vehicles passes through each lane; this indicates the relative popularity of the lanes. At each junction, the outflow from a given lane is distributed into the successive lanes according to their popularity.

![](images/a3c81404ffdbe7ca578de4dd705bbf4756bfb48ce88af53394abe2301d93cff8.jpg)
Figure 4: a two-lane circle divided into lanes. Each arc in the right figure denotes a single lane.

# Model II - The Microscopic Simulation

Partially inspired by sequential cellular automata, we adopt a microscopic model. In this model the traffic circle is divided into $l_{0}$ lanes. Vehicles may take any real angular position in polar coordinates, but only discrete radius values (one per lane). We model the behavior of each individual vehicle with the help of some general principles:

- Traffic coming in

As described in Table 1, the number of vehicles per hour is given by a matrix $\left(a_{i,j}\right)_{n\times n}$ . We use a Poisson distribution with mean $\frac{a_{i,j}}{T}$ to describe the incoming vehicles from arm i to arm j.

- Lane choosing and changing

For a specific vehicle from arm i to arm j, the driver has an ideal lane to be in.
The underlying principle is [5]: the more sections the vehicle has to pass before its exit, the farther inside the driver wishes to stay, both in the arm and in the circle. In our simulation we adopt this rule; we discuss it further in the Optimization Model.

- Vehicle speed

We define a maximum velocity $\mathbf{v}_{\mathrm{max}}$ and a maximum acceleration $a_{\mathrm{max}}$ for vehicles, and record each vehicle's velocity individually. The rules for a vehicle to accelerate or decelerate are:

1. When a vehicle faces a red light or other vehicles, its speed drops to zero.
2. When a vehicle changes lanes, it decelerates.
3. Otherwise, it attempts to accelerate.

- The function of a yield sign

When a vehicle faces a yield sign, it checks whether the junction is clear enough for it to enter. If not, it waits until the junction is clear - but with a disobey rate $\alpha_{yield}$ it ignores the sign and tries to squeeze in. This behavior affects the accident rate.

- The function of a stop sign

When a vehicle faces a stop sign, it must stop completely. It then follows a similar procedure as with a yield sign; the only difference is that it accelerates again from zero speed. The disobey rate is $\alpha_{stop}$ .

- The effect of traffic lights: as usual.

Once all of this is specified, we only need to discretize time and apply the rules above to every individual vehicle after it arrives at the circle. We can then simulate the whole process and easily calculate the average passing time per vehicle as well as the accident rate (from the total number of contacts between vehicles).
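The speed rules and Poisson arrivals above can be sketched in a few lines. This is a minimal illustration under our own simplifying assumptions (discrete one-second steps; the class, function names and parameter values are ours, not the team's actual program):

```python
import random

# Illustrative parameter values (ours, not the paper's):
V_MAX = 3           # maximum speed, in cells per time step
A_MAX = 1           # maximum speed change per time step

class Vehicle:
    """One simulated vehicle; only the speed rules are shown here."""

    def __init__(self, origin_arm, dest_arm):
        self.origin_arm = origin_arm
        self.dest_arm = dest_arm
        self.speed = 0

    def update_speed(self, blocked, red_light, changing_lane):
        """Apply the three rules from the text, in priority order."""
        if blocked or red_light:
            self.speed = 0                               # rule 1: stop
        elif changing_lane:
            self.speed = max(0, self.speed - A_MAX)      # rule 2: decelerate
        else:
            self.speed = min(V_MAX, self.speed + A_MAX)  # rule 3: accelerate

def arrivals_this_step(flow_per_hour, steps_per_hour, rng=random):
    """Poisson arrivals with mean a_ij / T per step, drawn by counting
    exponential inter-arrival times that fall inside one step."""
    mean = flow_per_hour / steps_per_hour
    if mean <= 0:
        return 0
    count, t = 0, rng.expovariate(mean)
    while t < 1.0:
        count += 1
        t += rng.expovariate(mean)
    return count
```

A full simulation would loop these two pieces over every vehicle and every time step, which is exactly the per-vehicle bookkeeping the text describes.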
A vivid view of the simulation result is presented below:

![](images/8d9901e77432888993b6feeec7566277a981f337cb00d0cedb92543a825207ec.jpg)
Figure 5: The vehicles around the traffic circle

# Comparison of Two Simulation Models and Sensitivity Analysis

# Results

We use the two different models to simulate a real traffic circle: the Sheriffhall Roundabout in Scotland, with the traffic light configuration given in [4]. For simplicity, we only consider the average time needed for a vehicle to pass the traffic circle. The values for Models I and II are 42.73 s and 41.64 s, respectively. The two results are close enough that it is reasonable to believe the actual passing time is around 42 seconds.

# Sensitivity

We analyze sensitivity by re-running the program with modified parameters. The major variable parameters are listed in the following table.
| Parameter    | Variation | Model I | Model II |
| ------------ | --------- | ------- | -------- |
| $v_{max}$    | +10%      | -2.6%   | -8.5%    |
|              | -10%      | 10.5%   | 11.1%    |
| $l_0$        | +1        | -19.6%  | -16.4%   |
|              | -1        | 121%    | 65.2%    |
| $r_{out}$    | +10%      | -7.3%   | -3.9%    |
|              | -10%      | 1.1%    | 3.3%     |
| Traffic flow | +10%      | 10.6%   | 7.0%     |
|              | -10%      | -3.0%   | -6.7%    |
Table 2: Sensitivity test of the simulation models

The two models give similar sensitivity results. Note that the average passing time is very sensitive to all of the parameters, especially $l_0$ ; this is reasonable, since the number of lanes in the circle affects the passing time significantly.

Model II is a random simulation, which also enables us to estimate the deviation. The standard deviation of the passing time is no larger than $3\%$ in this case.

# Complexity

The time complexity of both models is proportional to the maximum number of vehicles the circle can hold, times the number of iterations needed for the system to settle into periodic behavior. In practice the number of iterations is smaller than 1,000, so the running time is about 1,000 times a linear function of the size of the circle - short enough.

Model I is a little simpler than Model II, since it does not need to trace each individual vehicle; conversely, Model II needs more a priori information than Model I. As the two models are consistent and give similar results, we adopt Model II for further study.

# The Multi-Objective Function

It is complicated to evaluate a traffic control method. Here we list possible metrics and derive a synthetic objective function with a unified measure.

# Basic Standards

We want to include both subjective evaluations (such as the feelings of drivers) and objective measures (such as the expense on devices). Also, the standards here should be computable from available data. Based on the reasons above, we choose the 5 evaluation standards below:

- Saturated flow capacity

When the traffic flow becomes tremendous, vehicles accumulate on the arms as time goes by. The threshold flux that avoids such accumulation is called the saturated flow capacity under a given control method.

- Average delay

A vehicle may suffer various kinds of delay through congestion, acceleration and more.
The difference between the average time a vehicle takes to pass the traffic circle and the time it would take to pass an empty one is called the average delay.

- Equity degree

A multi-arm traffic circle may distribute the incoming flow unequally, annoying the drivers on some arms more than others. The relative deviation of the average delay is called the equity degree.

- Accident expectation

Too much emphasis on speed may mean a potential risk of accidents. Consideration of this threat calls for a measure of accidents per hour, which we name the accident expectation.

- Device cost

The total expense on the traffic signs and lights is called the device cost.

Any standard above may serve as an independent objective to maximize/minimize. Still, we aim to combine them into a synthesized function. Before that, we first look into the factors that affect them.

# How are the Objectives Affected

Generally speaking, the objectives all derive implicitly from the simulation process above. However, it is still worth pointing out some relationships.

# Saturated Flow Capacity

This objective is genuinely complicated: besides the geometric design, any small modification of the control system will change it. Below we estimate the effect of the signal types.

A yield sign is likely to work most effectively, since it seldom causes unnecessary stops. A stop sign, by contrast, adds at least the acceleration/deceleration delay to every vehicle entering. The efficiency of a traffic light is highly related to its green light period: fixed-schedule traffic lights sometimes block vehicles from entering an empty circle, while self-adaptive ones can work according to the road condition.

In fact, a traffic circle with yield signs at all junctions bears the heaviest traffic in the simulations above, and traffic lights are left with great potential for improvement in the optimization.

# Average Delay

First, the average delay is controlled by the incoming flux.
The delay time increases rapidly when traffic starts to congest. The signal arrangement also contributes here.

In our model, the delay time of a vehicle is recorded only when it successfully exits the traffic circle. When this delay time enters the overall objective, there should also be a penalty on the congestion rate, which is calculated from the current flow and the saturated flow capacity.

# Equity Degree

The equity degree is calculated directly from the distribution of delay times, so the factors listed above also affect it. Note that not only the flow distribution but also the total flux contributes to the equity degree, since a high flux may lead to unexpected distribution failures.

# Accident Expectation

The accident expectation is calculated from local accident rates (unit: accidents/vehicle). Besides the disobey rates discussed in the previous section, we also assume that each kind of signal reduces accidents by a respective percentage (these fractions are merged from [1][6][7]). The flux data and signal numbers are once again taken into consideration.

# Device Cost

This expense is the simplest objective, depending only on the number of each kind of signal.

# The Combined Objective: The Money We Lost

Now we come to a combined objective. Governments usually mention expenses and economic losses in their reports, and our model makes a further integration, measured in US dollars. The final result is called the combined expense (CE), which we attempt to minimize.

We begin with the objectives most directly related to money. The prices of traffic control devices are easy to find on the Internet [8][9]. Together with the expense of maintenance and operation, the average cost per hour of each kind of device can be calculated. Since traffic lights consume much electricity, we ignore the running cost of the other types of devices. A traffic light is expected to cost \$0.23/hour according to the available data [10][11].
Accident losses are often reported in the news. We take the average figure from an annual report of the local traffic office as the average loss per accident, \$630 [12]; with the accident rate measured in accidents per vehicle, this gives

$$
\mathrm{AccidentLoss} = \$630 \times \mathrm{AccidentRate} \times \mathrm{Flux}
$$

The average delay time must be paired with a value of time to enter the objective. From the data of the US Department of Transportation and fuel prices [13], about \$1.2 is lost per vehicle per hour of delay:

$$
\mathrm{DelayExpense} = \$1.2 \times \mathrm{Flux} \times \mathrm{AverageDelayTime}
$$

The saturated capacity shows the endurance of the control design as well as its ability to face sudden challenges. The unused part of this capacity provides insurance against extra incoming traffic, and its value is estimated as follows:

$$
\mathrm{CapacityBonus} = 5\% \times \$1.2 \times (\mathrm{SaturatedCapacity} - \mathrm{Flux}) \times \mathrm{AverageDelayTime},
$$

in which $5\%$ is the probability of an unexpected vehicle arriving.

The equity degree (ED) is a tricky component of the determination; current traffic systems often fail to consider this factor. The most annoying situation is to keep two "main arms" open to traffic by sacrificing all the other arms, whose equity degree is estimated as a function of the number of arms $n$ :

$$
\mathrm{ReferenceEquityDegree}\ (\mathrm{RED}) = \sqrt{\frac{n(n-2)}{2(n-1)}}
$$

The equity degree is normalized by this reference and appears as a penalty on the delay expense:

$$
\mathrm{CorrectedDelayExpense} = \mathrm{DelayExpense} \times \left(1 + \frac{\mathrm{ED}}{\mathrm{RED}}\right)
$$

The combined index is then

$$
\mathrm{CE} = \mathrm{CorrectedDelayExpense} - \mathrm{CapacityBonus} + \mathrm{AccidentLoss} + \mathrm{DeviceCost},
$$

which serves as the final objective function used in the following optimization.
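As a consistency check, the formulas above can be assembled into a short routine. This is our own sketch (the function and argument names are ours, not the paper's); the accident loss and device cost are passed in directly in US$/hour.

```python
def combined_expense(flux, saturated_capacity, avg_delay_hours,
                     equity_degree, n_arms, accident_loss, device_cost):
    """Combined expense CE in US$/hour, following the paper's formulas."""
    delay_expense = 1.2 * flux * avg_delay_hours
    capacity_bonus = (0.05 * 1.2
                      * (saturated_capacity - flux) * avg_delay_hours)
    # Reference equity degree: RED = sqrt(n(n-2) / (2(n-1)))
    red = (n_arms * (n_arms - 2) / (2 * (n_arms - 1))) ** 0.5
    corrected_delay = delay_expense * (1 + equity_degree / red)
    return corrected_delay - capacity_bonus + accident_loss + device_cost
```

Plugging in the paper's figures for the original Sheriffhall flow (the Table 1 entries sum to 4352 vehicles/hour; saturated capacity 6904 vehicles/hour, average delay 42.763 s, equity degree 0.3187, accident expectation \$4.63/hour, device cost \$1.38/hour) yields about \$78.99/hour, in agreement with the combined expense reported later in Table 5.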
# Application: Evaluate Typical Arrangements

We take a glance at three simple control methods - pure traffic lights, stop signs only, or yield signs only.

![](images/d029ba8968fa1828209cd736c2a92c99bd8da03cb6f5102936082b376896dde2.jpg)
Figure 6: A view on the 5 objectives of the 3 simple control methods

The 5 objectives are first normalized (values mapped into the interval from 0 to 1, worst to best). A superficial look at this radar chart raises doubts about the expensive traffic lights. However, traffic lights are superior at controlling accidents, while the other two signs may be hazardous precisely because they accelerate the flow. The trade-off becomes clear when we compare the CE values:
| Control Method | Combined Expense (US$/hour) |
| -------------- | --------------------------- |
| Traffic light  | 66.76                       |
| Stop sign      | 103.29                      |
| Yield sign     | 116.61                      |
Table 3: Combined expense for the 3 typical control methods

The results above suggest that traffic lights are worthwhile when heavy traffic is encountered. The optimization, however, requires closer observation in the following part.

# Optimization Model

In the previous sections we demonstrated two distinct methods to predict the transportation behavior in traffic circles. Now we look for techniques that will help maneuver vehicles through the traffic, extracting the best this traffic circle can do.

# The All-Purpose Solution

The multi-objective function of our problem was clearly defined in the last section, and all relevant factors were converted to a single measure (in US dollars). The remaining difficulty is how to find an optimal or quasi-optimal solution.

Because the objective function is evaluated through our simulation model, its analytical form is difficult to obtain. In such a situation quasi-optimal solutions are welcome, and approximation algorithms become candidates.

In this problem an ordinary local-search algorithm may fall into local optima. However, higher-level techniques can be used, such as Simulated Annealing or what we have used - the Genetic Algorithm. Specifically, the traffic controls at the different junctions are used as genes, and the configuration of a traffic circle is a vector of genes containing all the devices used at the different junctions. In detail:
| Process   | Explanation                                                                      |
| --------- | -------------------------------------------------------------------------------- |
| Breeding  | Combine the traffic control methods of two different configurations.              |
| Mutation  | Randomly mutate the traffic control at a single junction.                         |
| Evolution | Locally adjust the traffic controls at all junctions, seeking a better solution.  |
Table 4: The operations of the Genetic Algorithm used for optimization

In this article we mainly deal with three kinds of traffic control devices: 1) traffic lights, 2) yield/stop signs and 3) orientation signs - a special traffic signal of our own design. We call the first two basic devices.

# Step 1 – Basic Device Placement & Timestamp Chosen

A traffic junction can be equipped with any one of the following five devices: 1) Traffic Light, 2) Yield Sign in the circle, 3) Yield Sign at the entrance, 4) Stop Sign in the circle, and 5) Stop Sign at the entrance. Besides, the timestamps of the red/green phases of the traffic lights are also adjustable.

# Sheriffhall Roundabout

Considering all the potential variables above, we run our program on the Sheriffhall Roundabout. Again, we use the origin-destination flow data in Table 1 and assume this flow matrix is fixed over a one-hour period. The solution of our program shows that traffic lights should be preferred to stop/yield signs, because the accident rate would otherwise increase dramatically in this busy circle.

![](images/de7b2e0e8240485efe380de0683399685ff85c7c54f15e34a16a87a787f2ada8.jpg)
Figure 7: The traffic light timestamps at the 6 junctions (light green and scarlet). Period = 68 s (calculated in the assumptions); the original flow information is used.

In Figure 7, green represents permission for vehicles from the incoming road, and red means that the in-circle traffic passes instead. The optimal configuration creates a long period of red light at all junctions, allowing the vehicles inside to be digested quickly during this interval. This configuration accelerates the flows, but has a lower saturated flow capacity.
| Objective               | Value                                       |
| ----------------------- | ------------------------------------------- |
| Saturated Flow Capacity | 6904 vehicles / hour                        |
| Average Delay           | 42.763 seconds / vehicle (= $62.04 / hour)  |
| Equity Degree           | 0.3187                                      |
| Accident Expectation    | $4.63 / hour                                |
| Device Cost             | $1.38 / hour                                |
| Combined Expense        | $78.98 / hour                               |
Table 5: The multi-objectives of the optimal configuration of Sheriffhall, original flow

Sheriffhall Roundabout with $1.8 \times$ original inflow

When the incoming flow density increases to 1.8 times the original, the optimal configuration shows significant differences:

Figure 8: The traffic light timestamps at the 6 junctions (light green and scarlet)
![](images/51a41563b3ba0838d22dbf1b041e607ca9c20878c1a26c6193dd007485e19d13.jpg)
Period $= 68s$ (calculated in the assumptions); the original flow $\times$ 1.8 is used.

In Figure 8, the green-light periods at all junctions are shortened in order to let the circle digest a greater number of incoming vehicles. Interestingly, there no longer exists a long period with all junctions in red. Instead, there is a free pass between Junction 3 and Junction 6 (the shaded stripe). This greatly enlarges the saturated flow capacity, but reduces the passing speed, giving a high average delay (see Table 6).

To see why, one needs to look at the origin-destination flows in Table 1, where the flow between Junctions 3 and 6 constitutes a significant portion of all the inflows. The stripe in Figure 8 gives exactly these vehicles a good opportunity to travel between the two junctions.
| Objective | Value |
| --- | --- |
| Saturated Flow Capacity | 8354 vehicles / hour |
| Average Delay | 81.278 seconds / vehicle · hour = 117.91 $/hour |
| Equity Degree | 0.3042 |
| Accident Expectation | 5.41 $/hour |
| Device Cost | 1.38 $/hour |
![](images/3fc9ab0fb50ff900b633fad0f51f1249311c1345109e0bdda322ecbfef70e7e7.jpg)
Table 6: The multi-objectives of the optimal configuration of Sheriffhall, original flow $\times$ 1.8

# Step 2 – Orientation Sign Placement

Normally, the number of lanes inside a traffic circle and the number of junctions are not equal. In some countries an informal rule [5] applies: the vehicle nearer to its exit should stay left (note: we assume driving on the left). We now refine this rule.

There are $n$ arms in total. Suppose a vehicle is at Junction $a$ $(1 \leq a \leq n)$ and its destination is $b$ $(1 \leq b < n)$ junctions ahead. We maintain two variables $\text{lower}_a^b$ and $\text{upper}_a^b$ such that the vehicle is advised to stay in a lane $x$ with $\text{lower}_a^b \leq x \leq \text{upper}_a^b$. Our aim is to distribute vehicles evenly across the lanes, so that traffic jams are reduced to some extent. To optimize these intervals $[\text{lower}_a^b, \text{upper}_a^b]$, we run the Genetic Algorithm again.

![](images/ba175f70d3854e61b0ae4d883ff25627fb5fb23bb6978a82fb763a8b74fa5461.jpg)
Figure 9: The effect of the Orientation Sign

Figure 9 shows how the Orientation Sign reduces the average delay at various inflow levels. As the number of incoming vehicles increases, the positive effect of the sign becomes more distinct. In terms of Saturated Flow Capacity, the configuration without the Orientation Sign achieves 8354 vehicles/hour (Table 6); with the newly introduced sign this rises to 8812. In short, the sign extracts the last bit of potential capacity in our model.
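Before any optimization, the refined stay-left rule can be given a simple even-spread baseline. The function below is a hypothetical illustration of one way to pick the interval $[\text{lower}_a^b, \text{upper}_a^b]$; the paper tunes these intervals with a genetic algorithm, which this sketch does not reproduce.

```python
import math

def lane_interval(b, n_arms, n_lanes):
    """Suggested lane range (1 = leftmost lane, nearest the exits) for a
    vehicle whose exit is b junctions ahead, 1 <= b <= n_arms - 1.

    The possible exit distances are partitioned evenly over the lanes, so
    vehicles leaving soon stay left and vehicles travelling far stay right;
    this is only a baseline, not the GA-optimized assignment."""
    assert 1 <= b < n_arms
    lower = (b - 1) * n_lanes // (n_arms - 1) + 1
    upper = math.ceil(b * n_lanes / (n_arms - 1))
    return lower, upper
```

For example, with 5 arms and 2 lanes this baseline sends vehicles exiting 1–2 junctions ahead to lane 1 and the rest to lane 2.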
![](images/c0626a020a409ffcaf88618c0c40af56b34cb669a1124845ac6bab5d048d2a4f.jpg)
Figure 10: The orientation sign to hang over the junction entrance (at Junction 3, with $1.8 \times$ original inflow)

![](images/c946df68538ec8a82c006bb51475174f396b3b29d032d2bd9383094ac8c4d5b4.jpg)

![](images/e7336c2bbfb8c5276502675194c0404a2f37c4c809436166ed31f408fff97793.jpg)

# Step 3 – Time Variance & Self Adaptivity

Naturally, the origin-destination flows vary from morning to evening. The easiest way to handle this is to rerun the previous steps with different traffic demand data for each time period. We can go further, however, and make the traffic control self-adaptive when traffic lights are in use.

Take the traffic light timestamps calculated in Step 1, and suppose that in a following hour the traffic demand changes to some new value. We select the original configuration as our seed and run the Genetic Algorithm again to obtain a similar but better solution. As an example:

![](images/d08da4cb814012f9f1889984ae87ed4f560c3c7130767fd578235c45c2b3ee80.jpg)
Figure 11: Self-adaptivity as the inflow drops from 1.8 to $1.4 \times$ the original inflow within an hour

The timestamps change only a little, and hence the vehicles already in the circle are not significantly affected. As night falls, traffic demand becomes quite small, and the traffic lights should then give way to Yield/Stop signs. This cannot be done by physically swapping devices; however, a special traffic-light state can be switched on: flashing yellow.

Flashing yellow is an international light signal [14] with the same effect as a Yield sign: it reminds drivers to be careful. It can be used when traffic is not too busy, and it greatly reduces the average delay.
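The reseeding idea can be sketched in a few lines. The helper below is our own illustration, with `mutate` standing for whatever mutation operator the genetic algorithm already uses; the function name and signature are assumptions, not the paper's code.

```python
def reseed_population(prev_best, pop_size, mutate, keep=1):
    """Build a new GA population around the previous hour's best
    timestamp configuration: `keep` exact copies plus mutated variants.

    Starting the search near the running configuration keeps the
    re-optimized timestamps close to what drivers already see, so the
    adaptation barely disturbs vehicles already in the circle."""
    population = [list(prev_best) for _ in range(keep)]
    while len(population) < pop_size:
        population.append(mutate(list(prev_best)))
    return population
```

The GA then runs as before, but converges quickly because the seed is already near-optimal for the old demand.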
# Verification of the Optimization Model

The Circle Still Works

![](images/3dca0c7058814b68964f8b74fb607bff7f3e1146a5c8bd2edabcfd927d8a4ec9.jpg)
Figure 12: 39 seconds later, most of the vehicles waiting at Junction 6 have moved in

Figure 12 shows that even when the inflow is 1.8 times as high as given in Table 1, the traffic circle still works.

# Accuracy

As a follow-up study to verify the Optimization Model, we need to test it on different traffic circles. For lack of precise data (most of it is available only for a fee), we created dummy traffic circles. A large traffic circle with 12 arms and 6 lanes was tested, and the result shows that our model can handle large inputs in reasonable time.

Interestingly, when testing a dummy suburban circle with 4 arms and relatively low traffic demand, we encountered 1) mixtures of stop signs and traffic lights and 2) in-circle signs. In this example the origin-destination flow between Junction 1 and Junction 3 is markedly greater than all other pairwise flows.

Two of the quasi-optimal solutions are:

Figure 13: Two intuitive configurations generated by our model.
![](images/d103ab23397e4ea04f310c8bccca6f73acf3ca404fa469819e7340b793f0ca79.jpg)
The left one has two in-circle stop signs and guarantees the fast left-right pass;
the right one has a mixture of traffic lights and stop signs.

# Sensitivity

To test the sensitivity of our model, we ran it 50 times and calculated the standard deviation of the average delay as an example:
| Multiple of incoming flow | Average Delay | Standard Deviation |
| --- | --- | --- |
| 1.0 times | 42.76 seconds / vehicle · hour | 0.95 seconds / vehicle · hour |
| 1.2 times | 47.22 seconds / vehicle · hour | 1.56 seconds / vehicle · hour |
| 1.4 times | 51.99 seconds / vehicle · hour | 2.54 seconds / vehicle · hour |
| 1.6 times | 61.54 seconds / vehicle · hour | 3.81 seconds / vehicle · hour |
| 1.8 times | 81.28 seconds / vehicle · hour | 8.30 seconds / vehicle · hour |
Table 7: The sensitivity test of the Optimization Model

# Emergency Case

Our model can also simulate emergencies. As in Figure 14, one of the cars breaks down and blocks a whole lane. The configuration stands the test: the traffic circle still works, although the average delay increases by 10 seconds.

The self-adaptivity of our model lets us adjust the light timestamps to reduce the resulting jam in an emergency. Because of limited time, however, we cannot present that adaptation here.

![](images/8cafdd87e3482252f12a3d44bdb8bd7ad924312647444c8d643a6d13f610f55e.jpg)
Figure 14: A broken-down vehicle slows the traffic, but the circle still works

# Technical Summary

We present an arrangement of traffic control devices tailored to the specific traffic circle and its flow. Before applying this method, we remove incidental detail and construct a simplified map with the key factors:

- The geometric design, including the arms and junctions around the circle
- The number of lanes in each road

Detailed information about the traffic flow is welcome. Basically we need the following:

- The incoming and outgoing flow of each arm, or
- The origin-destination flows between any two arms, if available.

Using the data above, we construct an optimal choice. We offer two simulation methods to choose from: the macro one copes with insufficient data, while the micro one, given enough information, provides a more accurate view. A combined objective function expresses our goal. Typical results are discussed below.

In the simplest circumstance, with few vehicles passing through the circle, yield signs are always preferred, as expected: they eliminate the unnecessary waits caused by stop signs and red lights.
When faced with a metropolitan traffic circle with many arms and lanes filled with heavy traffic, our model suggests that traffic lights control the system both safely and effectively. The green light period is determined by the traffic flow distribution. Usually the traffic lights are adjusted so that vehicles from the most crowded arms meet green lights consecutively as they drive around the circle.

Another noteworthy point is the imbalance phenomenon likely to occur in suburban areas, where the traffic flows on arms in two perpendicular directions differ greatly. Our model advises placing a stop sign at the junction of one low-volume arm, with traffic lights at the others. Forcibly stopping that incoming flow avoids congestion inside the circle, especially when the traffic circle is relatively small.

If the traffic flow changes notably with time, traffic lights are again recommended for their self-adaptive ability. The traffic light parameters do not depend strongly on the flow variation, and the lights may also activate their special flashing state to deal with the flux drop at night.

For any condition, whether specified above or not, our model points out the potential of interposing orientation signs. This special measure markedly reduces the delay time when the traffic shows evidence of congestion.

Overall, our model is useful for determining a decent solution, one expected to be significantly more comprehensive regardless of the traffic circle's condition.

# Conclusion

To estimate the overall performance of a traffic circle under a specific vehicle flow, we developed two simulation models. The first uses a Markov process to view the flow as a whole; the second, in contrast, turns its attention to the individual behavior of each vehicle.

Five objectives were chosen to evaluate a control method; they are finally converted into the combined expense.
This standard is applied to real-life settings with typical traffic control device setups.

Combining the two preceding parts gives the optimization model. This model solves the device-selection problem and provides ways to determine the green light period when traffic lights are used. In addition, orientation signs are introduced as an entirely new measure to improve efficiency. The flexibility of these solutions is demonstrated when they are confronted with accidents.

# References

[1] Markus Hubacher and Roland Allenbach. Safety-related aspects of traffic lights. bfu-Report 48, 2002.
[2] Businesses want Sheriffhall flyover. Edinburgh Evening News, February 9, 2009.
[3] N.J. Garber and L.A. Hoel. Traffic and Highway Engineering, 3rd edition. Brooks/Cole, Pacific Grove, CA, 2002.
[4] Mike Maher. The optimization of signal settings on a signalized roundabout using the cross-entropy method. Computer-Aided Civil and Infrastructure Engineering 23 (2008) 76–85.
[5] How to Correctly Use a Traffic Circle. Accessed February 9, 2009.
[6] London Road Safety Unit. Do traffic signals at roundabouts save lives? Transport for London Street Management, April 2005.
[7] Kay Fitzpatrick. Accident Mitigation Guide for Congested Rural Two-lane Highways. National Research Council (U.S.), Transportation Research Board, 2000.
[8] Accessed February 9, 2009.
[9] Accessed February 9, 2009.
[10] Accessed February 9, 2009.
[11] Accessed February 9, 2009. <http://www.shjubao.cn/epublish/gb/paper148/20010815/class014800004/hwz462812.htm>
[12] Accessed February 9, 2009.
[13] Status of the Nation's Highways, Bridges, and Transit: 2004 Conditions and Performance. Accessed February 9, 2009. <http://www.fhwa.dot.gov/policy/2004cpr/chap14.htm>
[14] Unusual uses of traffic lights. Accessed February 9, 2009.
diff --git a/MCM/2009/A/4806/4806.md b/MCM/2009/A/4806/4806.md
new file mode 100644
index 0000000000000000000000000000000000000000..13b0cdf58d5b810a738166145b8b21707e5af18c
--- /dev/null
+++ b/MCM/2009/A/4806/4806.md
@@ -0,0 +1,597 @@

Team Control Number

![](images/3a64f2070091909f31c26449bdcd3866d42166157b90986c661215ab1efad584.jpg)

Problem Chosen

A

# 2009 Mathematical Contest in Modeling (MCM) Summary Sheet

# Pseudo-Finite Jackson Networks and Simulation: A Roundabout Approach To Traffic Control

Roundabouts, a foreign concept a generation ago, are an increasingly common sight in the United States. In principle, they are simple and effective methods of traffic control that reduce accidents and delays. A natural question for today's traffic engineer is "What is the best method to control traffic flow within a roundabout?" Using mathematics, it is possible to distill the essential features of traffic entering and exiting a roundabout into a system which can be analyzed, manipulated, and optimized for a wide variety of situations. As the metric of effective flow control, we choose time spent in the system.

We use the concept of Jackson networks to create an analytic model. A roundabout can be thought of as a network of queues, where the entry queues receive external arrivals, which move into the roundabout queue before exiting the system. To form this model, we must assume that an equilibrium state may exist and that arrival rates are constant. The Jackson network is useful because, if certain conditions are met, a closed form stationary distribution may be found.
Furthermore, the parameters for this system may be obtained empirically: how often cars arrive at an entrance (external arrival rate), how quickly they may enter the roundabout (internal arrival rate), and how quickly they exit (departure rate). We account for the traffic control method by thinning the internal arrival process with a "signal" parameter that represents the fraction of time that a signal light is green. + +One pitfall of this formulation is that restricting the capacity of the roundabout queue to a finite limit will destroy the useful analytic properties of the system. We utilize a "pseudo-finite" capacity formulation, where we allow the roundabout queue to receive a theoretically infinite number of cars, but optimize over the signal parameter to create a steady state in which a minimal number of waiting cars is overwhelmingly likely. Using lower bound calculations, we prove that the yield sign produces the optimal behavior for all sets of allowed parameters. The analytic solution, however, sacrifices important aspects of a real roundabout, such as time-dependent flow. + +To test the theoretical conclusions, we develop a computer simulation which incorporates more parameters: roundabout radius, car length, car spacing, car velocity inside the roundabout, periodicity of traffic signaling, and time-dependent input flow rates. The simulation uses these parameters to stochastically model individual vehicles as they move through the system, resulting in more realistic output. In addition to comparing yield and traffic signal control, we also examined variable distributions of input rates, non-standard roundabout constructions, and the relationship between traffic flow volume, radius size, and average total time. Our simulation is, however, limited to a single-lane roundabout. This model is also compromised by the very stochasticity that enhances its realism. 
Since it is non-deterministic, there is a good deal of randomness which may mask the true behavior. Another drawback of the stochastic formulation is that, as we show, the computational cost of mathematical minimization is enormous. We use our background research to set up the model as an empirical experiment to verify the hypothesis that a yield sign is almost always the best form of flow control.

# PSEUDO-FINITE JACKSON NETWORKS AND SIMULATION: A ROUNDABOUT APPROACH TO TRAFFIC CONTROL

Team # 4806

February 9, 2009

# 1 Introduction

A report from the Wisconsin Department of Transportation noted that "to many, the idea of replacing four way signaling with a roundabout seems like replacing hot dogs with crepes at the ballpark" [3]. For many Americans, the roundabout is a foreign idea, even though the first unidirectional circular traffic installation was actually built in New York in 1903. Roundabouts fell out of favor in the U.S. by mid-century, but as recent studies have shown how much safer and more efficient they can be, there has been a resurgence in roundabout construction [7]. Half the states in the U.S. now have roundabouts, totaling over 1000 installations. In many types of intersections, roundabouts improve traffic flow, decrease the occurrence of accidents, and reduce the amount of gasoline wasted during idling [6]. In fact, a U.S. study indicated that, on average, fatal crashes decreased $90\%$ after traditional traffic lights were replaced by roundabouts [5].

A crucial aspect of efficiency and safety is the method of entry. A modern roundabout (Figure 1) is distinguished by two key characteristics: incoming traffic yields to traffic within the circle, and incoming traffic changes direction to some extent. Initially, roundabout entry rules were inconsistent and varied from place to place. Until the 1920s, "yield-to-right" regulations gave the right-of-way to incoming cars rather than to those within the circle.
This tended to cause "locking" and delays in traffic circles at high traffic volumes. British studies indicated that adopting "priority-to-the-circle" rules allowed more cars to move through the circle more quickly and diminished accident rates. The deflection of entering traffic serves to prevent excessive speed within the roundabout and to further reduce incidence of accidents [7]. + +![](images/b5e63931fa0e5792bf34c234f55306c4290bebafe6947229d4d7ba28381917e2.jpg) +Figure 1: Roundabout and geometric parameters [8] + +Within the framework of the "priority-to-the-circle" rule, roundabout entry may be governed in different ways. The simplest and most common method is a "Yield" sign placed at each entry point to indicate that incoming cars must relinquish the right-of-way to cars already in the circle. The U.S. Department of Transportation advises that roundabouts "should never be planned for metering or signalization" [8]. Nonetheless, although roundabouts are in principle very simple systems, they are often besieged with complications. These intricacies include geometric irregularities, discrepancies between inflow volume at different entry points, and temporal traffic flow fluctuation. + +In order to evaluate the effectiveness of different methods of input control we develop a mathematical model for traffic flow within roundabouts. We first introduce the assumptions utilized in determining the key parameter inputs and developing a metric for "effectiveness." We subsequently formulate and solve a simple analytic model of networked queues in an equilibrium state. After discussing the limitations of the analytic model, we adapt this model into a computer simulation which allows greatly enhanced flexibility and complexity. This computer simulation allows for detailed analysis and may be used by traffic engineers to optimize the flow-control method. 
# 1.1 Assumptions

The following assumptions were used in both the analytic and computer models:

Exponential Arrivals/Departures: This model is based on arrivals and departures which follow a Poisson process with exponentially distributed interarrival times. The Poisson model is advantageous because, not only is it mathematically convenient, it is also widely accepted as a realistic model for many situations involving random arrivals [4]. Rates for the Poisson process may be empirically obtained by taking the average number of arrivals or departures in a certain period of time.

Local Variable Selection: In our model, we use only variables which contribute significantly to traffic flow patterns on a regular basis. External forces such as weather, special events, or acts of God may alter the system; however, we do not address these factors.

Unbounded Output: When embedded in actual road systems, blockages in nearby areas may affect the output rate from the roundabout under study, causing output from one or more exits to slow down or cease. Although this phenomenon significantly alters flow in some locations, regulation of input rates in one roundabout would be unable to correct for blockages in another part of the system. Hence, this factor unnecessarily increases the complexity of the model, and we assume that cars are always able to leave the roundabout once they reach their exit.

Yield and Stop Sign Equivalence: In our model, we will only examine the effect of a yield sign versus a traffic light, because a stop sign will perform only as well as, or worse than, a yield sign in terms of efficiency. Therefore, we assume that if efficiency is the only consideration, a yield sign is preferable to a stop sign. Stop signs may, however, be appropriate to facilitate a safer roundabout (such as in situations where there is high pedestrian traffic).
+ +# 1.2 Defining Effectiveness + +Both personal experience and literature searches led us to define "effectiveness" in the following way: an effective roundabout is one through which traffic may flow freely and efficiently without delay. The most effective roundabout design minimizes delay. Each model will quantify delay in a different manner. + +# 2 Analytic Formulation + +Our analytic model consists of a Jackson network of $\mathrm{M} / \mathrm{M} / s$ queues (indicating queues with Markovian arrivals, Markovian departures, and $s$ servers). A diagram of the queue is presented in Figure 2. In this model (which we infer to have a steady-state and no explicit time dependence), effectiveness is quantified by the probability that there will be few cars waiting within the system. This may be calculated once we find a stationary distribution for the Jackson network. For the most effective roundabout, the most likely stationary states will be those with the fewest total cars, implying that delay is minimized. + +# 2.1 Additional Assumptions + +In order to formulate a tractable analytic model, the following assumptions were also made: + +Constant Arrival Rates: Although there is a very low likelihood that any real traffic system would be time-independent, utilizing constant arrival rates allowed us to set up a system in which it was possible to analytically derive a stationary distribution and understand the asymptotic behavior. It is valuable to have a detailed understanding of how the system behaves in equilibrium, even though the system may never stay in equilibrium for a very long period of time. The equilibrium behavior acts as a snapshot of a particular moment in time, and serves as a basis to build up a more complex and realistic simulation. + +Perfect Driver Behavior: In the analytic model, we consider only the asymptotic behavior of a probabilistic system. We do not allow for people to miss their exits or break the rules imposed by the system. 
In reality, there will be people who swerve wildly while talking on cell phones, scrape their tires on the curb, mow down pedestrians, or cut in front of other drivers. These behaviors lead to accidents and major traffic delays. However, they are infrequent enough that we assume them to have little bearing on how the system looks when averaged over a long period of time.

# 2.2 Description of Simple Queuing Network

The basic idea behind our Jackson network is to break up the system into compartments modeled by queues. We assume that the roundabout is located at the intersection of $N$ streets, which yields a network of $N + 1$ queues. Each street contributes an input stream of cars, as well as an opportunity to leave the roundabout. Each input stream of cars is modeled as an M/M/1 queue with its own unique external arrival rate, $\lambda_{i}$ . All input queues are assumed to release cars at rate $\sigma_{i}$ , which represents the rate at which cars transition from the incoming street into the roundabout. The presence of traffic lights or yield signs is represented by a thinning parameter, $g$ , which represents the percentage of time when a traffic light located at the intersection is green. Setting $g = 1$ corresponds to a yield sign. Thus, cars enter the roundabout at a thinned rate $g\sigma_{i}$ . They "return" to the same queue (i.e. remain there) with probability $(1 - g)$ . The queue representing the roundabout itself is an M/M/N queue, where the $N$ servers represent the $N$ exits.
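The thinning step can be illustrated empirically: discarding each event of a Poisson stream independently with probability $1-g$ leaves a Poisson stream at $g$ times the original rate. The simulation below is our own illustration; the rate, horizon, and trial count are arbitrary example values.

```python
import random

def thinned_poisson_counts(rate, g, horizon, trials, seed=0):
    """Average number of events kept per run when each event of a
    Poisson(rate) process on [0, horizon] is kept with probability g.

    Empirically this average approaches g * rate * horizon, i.e. the
    thinned stream behaves like a Poisson process with rate g * rate."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 0.0
        while True:
            t += rng.expovariate(rate)   # exponential interarrival times
            if t > horizon:
                break
            if rng.random() < g:         # "green" with probability g
                total += 1
    return total / trials

mean_kept = thinned_poisson_counts(rate=2.0, g=0.5, horizon=10.0, trials=2000)
# should be close to g * rate * horizon = 10
```

This is why a light that is green a fraction $g$ of the time can be folded into the queueing model as a simple rate multiplier.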
Then we may choose $g$ such that the system spends a larger fraction of time in a state where the total number of cars within the system, $n_1 + \ldots + n_{N+1}$ , is low. The most intuitive way to do this is to have a queue representing each input street, and a limited-capacity queue representing the roundabout. The input queues would only put cars into the system if there is space in the roundabout queue. However, finite-capacity queuing networks do not generally yield closed form solutions for stationary distributions [1]. + +Therefore, to ensure an analytic solution, we allow every car which leaves an input queue to "enter" the roundabout queue; i.e., this queue has infinite capacity. Although this does not mirror what is physically happening, it allows us to construct a stationary distribution. This stationary distribution will be able to give us the same information: the probability that a certain total number of cars is "stuck" within the system. In the original model, cars wait in the + +street they are coming in on; in our model, they wait inside of the (infinitely large) roundabout. We are not actually concerned with where they wait but rather how many wait, and how likely it is that many cars will be waiting. + +In the case where the roundabout is not full, the finite- and infinite-capacity roundabout queue cases are clearly equivalent because an incoming car can always enter the system. Now suppose the roundabout is full to capacity, and observe that: + +(i) If a car waits in the street outside the roundabout, three events must occur before it exits the system: another car inside the circle must leave the roundabout (with some departure rate $\mu$ ), the car in question must enter the roundabout (with rate $\sigma$ ) and exit with rate $\mu$ . +(ii) If a car "waits" in the roundabout, and then exits, three events will also have occurred: the car will have entered the roundabout with rate $\sigma$ , and joined the interior queue. 
Since there are only $N$ servers, another car must exit the queue before the car in question is served; each of these processes happens with rate $\mu$ . Thus, either treatment is simply the superposition of three Poisson processes, and order is unimportant. Thus the queue exhibits "pseudo-finite capacity" as it mimics the qualitative behavior of a network where one queue size is bounded and the others are infinite. + +# 2.3 Formulation of Stationary Distribution + +For an equilibrium state, the input process for each queue in the network equals the output process [2]. This motivates us to define $r_i$ , the asymptotic departure rate from queue $i$ . Each $r_i$ will be equivalent to the sum of the arrival rates to queue $i$ . Defining $p(i,j)$ as the probability that a car leaving queue $i$ enters queue $j$ , we can write an expression for the asymptotic departure rates: + +$$ +r _ {j} = \lambda_ {j} + \sum_ {i = 1} ^ {N + 1} r _ {i} p (i, j) +$$ + +Or, in matrix form, we have: + +$$ +\mathbf {r} = \boldsymbol {\Lambda} + \mathbf {r p} +$$ + +Where $\mathbf{r}$ is a row vector of departure rates, $\Lambda$ is a row vector of arrival rates, and $\mathbf{p}$ is simply the matrix with elements $p(i,j)$ as defined above. Define two conditions on the system: + +(A) There exists, for each queue $i$ , a path of positive probability along which it is possible to exit the system. +(B) Defining $\varphi_{i}(n)$ as the departure rate from queue $i$ when that queue contains $n$ people, and letting + +![](images/9d57be4d51765fe2d2d32f28268320143fb726983215664edfcc82bb659b1d68.jpg) +Figure 2: Visual Schematic of Queuing Network + +Summary of Analytic Model Parameters + +
| Parameter | Meaning |
| --- | --- |
| $N$ | Number of streets which connect to the roundabout |
| $\lambda_i$ | External arrival rate of cars to entrance $i$ |
| $\sigma_i$ | Rate at which cars may enter the roundabout from entrance $i$ |
| $\mu$ | Rate at which cars may exit the roundabout |
| $\pi(n_1,\ldots,n_{N+1})$ | Stationary distribution for the network |
+ +$$ +\psi_ {i} (n) = \left\{ \begin{array}{l l} \prod_ {m = 1} ^ {n} \varphi_ {i} (m) & n \geq 1 \\ 1 & n = 0 \end{array} \right. +$$ + +There exists some positive constant $c_{j}$ such that + +$$ +\sum_ {n = 0} ^ {\infty} \frac {c _ {j} r _ {j} ^ {n}}{\psi_ {j} (n)} < \infty +$$ + +It can be shown that, if condition (A) is met, then the matrix $(\mathbf{I} - \mathbf{p})$ is invertible. If condition (B) is also met, then a stationary distribution $\pi$ exists [2] with the form: + +$$ +\pi (n _ {1}, \dots , n _ {N + 1}) = \prod_ {j = 1} ^ {N + 1} \frac {c _ {j} r _ {j} ^ {n _ {j}}}{\psi_ {j} (n _ {j})} +$$ + +For our system, we define the vector $\Lambda$ as: + +$$ +\left(\lambda_ {1} \dots \lambda_ {N} 0\right) +$$ + +Each $\lambda_{i}$ corresponds to the external arrival rate for queue $i$ ; the $N + 1$ queue has a zero external arrival rate because this is the queue corresponding to the roundabout. The $(N + 1)\times (N + 1)$ $\mathbf{p}$ matrix has the form: + +$$ +\left( \begin{array}{c c c c} 1 - g & 0 & \ldots & g \\ 0 & \ddots & \ldots & \vdots \\ 0 & \ddots & 1 - g & g \\ 0 & 0 & \ldots & 0 \end{array} \right) +$$ + +From any location in this particular network, there is a nonzero probability of exiting the system. 
Thus, Condition (A) is satisfied, the matrix $\mathbf{I} - \mathbf{p}$ is invertible, and we may write the vector of asymptotic release rates as: + +$$ +\mathbf {r} = \boldsymbol {\Lambda} (\mathbf {I} - \mathbf {p}) ^ {- 1} +$$ + +The simplicity of our system allows us to directly solve for the inverse matrix, $(\mathbf{I} - \mathbf{p})^{-1}$ , via Gauss-Jordan elimination: + +$$ +\left( \begin{array}{c c c c} \frac {1}{g} & 0 & \ldots & 1 \\ 0 & \ddots & \ldots & \vdots \\ 0 & \ddots & \frac {1}{g} & 1 \\ 0 & 0 & \ldots & 1 \end{array} \right) +$$ + +Thus, the asymptotic departure rates are found to have the following form: + +$$ +\mathbf {r _ {j}} = \left\{ \begin{array}{l l} \frac {\lambda_ {j}}{g} & 1 \leq j \leq N \\ \sum_ {i = 1} ^ {N} \lambda_ {i} & j = N + 1 \end{array} \right. +$$ + +Now, we formulate the parameters necessary to solve for a stationary state. We observe that for the entry queues: + +$$ +\varphi_ {j} (n) = \sigma_ {j} \quad 1 \leq j \leq N +$$ + +and that for the roundabout queue: + +$$ +\varphi_ {N + 1} (n) = \left\{ \begin{array}{l l} n \mu & 1 \leq n \leq N \\ N \mu & n > N \end{array} \right. +$$ + +Also, we formulate for the entry queue: + +$$ +\psi_ {j} (n) = \sigma_ {j} ^ {n} \quad 1 \leq j \leq N +$$ + +and for the roundabout queue: + +$$ +\psi_ {N + 1} (n) = \left\{ \begin{array}{l l} (n!) (\mu) ^ {n} & 1 \leq n \leq N \\ (N!) (N \mu) ^ {n} & n > N \end{array} \right. +$$ + +Now, we investigate under what conditions the Condition (B) is met. 
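The closed-form departure rates just derived can be sanity-checked against the traffic equations $r_j = \lambda_j + \sum_i r_i\, p(i,j)$. The pure-Python check below is our own illustration (not part of the paper), using 0-based indices: queues $0,\ldots,N-1$ are the entries and queue $N$ is the roundabout.

```python
def check_departure_rates(lams, g, tol=1e-9):
    """Verify the closed form r_j = lambda_j / g (entry queues) and
    r_N = sum(lams) (roundabout queue) against the traffic equations
    r_j = lambda_j + sum_i r_i * p(i, j)."""
    N = len(lams)
    r = [lam / g for lam in lams] + [sum(lams)]     # claimed closed form

    def p(i, j):
        # entry queue i: back to itself w.p. 1-g, into the roundabout w.p. g;
        # the roundabout queue routes nowhere (cars leave the system)
        if i < N and j == i:
            return 1.0 - g
        if i < N and j == N:
            return g
        return 0.0

    lam_ext = list(lams) + [0.0]   # roundabout has no external arrivals
    return all(
        abs(lam_ext[j] + sum(r[i] * p(i, j) for i in range(N + 1)) - r[j]) <= tol
        for j in range(N + 1)
    )
```

For an entry queue the balance is $\lambda_j + (1-g)\lambda_j/g = \lambda_j/g$, and for the roundabout it is $g\sum_i \lambda_i/g = \sum_i \lambda_i$, so the check passes for any $0 < g \leq 1$.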
For the entry queues $(1 \leq j \leq N)$ , we need to find a positive constant $c_{j}$ such that

$$
\sum_ {n = 0} ^ {\infty} c _ {j} \left(\frac {\lambda_ {j}}{g \sigma_ {j}}\right) ^ {n} < \infty
$$

We may only choose a non-zero $c_{j}$ if this geometric series converges, which occurs when

$$
\left(\frac {\lambda_ {j}}{g \sigma_ {j}}\right) < 1
$$

For the roundabout queue, we examine the convergence of

$$
c _ {j} \left(\sum_ {n = 0} ^ {N} \left(\frac {r _ {N + 1}}{\mu}\right) ^ {n} \frac {1}{n !} + \sum_ {n = N + 1} ^ {\infty} \frac {1}{N !} \left(\frac {r _ {N + 1}}{N \mu}\right) ^ {n}\right)
$$

The first term of the sum is finite for fixed $N$ and does not affect convergence. The second sum can be re-written as:

$$
\sum_ {n = 0} ^ {\infty} \frac {1}{N !} \left(\frac {r _ {N + 1}}{N \mu}\right) ^ {n} - \sum_ {n = 0} ^ {N} \frac {1}{N !} \left(\frac {r _ {N + 1}}{N \mu}\right) ^ {n}
$$

The second term of this difference is also finite for fixed $N$ , so we are only concerned with the first series. This geometric series converges if

$$
\left(\frac {r _ {N + 1}}{N \mu}\right) < 1.
$$

Thus, we state the two conditions necessary for the existence of equilibrium and a stationary distribution in our queuing network:

(i) $\lambda_{j} < g\sigma_{j}$ for all $1 \leq j \leq N$
(ii) $\sum_{i = 1}^{N}\lambda_{i} < N\mu$

If these conditions are met, we may solve for the stationary distribution.
First, choose the constant $c_{j}$ such that

$$
\sum_ {n = 0} ^ {\infty} \frac {c _ {j} r _ {j} ^ {n}}{\psi_ {j} (n)} = 1
$$

Solving for $c_{j}$ , we find:

$$
\begin{array}{l} \frac {1}{c _ {j}} = \frac {1}{1 - \frac {\lambda_ {j}}{g \sigma_ {j}}} \quad 1 \leq j \leq N \\ \frac {1}{c _ {N + 1}} = \sum_ {n = 0} ^ {N} \left(\frac {r _ {N + 1}}{\mu}\right) ^ {n} \frac {1}{n !} + \frac {1}{N !} \left(\frac {1}{1 - \frac {r _ {N + 1}}{N \mu}} - \sum_ {n = 0} ^ {N} \left(\frac {r _ {N + 1}}{N \mu}\right) ^ {n}\right) \\ \end{array}
$$

Now, we formulate the closed form of the stationary distribution [2]:

$$
\pi \left(n _ {1}, \dots , n _ {N + 1}\right) = \left(1 - \frac {\lambda_ {1}}{g \sigma_ {1}}\right) \left(\frac {\lambda_ {1}}{g \sigma_ {1}}\right) ^ {n _ {1}} \dots \left(1 - \frac {\lambda_ {N}}{g \sigma_ {N}}\right) \left(\frac {\lambda_ {N}}{g \sigma_ {N}}\right) ^ {n _ {N}} \left(\frac {c _ {N + 1}}{N !}\right) \left(\frac {r _ {N + 1}}{\mu N}\right) ^ {n _ {N + 1}}
$$

# 2.4 Optimization of Stationary State

The parameters $\mu, \lambda_{j}$ , and $\sigma_{j}$ are presumably fixed by the physical location of the roundabout and the number of people who use it. Hence, the stationary state $\pi$ is a function of $g$ , and may be optimized over $g$ . The idea is to maximize the amount of time spent in a state in which the total number of cars in the system is less than or equal to the capacity of the roundabout. Define $\mathcal{K} \equiv \{\text{all } \{n_{i}\}_{i=1}^{N+1} \text{ such that } n_{1} + \ldots + n_{N+1} = k\}$ . Now we can define:

$$
\pi (k) = \sum_ {\{n _ {i} \} \in \mathcal {K}} \pi (n _ {1}, \ldots , n _ {N + 1}).
$$

It would be useful to analyze how $\pi(k)$ depends on $g$ for small $k$ .
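The construction above can be assembled into a short numerical check. This sketch builds $\pi$ from the constant $c_{N+1}$ and the functions $\psi_j$ (using $\psi_{N+1}$ directly for the roundabout marginal) and verifies that the probabilities sum to one over a truncated state space; all parameter values are illustrative:

```python
from math import factorial, isclose

# Illustrative parameters satisfying conditions (i) and (ii):
# two entry streets plus the roundabout queue.
lam, sigma, mu, g = [0.2, 0.3], [0.5, 0.6], 0.4, 1.0
N = len(lam)
r = sum(lam)  # r_{N+1}, the roundabout release rate

def psi_round(n):
    # psi_{N+1} as defined for the roundabout queue
    return factorial(n) * mu ** n if n <= N else factorial(N) * (N * mu) ** n

# 1 / c_{N+1} from the closed form above
c_inv = sum((r / mu) ** n / factorial(n) for n in range(N + 1)) \
      + (1 / factorial(N)) * (1 / (1 - r / (N * mu))
                              - sum((r / (N * mu)) ** n for n in range(N + 1)))

def pi(state):
    # state = (n_1, ..., n_{N+1}); entry queues are geometric, the
    # roundabout marginal is c_{N+1} * r^n / psi_{N+1}(n).
    p = (1 / c_inv) * r ** state[N] / psi_round(state[N])
    for n_i, l, s in zip(state, lam, sigma):
        rho = l / (g * s)
        p *= (1 - rho) * rho ** n_i
    return p

# Probabilities over a generously truncated state space sum to ~1.
total = sum(pi((a, b, c)) for a in range(50) for b in range(50) for c in range(50))
print(total)
```

The normalization check confirms that the per-queue constants $c_j$ were chosen consistently with the convergence conditions.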
Notice that for any given $k$ , the number of terms in the sum is equivalent to the number of non-negative integer solutions to the equation

$$
n _ {1} + \dots + n _ {N + 1} = k
$$

which is given by the well-established formula [9]

$$
\frac {(N + k) !}{N ! k !}.
$$

Clearly, the number of terms in the sum grows exceptionally quickly, and directly examining the $g$ dependence becomes impractical. Instead of direct analysis, we establish a lower bound for $\pi(k)$ in terms of $\pi(0, 0, \ldots, 0)$ , the fraction of time in which no cars remain in the system. For this case, denoted $\pi(0)$ , we have:

$$
\pi (0) = \prod_ {i = 1} ^ {N} \left(1 - \frac {\lambda_ {i}}{g \sigma_ {i}}\right) \left(\frac {c _ {N + 1}}{N !}\right).
$$

Notice that neither $c_{N+1}$ nor $N!$ depends on our choice of $g$ . Therefore, $\pi(0)$ will be maximized over $g$ if the product

$$
\prod_ {i = 1} ^ {N} \left(1 - \frac {\lambda_ {i}}{g \sigma_ {i}}\right)
$$

is maximized over $g$ . The conditions under which this stationary distribution was constructed assert that

$$
\frac {\lambda_ {i}}{g \sigma_ {i}} < 1,
$$

ensuring that all terms of the product are between 0 and 1. Therefore, for a fixed set of $\{\lambda_i / \sigma_i\}$ (the constraints of the system), the optimal choice of $g$ minimizes each $\lambda_i / g\sigma_i$ so as to maximize each quantity $1 - (\lambda_i / g\sigma_i)$ . Hence, the largest choice of $g$ maximizes $\pi(0)$ . Given the constraint $0 < g \leq 1$ , the optimal choice is $g = 1$ .
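The monotonicity argument can be illustrated directly. This sketch evaluates the $g$-dependent product in $\pi(0)$ (dropping the $g$-independent factor $c_{N+1}/N!$) for illustrative rates:

```python
# Illustrative arrival and service rates satisfying lambda_i < g * sigma_i
# over the whole range of g examined below.
lam = [0.2, 0.3, 0.1]
sigma = [0.5, 0.6, 0.4]

def pi0_g_part(g):
    # The product over entry queues of (1 - lambda_i / (g * sigma_i)):
    # the only g-dependent factor in pi(0).
    prod = 1.0
    for l, s in zip(lam, sigma):
        prod *= 1.0 - l / (g * s)
    return prod

gs = [0.9, 0.925, 0.95, 0.975, 1.0]
vals = [pi0_g_part(g) for g in gs]
print(vals[-1] == max(vals))  # True: the product, and hence pi(0), peaks at g = 1
```

Each factor increases with $g$, so the product does as well, matching the conclusion that $g = 1$ is optimal.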
Every other stationary state may be written in terms of $\pi(0)$ :

$$
\pi \left(n _ {1}, \dots , n _ {N + 1}\right) = \pi (0) \left(\frac {c _ {N + 1}}{N !}\right) \left(\frac {\lambda_ {1}}{g \sigma_ {1}}\right) ^ {n _ {1}} \dots \left(\frac {r _ {N + 1}}{N \mu}\right) ^ {n _ {N + 1}}
$$

We establish a lower bound for $\pi (k)$ by defining:

$$
\begin{array}{l} \frac {\epsilon}{g} \equiv \min \left\{\frac {\lambda_ {i}}{g \sigma_ {i}}, \frac {r _ {N + 1}}{N \mu} \right\} \\ C \equiv \frac {c _ {N + 1}}{N !} \\ \end{array}
$$

Now, since each factor in the product is greater than or equal to $\frac{\epsilon}{g}$ , the sum of the powers on these factors is $k$ , and there are $\frac{(N + k)!}{N!k!}$ distinct elements of $\mathcal{K}$ , we assert that:

$$
\pi (k) \geq \frac {(N + k) !}{N ! k !} C \left(\frac {\epsilon}{g}\right) ^ {k} \pi (0)
$$

In the event that

$$
\min \left\{\frac {\lambda_ {i}}{g \sigma_ {i}}, \frac {r _ {N + 1}}{N \mu} \right\} = \frac {r _ {N + 1}}{N \mu}
$$

all $g$ dependence comes from $\pi(0)$ , which is maximized for $g = 1$ . In the event that, for some index $j$ ,

$$
\min \left\{\frac {\lambda_ {i}}{g \sigma_ {i}}, \frac {r _ {N + 1}}{N \mu} \right\} = \frac {\lambda_ {j}}{g \sigma_ {j}}
$$

we first define:

$$
\max \left\{\frac {\lambda_ {i}}{g \sigma_ {i}} \right\} = \frac {\delta}{g}
$$

which allows us to assert

$$
\pi (0) \geq \left(1 - \frac {\delta}{g}\right) ^ {N}
$$

which implies that

$$
\pi (k) \geq \frac {(N + k) !}{N !
k !} C \left(\frac {\epsilon}{g}\right) ^ {k} \left(1 - \frac {\delta}{g}\right) ^ {N}
$$

We turn our attention to the behavior of the function which governs the $g$ dependence of the lower bound of $\pi (k)$ :

$$
f (g) = \left(\frac {\epsilon}{g}\right) ^ {k} \left(1 - \frac {\delta}{g}\right) ^ {N}
$$

We differentiate with respect to $g$ and find that

$$
\frac {\partial f}{\partial g} = \frac {\epsilon^ {k} (g - \delta) ^ {N - 1} \left((N + k) \delta - k g\right)}{g ^ {N + k + 1}}.
$$

Since $\epsilon > 0$ , and $g - \delta > 0$ according to the assumptions with which we set up the system, the sign of $\frac{\partial f}{\partial g}$ is determined by the expression

$$
(N + k) \delta - k g,
$$

which is guaranteed positive for

$$
k < \frac {N \delta}{g - \delta}.
$$

Therefore, for small $k$ , the slope is positive for all $g$ in our domain, implying that increasing $g$ increases the lower bound on $\pi(k)$ . Raising the lower bound suggests, though does not guarantee, that the stationary probabilities themselves are larger. For our analytic model, the value of $g$ which guarantees the largest lower bound on $\pi(k)$ for small $k$ is $g = 1$ , regardless of other parameters. Therefore, our analytic model will always recommend a yield sign.

To examine the actual stationary state behavior, we implemented a computer program which calculates $\pi(k)$ for each fixed value of $k$ , summed over all the stationary states in which the total number of cars in the system equals $k$ . We examined this for a wide range of $\lambda$ , $\sigma$ , and $\mu$ values. In all cases, the stationary distribution for lower $k$ values is highest for $g = 1$ . In Figures 3 and 4, we compare the lower bound behavior and the actual behavior for a 4-entrance roundabout. We examine both the case where all input rates are equal, and the case where they are not. Our lower bound estimate curves and calculated curves have very similar shapes.
Thus, a choice of $g$ which maximizes the area under the lower bound curve for small $k$ also maximizes the area under the actual curve. This validates our use of the lower bound estimate as a basis for the optimal choice of $g$ .

Our analytic formulation always finds the optimal entrance rule to be a yield sign at every intersection. Although this is in part a result of the limitations of the model, such as the lack of time dependence, it is mostly consistent with both the results of our computer simulation and our research into real-world practices. As noted in the introduction, roundabouts are generally neither planned nor implemented with traffic lights. Our computer simulation will attempt to find some of the situations in which traffic lights are more efficient than yield signs.

![](images/3c3a0ecc7e4cf57e70b7ed5ad4151e07a3f73461d492167008914d772f20807a.jpg)
(a) Actual Value

![](images/f628eb42fdd1e6bee2d64039d2e355c84b15b2938ec3c9dbb20f4cfd1f63345d.jpg)
(b) Lower Bound Estimate
Figure 3: Comparison of Actual Stationary Distribution and Lower Bound Estimate for Unequal Input Rates

![](images/ed77a6df04f2a0733fd81045672584bdd2fbd11ebef8d255939e2018c96df70c.jpg)
(a) Actual Value

![](images/af216fb57d2e6cab3ad8f1239966c041e72a0ec93bff90f393700edd454ebd5d.jpg)
(b) Lower Bound Estimate
Figure 4: Comparison of Actual Stationary Distribution and Lower Bound Estimate for Equal Input Rates

# 3 Computer Simulation

Given the weaknesses of the analytic model, we adapted it to create a computer simulation with the freedom to change some assumptions in order to create a more realistic model.

# 3.1 Assumption Modifications

Independent Arrival Processes: We assume that the process determining the arrival of cars through a given entrance street is independent of the processes determining the behavior of other streets and the circle itself.
Thus, the probability of a car approaching the circle from one street does not depend on the probability of a car approaching the circle from a different street, nor does it depend on the probability distribution of how cars enter or leave the traffic circle. This assumption is reasonable, since we would not expect a driver to have prior knowledge of what is happening in the traffic circle at any given moment.

Drivers' Intentions: We assume that every driver in the computer-simulated model wants to leave the traffic circle through a specific exit and to do so in the least amount of time possible. However, since it is quite possible for a driver to be confused or unaware of his or her surroundings, we assign each car a fixed probability of missing its exit. While this allows for the possibility of getting stuck in the circle (reminiscent of Chevy Chase in National Lampoon's European Vacation), the probability of continually missing the exit is vanishingly low. Also, we assume that the driver never takes the wrong exit or alters his or her destination once inside the circle, not only because this makes the model simpler but also because no traffic engineer could possibly gather such information.

Constant Car Length and Speed: We assume that the vehicles going through the traffic circle have fixed length and speed. While it is possible for a car to be very long and occupy more space in the circle, we also reason that another car could be very short and occupy less space, potentially nullifying the effects of the longer vehicle. Also, actual vehicle speed varies between drivers, but adding this variation would introduce unnecessary complexity into the model.

Yield Sign is Optimal for Low Traffic Volume: According to both the literature and common sense, a traffic light in a roundabout with few cars will only hamper traffic flow.
This is clear when one considers that if the roundabout is below capacity, incoming traffic will have little trouble entering if allowed to do so by traffic signals. If the signals periodically prevent cars from entering, this only serves to decrease the efficient flow of traffic.

# 3.2 Computer Simulation of One-Lane Roundabout

Now that we have obtained an analytical model for the expected behavior of the system using Jackson networks, we want to compare these results to a more realistic simulation of traffic flow. We will simulate cars actually arriving at a theoretical traffic circle, entering the circle according to some heuristic process, moving through the traffic circle toward a specific exit, and finally leaving the circle at the desired exit (provided that the driver does not miss his or her turn, which is also possible).

During the simulation, we fix the length of each car at 5 meters. The speed of the cars inside the circle is varied between 8 and $13\ \mathrm{m/s}$ , based on the ranges presented in [8]. The capacity of the roundabout, or the number of cars which can be inside at any one time, is determined by vehicle length, vehicle velocity, and roundabout radius. At full capacity, cars inside the roundabout are spaced by one second of driving, ensuring sufficient space to maneuver.

# 3.3 Description of Simulation Process

Before we simulate the actual flow of traffic through the circle, our simulation determines the exact times that cars arrive at the circle from each entrance street. Because we assumed the arrival processes to be independent, we can simulate the arrival times for each street individually.
Using the principle that, for any random variable $U \sim \text{Uniform}[0,1]$ , the variable $(-\frac{1}{\lambda} \ln U)$ is exponentially distributed with parameter $\lambda$ , we can easily determine the inter-arrival times for each entrance road for the entire time period of inquiry, which will be one 24-hour day in this simulation.

With regard to the inter-arrival times, we wish to vary traffic arrival rates depending on the time of day. To emulate real traffic flow, fewer cars should arrive at night than in the middle of the day, and even more cars should appear during the morning and evening rush hours. To account for this behavior, we scale the peak arrival rate for each street. The scaling function $f(t)$ consists of one relatively narrow Gaussian centered at each rush hour time, and one smaller-amplitude, slowly varying Gaussian centered at midday. This function is plotted in Figure 5 for rush periods of 1 hour each at 8:00 AM and at 5:00 PM. At the time $t$ that each new arrival time is calculated, the rate parameter $\lambda$ is scaled by $f(t)$ . This creates the desired time variation in arrival rates.

![](images/6692923144e45c44f054a7809044e39e2446672ba520739c5ad6c8a5706ce555.jpg)
Figure 5: Time Dependent Arrival Rate Multiplier

The arrival times for each entrance queue are computed and recorded prior to simulation. During the simulation, at each simulated arrival time, we add a vector to the right end of a dynamic matrix representing the entry queue. Each vector corresponds to a car, and contains the parameters that govern each individual car's behavior: arrival time, destination, and probability of missing the exit. The matrix columns represent the order of cars waiting to enter the traffic circle, and because we treat the entrance queue as a "First In, First Out" buffer, only the car represented in the leftmost column of the matrix is allowed to enter the traffic circle.
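The arrival-time generation described above can be sketched as follows. The Gaussian widths, amplitudes, and overnight base rate are illustrative assumptions rather than the calibrated values used in our runs:

```python
import math
import random

def f(t):
    # Time-of-day multiplier (t in hours): a narrow Gaussian at each rush
    # hour (8:00 and 17:00) plus a broad, low bump at midday; widths,
    # amplitudes, and the overnight base rate are illustrative assumptions.
    rush = sum(math.exp(-((t - c) ** 2) / (2 * 0.5 ** 2)) for c in (8.0, 17.0))
    midday = 0.3 * math.exp(-((t - 12.5) ** 2) / (2 * 3.0 ** 2))
    return 0.1 + rush + midday

def arrival_times(peak_rate, hours=24.0, seed=None):
    # Inverse-transform sampling: for U ~ Uniform[0,1], -ln(U) / rate is
    # Exponential(rate); the rate is rescaled by f(t) each time a new
    # arrival is drawn, as described above.
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        rate = peak_rate * f(t)
        t += -math.log(1.0 - rng.random()) / rate
        if t >= hours:
            return times
        times.append(t)
```

A run with a high peak rate produces many more arrivals inside the rush-hour windows than overnight, which is the behavior Figure 5 is designed to induce.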
The destination of each car is determined using a given parameter that represents relative exit popularity. We reason that cars will be slightly less likely to use the circle to make a U-turn (i.e., to exit adjacent to the position where they entered), so we determine probabilistically whether the car will make a U-turn before selecting its destination. For all cars, the probability of a U-turn is 0.05. The popularity distribution is translated into a partition of the interval [0,1]. When a car arrives at the queue, a random variable is simulated; the interval into which it falls determines where the car will exit. When the car arrives at its exit, a random variable $U \sim \text{Uniform}[0,1]$ is simulated, and if this number is less than 0.05, the car misses its exit and stays in the traffic circle.

To simulate traffic moving through the circle, we divide the traffic circle into discrete positions based on the circumference of the circle and the length of the typical car that drives through it. We then number these positions in the direction of the flow of traffic, so that a car in position 1 moves into position 2, a car in position 2 moves into position 3, and so on; a car in the last numbered position returns to position 1. Vectors from the leftmost column of an entry queue matrix are placed into the traffic circle if the entry position and the position immediately behind the entrance are both vacant. Thus, we obtain a "circle matrix" where each column pertains to a position. Moving an entire column of the matrix simulates an individual car's movement through the circle.

At regular time intervals based on the speed of the cars and the size of the circle, we rotate the columns of the matrix according to the method described above (1 to 2, 2 to 3, etc.).
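Two of the mechanisms above, choosing an exit from the popularity partition of $[0,1]$ and rotating the circle one position per step, can be sketched briefly (names and values are illustrative):

```python
import random
from bisect import bisect
from collections import deque
from itertools import accumulate

def choose_exit(popularity, rng=random):
    # popularity: relative weights for the exits; their cumulative sums
    # partition [0, 1], and a uniform draw selects the interval it lands in.
    total = sum(popularity)
    cuts = [c / total for c in accumulate(popularity)]
    return bisect(cuts, rng.random())

# One slot per discrete position; rotating the deque one step moves every
# car from position i to position i + 1 (the last position wraps to 1).
circle = deque([None, "car A", None, "car B"])
circle.rotate(1)
print(list(circle))  # ['car B', None, 'car A', None]
```

In the actual simulation the rotation acts on matrix columns rather than deque slots, but the wrap-around bookkeeping is the same.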
After each rotation interval, cars in the circle check to see if they have reached their destination and, consequently, determine if they successfully exit the circle. Once a car exits, it calculates time spent in the circle by subtracting the arrival time from the exit time. The simulation then erases the values of the vector representing that car's position from the circle matrix to indicate that the car has left the circle. After exiting cars leave, cars waiting to enter the circle make the following two checks, both of which must be satisfied in order to enter:

Check traffic signal: The car checks a "signal matrix" whose rows are indexed by the entrance locations and whose columns represent a fraction of time in a traffic light cycle. Thus, each entry $(i,j)$ of the signal matrix indicates whether the $i^{th}$ light is red or green during the $j^{th}$ signal interval, where each signal interval is 20/round $\left(\frac{\text{carlength+speed}}{\text{speed}}\right)$ seconds long. At the start of the simulation, $j = 1$ ; once one signal interval has elapsed, $j = j + 1$ . This continues until we reach the end of our signal matrix, signifying the end of the traffic light cycle. At that point, $j$ is reset to 1. The time $t$ of the simulation step determines which value of $j$ is used. If the entry of the matrix is a zero, the light is red, and the car may not enter the circle. If the entry is a one, the light is green, and the car may enter the circle if there is space. For each run of the simulation, three signal matrices are used: one for late night/early morning, one for rush hours, and one for midday. A signal matrix whose entries are all identically 1 is referred to as a "yield matrix" because it acts like a yield sign. It should be noted that the late night/early morning signal matrix is always a yield matrix due to the diminished traffic flow.
Check for cars in the circle: Cars that are permitted to enter the circle by the signal matrix must nonetheless yield to traffic already in the circle. As a result, an entering car must check the circle matrix to see whether both the entrance position on the circle and the position before it are unoccupied, so that it neither hits a car in the circle nor cuts one off.

Assuming that both conditions are satisfied, the simulation puts the car into the circle by removing the left-most column of its entry matrix and copying it into the entrance position on the circle matrix. The process of adding vectors to the dynamic matrices at the arrival times, rotating the columns in the circle matrix, removing cars that successfully exit, and adding cars that successfully enter continues until the end of the total interval, taken in this simulation to be a day.

# 3.4 A Metric to Measure Effective Traffic Flow

Now that we have a simulation-based model, we discuss metrics to quantify the effectiveness of a given flow-control method. Since we record the time at which each car finally leaves the traffic circle, we can use this data to determine statistics that describe the process.

Because of the large number of cars that enter and leave our simulation, the average time spent in the system per car for that day is a good estimator of how the simulation behaved. However, cars driving during rush hour should wait longer than cars driving during midday or at night. As a result, the maximum time spent gives us a sense of the worst-case scenario for the flow-control model. After all, a driver waiting for almost an hour to go through one traffic circle probably will not care that the average driver waits for only twenty seconds.
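Computing these two statistics from the recorded times is straightforward; a minimal sketch with illustrative (arrival, exit) records:

```python
# Each record is an (arrival_time, exit_time) pair in seconds; the data
# here are illustrative, not simulation output.
records = [(0.0, 22.0), (10.0, 95.0), (30.0, 41.0), (60.0, 310.0)]

times_in_system = [exit_t - arrival_t for arrival_t, exit_t in records]
average_time = sum(times_in_system) / len(times_in_system)
worst_case = max(times_in_system)

print(average_time, worst_case)  # 92.0 250.0
```

The gap between the two numbers is exactly why we report both: the average hides the rush-hour driver who waits several minutes.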
Another metric that could be considered is the minimum time spent, but this minimum will very likely measure a car arriving at an empty traffic circle at night, possibly because the driver is a student staying up late to work on a modeling contest. With no one in the traffic circle, the driver receives immediate service, and if his or her exit is adjacent to the entrance, there exists a good chance that the driver only spends a few computation cycles in the simulated traffic circle before successfully exiting on the first attempt. Thus, the minimum will not be a very descriptive statistic. The only real conclusion to be drawn from the minimum time is that either the situation described above occurred at some time or, due to random chance alone, no car in the empty system was lucky enough to enter the circle adjacent to his or her exit and successfully leave on the first attempt.

In short, we will evaluate our simulated traffic circles mainly by the average and maximum time spent in the traffic circle. A "good" flow-control system (or signal matrix) will produce lower values for these two statistics, while a "poor" flow-control system will produce higher values.

# 3.5 Justification of Experimental Methodology

Now that we have created a simulation for a traffic circle that uses signal matrices as input, we theoretically could fix all of the other parameters, such as radius and speed, and minimize the average time spent in the system over the three signal matrices. However, there are many reasons why this method would not work. First, we have a discrete input space: all possible matrices in $\mathbb{R}^{N \times \alpha}$ , $\alpha \in \mathbb{Z}_{+}$ , with entries either 0 or 1. We typically used $\alpha = N$ , as we felt that having $N$ intervals gave us enough ability to vary the signal matrix. Because of the nature of the input space, traditional minimization methods would not work.
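Generating candidate signal matrices can be sketched as follows, with the cap on zeros per row an illustrative stand-in for the row-wise constraints we impose:

```python
import random

# Sketch: draw a random 0/1 signal matrix (rows = entrances, columns =
# signal intervals), limiting how many zeros (red intervals) a row may
# contain; the cap of two zeros per row is an illustrative choice.
def random_signal_matrix(n_entrances, n_intervals, max_zeros=2, rng=random):
    matrix = []
    for _ in range(n_entrances):
        zeros = rng.randint(0, max_zeros)
        row = [0] * zeros + [1] * (n_intervals - zeros)
        rng.shuffle(row)
        matrix.append(row)
    return matrix

m = random_signal_matrix(4, 4)
```

Bounding the zeros per row also guarantees that no row is all zeros, i.e., no entrance is red for the entire cycle.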
Furthermore, because our simulation is a random process, the same set of inputs simulated twice would produce two distinct results. While this is representative of how a real traffic circle would behave on various days, it proves problematic for minimization methods. One possible solution would be to minimize the average of a large number of simulation runs on each set of inputs. However, the computational cost of such a method is very large; there are $2^{\alpha N}$ possibilities for each of the three inputs. This strategy could potentially be refined by eliminating impractical signal matrices, but the computational cost to minimize the average of a large number of simulation runs would still be very high.

Due to all the aforementioned difficulties in the use of any traditional minimization method, we took a different approach. Our literature searches and analytic model revealed that yield control is by far the most common and effective form of roundabout traffic flow control. We decided to use our simulation to test the effectiveness of a yield sign versus a traffic light in a large number of experiments. First, we assumed that late at night and early in the morning, when traffic flow is minimal, a yield sign, or perpetually green traffic light, would be the optimal choice. Then, to account for the randomness of our simulation, we decided to run three simulations on each of 100 combinations of matrices, 98 of which were randomly generated. We always compared the random signal matrix results to the yield signal matrix and a fixed non-yield signal matrix, and every matrix set was run on the same roundabout.

We want to eliminate matrices that represent unrealistically long periods of red light.
Thus we require that our midday signal matrix satisfy the following (where $\mathbf{g}_{\text{yield}}$ represents the matrix of all ones and $\mathbf{g}_{\text{mid}}$ is our midday matrix):

$$
\left\| \mathbf {g} _ {\text {yield}} - \mathbf {g} _ {\text {mid}} \right\| _ {\infty} \leq 2 \tag {1}
$$

For our rush hour signal matrix we enforce the following condition (where $\mathbf{g}_{\text{rush}}$ is our rush hour signal matrix):

$$
\left\| \mathbf {g} _ {\text {yield}} - \mathbf {g} _ {\text {rush}} \right\| _ {\infty} \leq 3 \tag {2}
$$

These conditions ensure that there are sufficiently many ones in any given row of the matrix. We enforce slightly different conditions during rush hour and midday because of the decreased traffic volume during midday. We felt that as traffic volume decreased, the necessity of control decreased. If the traffic circle was more likely to be empty, we did not want a car to have to wait at a red light.

# 3.6 Simulation Results, Part 1: Flow Control Considerations

From the analytic model, we concluded that the most effective flow-control model required all incoming cars to yield to cars inside the circle, so we wish to see if this result holds when we use the more complicated simulation. In terms of the simulation variables, we want to know if the yield matrix is the optimal choice of signal matrix.

Using the yield matrix, we ran several simulations using different relative distributions for the input rates from four entrance streets, which is the standard construction of an intersection. The ratio of smallest input rate to largest input rate ranged from 1:1 to 1:8. To gauge their effectiveness in controlling traffic, we plot the number of cars in each part of the system against time. In Figure 7(a), these plots appear for the yield flow-control method using the same entrance rate for all of the side streets.
As one would expect, the peak number of cars in the system occurs during the morning rush hour, when the input rates are highest. Traffic congestion appears in the plots as extreme peaks in the density, and as one can see in the second row from the back, the majority of cars enter the circle almost immediately.

Similar behaviors were observed in plots of the other systems, in which some streets have higher input rates than others. In these systems, we consider a street to have a "major" input rate if its input rate is higher than the input rates of the other streets. In Figure 6(a), two streets have major inputs, but the plots appear almost exactly the same as in Figure 7(a), with only a discrepancy in the peak total density. Furthermore, when we had only one major street, we saw even better performance, as demonstrated by the plots in Figure 6(b). The peaks are significantly decreased, mostly because the sum of the input rates is much lower than in the other systems, but this shows that yield signs are self-regulating enough to be fairly well-behaved in both high-input and low-input systems. We will explore this concept more in a later section.

We now turn our attention to systems without yield signs at all streets. Instead, we install theoretical traffic lights at each entrance that prevent cars from entering on red lights. Essentially, this means that the traffic signal matrices contain both ones and zeros, although we enforce the condition that no row contains all zeros (stopping all traffic). Also, we only use these non-yield flow-control methods during the rush hour and midday periods and use the standard yield matrix during night hours. We make this adjustment because yield matrices are extremely efficient for low input rates.
![](images/a86b0e5e29bafbb1052dce60c0a5418b9bd152f53765c1ee04746090cda6e683.jpg)
(a) Two Major Input Rates

![](images/499cb0354c62cee04f2bbaa32fac01c94d7c9c527e3b4fc044cad727bbbfe032.jpg)
(b) One Major Input Rate
Figure 6: Car Density for Yield Flow-Control

![](images/a7187f1977e33c0d5d6f2b2ae3e7ba69820b58372895fbb7a4039eec3ef8ad1e.jpg)
(a) Yield Flow-Control With Similar Input Rates

![](images/357934a29b31606819a7f39225af2dc29e5d38743aeb37df2ff62e731fbf1afe.jpg)
(b) Non-Yield Flow-Control with Similar Input Rates
Figure 7: Car Density Comparison of Yield versus Non-Yield

Using the same input rates as the system from Figure 7(a), we obtain the density plots found in Figure 7(b) for an arbitrary non-yield traffic matrix. The shapes of the plots appear quite similar to those of the previous systems, but the scaling is quite different. The maximum peak of the total cars in Figure 7(a) barely reaches over 50 cars, but in Figure 7(b), the peak reaches over 70 cars. This result suggests that non-yield flow-control may not be optimal.

However, no conclusions can be drawn from only one trial, so we ran 100 trials of our simulation with different random traffic signal matrices and compared them with another trial using the yield matrix. The results are plotted in Figure 8(a). The horizontal lines indicate the values of the mean (395.8 seconds), median (232.3 seconds), and minimum (22.55 seconds) of the data. Although the granularity of the plot partially hides this fact, we observed that the yield matrix, with an average of 31.83 seconds, was not the best trial during the simulation. In fact, four trials with random, non-yield traffic matrices beat the yield matrix by a margin of about 9 seconds.

Nonetheless, these results do not shatter the conclusions we drew from our analytic model.
Upon further inspection of the matrices that seemed to improve the traffic flow, we noticed that these matrices were extremely similar to the yield matrix, with only one or two zeroes in the entire matrix and no row containing more than one zero. In physical terms, these roundabouts would have only one or two entrances showing a red light one-fourth of the time during a few hours of the day. Also, we only used these matrices during peak traffic hours (representing less than 1/12 of the entire time interval), so the fact that these matrices showed better performance than the yield matrix can be attributed simply to random chance. The overall experimental result is telling: $96\%$ of the trials were significantly worse than the yield matrix. It should also be noted that the "better" matrices improved the process by only a small margin, too small to warrant spending money on installing expensive traffic lights rather than simple yield signs.

We also tested our simulation on a variety of other roundabouts. For five other parameter sets, varying size, speed, and input flow, we ran the same type of experiment with 19 random matrices per parameter set. In each of these trials the yield matrix performed as well as, or better than, any of the signal matrices. One such example is seen in Figure 8(b), where the roundabout contains one large street and four small streets for a total of five inputs and exits. Each point on the figure represents the average time over three trials for a given set of signal matrices. In this example, as in all our trials, using a yield sign all of the time provided the lowest average time. Thus, we base our recommendation on the consistent results over 200 trials across various roundabout designs.
In short, while some non-yield traffic signs may improve performance on any given day, our simulations show that the yield sign system is statistically better than the majority of other flow-control methods, even when given different variations of input rates.

![](images/c9f15b8ca83787e5471e1c67aae55dec34a2131aafe45e32d389f44a14730620.jpg)
(a) 100 Trials

![](images/5d84decd23a9db1223d837cd14fc4821fc927f006d037154d3ff673f5310cae1.jpg)
(b) 20 Trials
Figure 8: Average Time Spent in System with Random Traffic Signal Matrices

# 3.7 Simulation Results, Part 2: Size Considerations

Because larger traffic circles have higher capacities, we also investigate the effects of the traffic circle radius on traffic flow. As the radius of the circle increases, the number of cars that can fit in the circle also increases, so fewer cars should be waiting in the entrance queues, as there will be more gaps between cars in the circle. Thus, if the total input of cars is large, a larger traffic circle should perform better than a smaller one. Of course, if we increase the radius indefinitely, the traffic circle becomes less of a circle and more like a very large one-way street that curves. Cars in that roundabout would take a long time to pass through the system simply because they have to drive farther. Also, larger circles would cost more money and demand more space, so we wish to find an optimum radius for any given situation.

However, we cannot use our simulation to find an exact relation between total input rate and optimum radius. Simulation results may demonstrate typical behaviors, yet the very nature of random simulation prevents the creation of an exact function, say $r(\lambda_T)$ , of optimum radius in terms of total input rate.

Using the yield matrix and a ratio of two major streets with two minor streets, we ran a set of trials while varying the total input rates and the traffic circle radius and recorded the average time spent in the system.
In Figure 9, the flat plane represents well-behaved systems

![](images/7462bf20691dba0676bff00db0396927ba7bfe09a73fa536a563503a478b18a1.jpg)
Figure 9: Average Time Spent in System for Various Input Rates and Circle Radii

where the single-lane roundabouts with larger radii exhibit lower average time spent inside the roundabout. Also, when we fix the radius at a certain value, we see that the average time spent in the system increases as the total input rates increase.

What is most interesting in the plot is the rapid change in behavior once the total input rate goes above 3000 cars per hour, or more than one car every second, and the radius is allowed to vary. We expect more delays as more cars try to enter the system, but we also expect larger radii to decrease the delays with some kind of proportionality. For example, when we fix the total input at 4000 cars per hour, we see that a circle with a radius of 35 meters performed better than a circle with a radius of 30 meters, which is expected. However, the circle with a radius of 40 meters performed worse than both the 35 meter circle and the 30 meter circle, which is entirely unexpected. Thus, we conclude that for total peak flow of less than 3000 cars per hour, increasing radius is directly correlated with decreasing average total time, but at higher flow rates, the correspondence between radius and flow rate becomes erratic.

This unexpected behavior reveals the limitations of our model. A single-lane roundabout with four entrances cannot handle grossly inflated input rates, regardless of size, so areas with extremely high traffic densities need other constructions, such as multiple lanes to increase traffic capacity or express lanes to thin out cars that only need to drive to the next street over. However, our computer simulation model cannot handle these extra cases.
# 4 Strengths and Weaknesses

# 4.1 Analytic Model

The analytic model, although satisfyingly simple to write down and perform calculus upon, is limited in many ways. We sacrificed many kinds of complexity in order to formulate a closed-form stationary distribution, but in the end, the sheer variety of equivalent states that our system could take on thwarted our analysis. Our lower bound calculations for the stationary distribution are elegant but provide a bound that is, according to several numerical trials, an order of magnitude less than the function itself. We could show that the lower bound grows with $g$ for small $k$, but we did not prove that the overall shape of the lower bound always emulates the actual function. We did show that the two functions are behaviorally similar in two specific cases, lending credence to our lower-bound-based optimization.

This model was useful in forming a basis for our computer simulation and narrowing our search for effective flow control systems. There is an alarmingly large number of ways to create a signal matrix; our analytic calculations led us to search around those similar to the yield matrix rather than perform tens of thousands of costly and unproductive calculations.

# 4.2 Computer Simulation

The computer simulation was able to cope with many of the limitations of the analytic model. It introduced time-dependent flow, limited the capacity of the roundabout, and more directly simulated the action of a traffic light as a discrete system rather than a time-averaged parameter. This formulation allowed us to explore a wide range of parameters beyond the convergence constraints of the analytic model. In doing so, it gave us useful insight into the relationships between parameters.

The computer simulation is limited by the vastness of the parameter space.
An optimal signal matrix search could not be implemented with any function-based search algorithm, both because the value of a given signal matrix must be determined by how it performs in a computationally intensive random simulation and because the dimensionality of the variable space is so large. The independent variable space for one signal matrix for a 4-entrance roundabout has 16 dimensions, and the simulation utilizes three different signal matrices during every run. Directly performing a search algorithm on random calculations in this variable space requires computing power far beyond our means. The analytic model was useful, therefore, in restricting our search to signal matrices close to the yield matrix. We ran hundreds of trials with randomly generated signal matrices containing no more than three zeros per row. Within this search space, the yield matrix performed better in the vast majority of cases. Thus, our simulation confirmed that, compared to yield signs, traffic signals have at best comparable efficacy.

The simulation is limited in scope, however. It does not account for pedestrian traffic, for driver mistakes and accidents, or for the effects of weather conditions and other factors. It is also limited to roundabouts with only one lane. As Figure 9 shows, for our model, flow rates in excess of 2500 cars per hour will clog the roundabout under any input control. To some extent, the effect of increasing flow can be mitigated by increasing roundabout radius; however, for flow rates in excess of 3000 cars per hour, we believe a two-lane roundabout would be necessary. A simple case of this would be a roundabout with outer "express" lanes from which a vehicle may travel only from one entrance to the next exit, such as in Figure 10. In this case, traffic signaling would always impair flow, because the "express" lanes are always vacant for an entering vehicle. More complex scenarios would require a more sophisticated model.
Such cases include multi-lane roundabouts in which vehicles are free to merge from lane to lane throughout, or large roundabout systems composed of multiple smaller roundabouts.

![](images/1ae7b5ea18818efce5a5083cdd60097b4f48837a1293832274a1abdcaa12f031.jpg)
Figure 10: An "Express" Roundabout

# 5 Conclusion

Our search through literature, parameter space, and computer-generated experimental results brought us to a conclusion validated in intersections across the United States: yield sign control is nearly always the best way to regulate roundabout entry. Our analytic formulation of a Jackson network led us to calculate that the optimal choice of thinning parameter for the system corresponds to a yield sign at each entrance. Our computer simulation concluded that single-lane roundabouts with yield signs at all entrances are extremely well-behaved at input rates below 3000 cars per hour, although higher flow rates may necessitate additional lanes. Although we concede that extenuating circumstances may require the use of a traffic signal, in the vast majority of cases a yield sign is the most effective method of flow control in a roundabout.

# Technical Summary

This analysis is best suited for single-lane roundabouts that are not unduly affected by neighboring traffic installations. In order to analyze roundabout efficiency and recommend an appropriate flow control method, the following information must be obtained:

Number of Entrances and Exits: Include relative popularity of all exits.

Separation Distance Between Entrances and Exits: This can be reported as an angular or a circumferential distance.

Peak Input Flow from Each Entrance: These numbers should be reported as vehicles per hour, sampled during "rush hour".

Roundabout Radius: This is the distance from the center of the roundabout to the center point of the vehicle lane.

Peak Flow Times: This model requires the times of peak flow and average duration.
In this model, traffic flow follows a basic "rush hour" pattern; that is, there are two periods during the day at which peak flow rates occur; at other times, flow is scaled down, most heavily during the late night and early morning hours. For situations with less noticeable rush-hour patterns, a large "duration" parameter will smooth out flow variation over the course of the day.

This model outputs the average time spent in the roundabout or waiting to enter the roundabout. Optimal efficiency is defined as minimizing this time. Our models of roundabout flow indicate that, in the vast majority of cases, the optimal mechanism for flow control is a yield sign at each entry point. This is largely because a traffic signal acts as a restricted yield sign, tending to impair traffic flow whenever volume dips below peak flow. We recommend a yield sign if your parameters fit into the ranges below:

Any Number of Input Lanes: Two or more. Variation of this parameter does not alter flow control recommendations.

Adequate Separation Between Entrances and Exits: As long as there is enough space to fit all entrances and exits on the roundabout circumference, this parameter does not affect flow control recommendations.

Low to Medium Total Peak Input Flow: The sum of the inputs is between 200 and 3000 cars per hour.

Relative Intensity of Input Flow: The maximum ratio of peak input flows from any two entrances is less than 8:1. For larger discrepancies, more analysis may be necessary.

Slow to Medium Roundabout Travel Velocity: Vehicles in the roundabout move at speeds below $30\mathrm{mph}$, with slower speeds for smaller roundabouts.

Roundabout Radius: The roundabout radius is between $15\mathrm{m}$ and $45\mathrm{m}$.

Peak Flow Times: This parameter does not in general affect efficiency.
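The parameter screen above can be condensed into a small checker. The function and field names below are our illustrative additions, not part of the paper's model code:

```python
# Hypothetical helper encoding the recommendation ranges listed above.
# Returns True when a yield sign at each entrance is the recommended
# flow control, per the parameter screen in the technical summary.
def yield_sign_recommended(total_peak_flow, max_flow_ratio,
                           radius_m, speed_mph):
    return (200 <= total_peak_flow <= 3000   # low-to-medium total input
            and max_flow_ratio < 8           # no entrance dominates 8:1
            and 15 <= radius_m <= 45         # typical single-lane radii
            and speed_mph < 30)              # slow-to-medium travel speed
```

For example, a roundabout with 2000 cars/hour total peak flow, a 3:1 flow ratio, a 30 m radius, and 20 mph travel speed falls in the yield-sign regime, while one with 5000 cars/hour does not.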
For all combinations of size, velocity, and flow within these ranges, yield signs invariably proved to be the best solution. Furthermore, these parameter ranges represent the full spectrum of radii, input flows, and velocities commonly found in U.S. roundabouts [7].

There are a few situations where signaled entry may be appropriate. If a certain entry point is subject to highly concentrated flow during very short periods (such as a sports venue emptying out after a game), a traffic signal during those times may permit efficient release of cars. We recommend, however, that the light be left entirely green at all other times. If the roundabout experiences a large volume of pedestrian traffic, signals (or stop signs) may be a reasonable way to decrease accident likelihood, but this determination requires analysis of pedestrian habits and attitudes.

If location permits, other design elements may be modified to improve roundabout efficiency. For traffic flows of up to 3000 vehicles per hour, single-lane yield-controlled roundabouts are very efficient. For peak flow up to 2000 vehicles per hour, a roundabout of $20 - 40\mathrm{m}$ radius will be sufficient to ensure average total roundabout wait and travel time below 20 seconds. For peak flows between 2000 and 3000 vehicles per hour, we recommend planning for a 40-60 m roundabout radius, which will keep average wait and travel time below 25 seconds. As flow increases beyond 3000 vehicles per hour, average wait and travel time will increase very quickly, and we recommend designing a multiple-lane roundabout if possible.

# References

[1] A. Bouchouch, Y. Frein, and Y. Dallery. A decomposition method for the analysis of tandem queueing networks with blocking before service. In Proceedings of the Second International Conference on Queueing Networks With Finite Capacity (Onvural and Akyildiz 1992), pages 97-112. North-Holland.
[2] Rick Durrett. Essentials of Stochastic Processes. Springer, 1999.
[3] Nina McLawhorn. Roundabouts and Public Acceptance. http://on.dot.wi.gov/wisidotresearch/database/tsrs/tsrroundabouts.pdf, November 2002.
[4] J. Medhi. Stochastic Models in Queueing Theory. Academic Press, San Diego, CA, 2003.
[5] Arizona Department of Transportation. Modern Roundabouts. http://www.ihs.org/research/qanda/roundabouts.html, 2009.
[6] Tim Padgett. You Want a Revolution. http://www.time.com/time/magazine/article/0,9171,1838753,00.html, September 2008.
[7] National Cooperative Highway Research Program. Modern Roundabout Practice in the United States. National Academy Press, Washington, D.C., 1998.
[8] Bruce W. Robinson. Roundabouts: An Informational Guide. http://www.tfhrc.gov/safety/00068.html, June 2000.
[9] Sheldon Ross. A First Course in Probability. Prentice Hall, New Jersey, 2006.

# Summary

Our goal was to design a model that could account for the dynamics of vehicles in a traffic circle. We mainly focused on the rate of entry into the circle to determine the best way to regulate traffic. We assumed that vehicles circulate in a single lane and that only incoming traffic can be regulated (i.e., the incoming traffic never has the right-of-way).

For our model, the adjustable parameters were the rate of entry into the queue, the rate of entry into the circle (service rate), the maximum capacity of the traffic circle, and the rate of departure from the circle (departure rate). We used a compartmental model with the queue and the traffic circle as compartments. Vehicles first enter the queue from the outside world, then enter the traffic circle from the queue, and lastly exit the traffic circle to the outside world.
We modeled both the service rate and the departure rate as dependent on the number of vehicles inside the traffic circle.

In addition, we ran computer simulations to provide a visual representation of what happens in traffic circles in different situations. This allowed us to examine different cases, such as unequal traffic flow coming from the different queues or some intersections having a higher probability of being a vehicle's destination than others. The simulation also implements several life-like effects, such as the way vehicles accelerate on an empty road but decelerate when another vehicle is in front of them.

In many cases, we found that a fast service rate was the optimal way to maintain traffic flow, signifying that a yield sign for incoming traffic would be most effective. However, when the circle became more heavily trafficked, a slower service rate better accommodated the traffic, indicating that a traffic light should be used. Thus, a light should be installed in most circle implementations, with variable timing depending on the expected amount of traffic.

The main advantage of our approach was that the model was very simple and allowed us to clearly see the dynamics of the system. Also, the computer simulations that we ran provided more in-depth information about traffic flow under conditions that the model could not easily capture, and enabled visual observation of the traffic. Some disadvantages of our approach were that we could not analyze the effects of multiple lanes or of stop lights controlling the flow of traffic inside the circle. In addition, we had no way of analyzing singularities in the situation, such as pedestrians or vehicles that drive faster or slower than the rest of the traffic.
# One Ring to Rule Them All: The Optimization of Traffic Circles

MCM Contest Question A

Team # 5180

February 9, 2009

# Contents

1 Introduction 1
2 The Models 3

2.1 A Simplified Model 3
2.2 An Intermediate Model 3
2.3 A Congestion Model 4
2.4 Extending the Model Using Computer Simulation 4

2.4.1 Simulation Assumptions 5
2.4.2 Limitations 7

3 Analyzing the Models 8

3.1 The Simplest Model 8
3.2 Intermediate Model 8
3.3 Congestion Model 9
3.4 Simulation Results 12

4 Conclusion 15
5 Technical Summary 18

# 1 Introduction

Traffic circles, often called rotaries, are used to control vehicle flow through an intersection in both small towns and large cities. Depending on the goal of the rotary, it can take different forms.

![](images/1a76de78de4d01989d68e8dca6503bcd9a9e21a2e87ab0e05db55ed216a303a5.jpg)
Figure 1: This figure illustrates a simple traffic circle. Traffic circles may have more than one lane and may have a different number of intersections.

Figure 1 shows a simple model of a traffic circle upon which variations build. A circle can have one or more lanes; vehicles that enter the traffic circle can be met by a stop sign, a traffic light, or a yield sign; the circle can have a large or small radius; and a circle can confront roads carrying different amounts of traffic. These features affect the cost of building the circle, the congestion that a vehicle confronts as it circles the rotary, the travel time of a vehicle in the circle, and the size of the queue of vehicles waiting to enter the rotary. Each of these variables could serve as a metric for evaluating the efficacy of a traffic circle.

Our goal is to determine how best to control traffic flow entering, exiting, and traversing a traffic circle. The design that we created modeled vehicles circling a rotary.
We modeled the dynamics of a traffic circle by taking as given the traffic circle capacity, the arrival rates at each of the roads, the rate of departure from the rotary at each road, and the initial number of vehicles circulating in the rotary. Our metric is the queue length, or buildup, at each of the entering roads. We try to minimize the queue length by allowing the rate of entry from the queue into the circle to vary. In order for a vehicle to traverse the rotary efficiently, its time spent in the queue should be minimized.

Our approach and models make the following assumptions:

- We assume a certain time of day so that the parameters are constant.
- There is a single lane of circulating traffic (all moving in the same direction).
- Nothing impedes the exit of traffic from the rotary.
- There are no singularities such as pedestrians trying to cross.
- The circulating speed is constant (i.e., a vehicle does not accelerate/decelerate to enter/exit the rotary).
- Any traffic light in place regulates only traffic incoming to the circle.

# 2 The Models

# 2.1 A Simplified Model

We modeled the system as continuous; this can be thought of as modeling the vehicle mass dynamics of a traffic circle. The simplest model that we created assumed that the rate of arrival to the back of the entering queue and the rate of departure from the queue into the traffic circle were given and independent of time $t$. Thus, the rate of change in the length of the queue is given by

$$
\frac {d Q _ {i}}{d t} = a _ {i} - s _ {i} \tag {1}
$$

where $Q_{i}$ is the length of the queue coming in from the $i^{th}$ road, $a_{i}$ is the rate of arrival of vehicles into the $i^{th}$ queue, and $s_{i}$ is the rate of removal, also called the service rate, from the $i^{th}$ queue into the traffic circle.

We also modeled the vehicular flow circulating the traffic circle.
To do this, we introduce the parameter $d_{i}$, the rate at which vehicles exit the traffic circle. We let $C$ be the number of vehicles traveling in the circle. Then we model the change in traffic in the rotary by the difference between the influx and outflux of vehicles, where the outflux is dependent on the amount of traffic in the rotary:

$$
\frac {d C}{d t} = \sum s _ {i} - C \sum d _ {i}. \tag {2}
$$

# 2.2 An Intermediate Model

The model proposed above simplifies the dynamics of a traffic circle. Its most glaring simplifications are that there is no way to indicate that the circle has a maximum capacity and that the flow rate into the traffic circle $s_i$ is not dependent on the amount of traffic already circulating. Both are corrected by proposing that the traffic circle has a maximum capacity $C_{\max}$. As the number of vehicles circling approaches this maximum capacity, it should become more difficult for another vehicle to merge into the circle. At the extreme, when the traffic circle is operating at capacity, no more vehicles should be able to be added. Now, the $s_i$ in the previous model can be represented logistically as $s_i = r_i(1 - \frac{C}{C_{\max}})$, where $r_i$ is how fast vehicles would join the circle if there were no traffic slowing them down. Thus, the equation governing the rate at which the $i^{th}$ queue length changes becomes

$$
\frac {d Q _ {i}}{d t} = a _ {i} - r _ {i} \left(1 - \frac {C}{C _ {\max }}\right). \tag {3}
$$

The equation governing the number of vehicles in the traffic circle becomes

$$
\frac {d C}{d t} = \sum r _ {i} \left(1 - \frac {C}{C _ {\max }}\right) - \sum d _ {i} C. \tag {4}
$$

# 2.3 A Congestion Model

The previous two models failed to take congestion into account. We consider that congestion will alter the circulation speed.
The circulation speed directly affects the departure rate $d_{i}$ of the vehicles from the circle. Eq. 3 above still holds, but we need some way to indicate that $d_{i}$ is not fixed. The vehicles will travel faster if there is no congestion, so they will be able to depart at their fastest rate $d_{i,\max}$. When the circle is operating at maximum capacity, the departure rate decreases to $d_{i,\min}$. Then, the number of vehicles present in the circle is affected positively in the same manner as in Eq. 4, but the lessening factor changes to a weighted average of $d_{i,\max}$ and $d_{i,\min}$:

$$
\frac {d C}{d t} = \sum r _ {i} \left(1 - \frac {C}{C _ {\max }}\right) - C \left(\sum d _ {i, \max } \left(1 - \frac {C}{C _ {\max }}\right) + \sum d _ {i, \min } \left(\frac {C}{C _ {\max }}\right)\right). \tag {5}
$$

# 2.4 Extending the Model Using Computer Simulation

Concurrent and supplemental to our mathematical model, we created a computer simulation in MATLAB®. The simulation was designed to account for variables that would be too complicated to use in the mathematical model. That model does not deal with the vehicles' speeds while inside the traffic circle, so the computer simulation focused mostly on areas related to vehicle speed. The computer simulation focused on the following:

- Enabling drivers to accelerate to fill gaps in the traffic (with a maximum speed)
- Forcing drivers to decelerate to maintain distance between cars
- Requiring that drivers accelerate and decelerate when entering and exiting the circle
- Giving probabilistic weights to the different directions of travel
- Keeping track of time spent within the traffic circle for each vehicle
- Giving each intersection a different vehicle introduction rate

Figure 2 on page 6 shows an outline of the program flow and design.
# 2.4.1 Simulation Assumptions

This model makes several key assumptions about the vehicles:

- There is only one lane of traffic
- No traffic control signals exist within the traffic circle
- Vehicles are all the same size
- Vehicles all have the same top speed
- Vehicles all accelerate and decelerate at the same rate
- Drivers all have the same spatial tolerance
- There are only four intersections
- There is only one circle size
- There are no pedestrians trying to cross the circle

![](images/61c8e95b52722cfc58e09444aa87e341d7543d1d347be52715373b30cee50397.jpg)
Figure 2: Chart showing program flow. Each intersection is modeled as a queue of vehicles with a traffic control device. Vehicles are added to the queue at a constant rate. For a vehicle to leave the queue and enter the traffic circle, the area in the circle must be clear of other vehicles. Additionally, if the queue has a stop light, the light must be active.

# 2.4.2 Limitations

The assumption of one lane is not a key factor because of our other assumptions. Since we do not allow for different vehicle speeds, we do not need to put the slow vehicles in one lane with the fast vehicles passing them in another lane. However, we do have slower vehicles whenever they become backed up behind vehicles trying to exit, or begin to exit themselves. This would be an opportunity for another vehicle to use a different lane to maintain its faster speed. Additionally, we cannot let emergency vehicles through the circle if there is only one lane: the vehicles present in the circle have no way to let the emergency vehicle by unless they exit. For a more detailed discussion of emergency vehicles and traffic circles, see [2].

By not allowing traffic control devices inside the traffic circle, we restrict the possible circle configurations we can explore. We also limit the effectiveness of our stoplight model.
The model only prevents vehicles from entering the circle; it does not stop vehicles that are currently in the circle. It acts much like a stop light on a busy highway that restricts the flow of vehicles entering the highway but does not affect the vehicles already on it.

Since we do not allow for different vehicle properties (size, acceleration, top speed, etc.), we cannot model the effect of large trucks, motorcycles, or other nonstandard vehicles on the flow of traffic. Since emergency vehicles are often large and slow, this is another factor preventing us from modeling them.

Giving all of the vehicles the same acceleration and top speed, along with forcing all of the drivers to have the same spatial tolerance, prevents us from modeling aggressive drivers and their interaction with timid drivers. One kind of aggressive driver might only have a smaller spatial tolerance (putting their vehicle in smaller gaps in traffic), while another might only accelerate and decelerate faster. It would be interesting to see how these accelerations affect the bunching together of vehicles, discussed on page 12 in section 3.4. Additionally, since cars decelerate before exiting even if they are already moving slowly, we generate small amounts of false traffic backups.

Only having four intersections limits our applicability in the real world. If the circle is created to help slow traffic, it is likely to have only four intersections [2]. However, many traffic circles are not created just as a speed control device, and they often contain more than four intersections. Without further testing, we cannot determine how adding another intersection would actually affect the flow of traffic. We think it is likely that adding an intersection would give rise to a change in traffic flow similar to that from increasing one intersection's service rate. However, we have to consider the fact that there is another exit for the vehicles as well.
Without further development, this simulation is limited to four intersections.

Limiting the size of the circle does not really limit our ability to model real-world traffic circles. Since we are mostly looking at driver behavior with the computer simulation, we should see the same behaviors as we scale up the circle and its corresponding traffic.

# 3 Analyzing the Models

# 3.1 The Simplest Model

In all of the above models, the rate $r_i$ is indicative of the sort of regulation imposed at the $i^{th}$ intersection. A near-zero $r_i$ indicates that a traffic light is in use. A larger $r_i$ indicates that a yield sign regulating only the incoming traffic is in place.

For the simplest model, we were able to find explicit formulae for the queue length and the number of vehicles in the rotary by integrating with respect to time. We found that

$$
Q _ {i} = \left[ a _ {i} - s _ {i} \right] t + Q _ {i 0}, \tag {6}
$$

and

$$
C = \frac {\sum s _ {i}}{\sum d _ {i}} + \left(C _ {0} - \frac {\sum s _ {i}}{\sum d _ {i}}\right) e ^ {- \sum d _ {i} t}. \tag {7}
$$

Therefore, given all of the inputs of the system, we would be able to predict the queue length. We note that to minimize the queue length, we solve Eq. 1 for when the queue length is decreasing, $\frac{dQ_i}{dt} < 0$. This indicates that in order to minimize the queue length, the $s_i$ term should be maximized.

# 3.2 Intermediate Model

For the intermediate model, we added a carrying capacity to the system. Again, the model was simple enough for us to find explicit formulae for the queue length and the number of vehicles in the rotary by integrating with respect to time.
We found that

$$
Q _ {i} = \left[ a _ {i} - r _ {i} \left(1 - \frac {C}{C _ {\max }}\right) \right] t + Q _ {i 0}, \tag {8}
$$

and

$$
C = \frac {\sum r _ {i}}{\frac {\sum r _ {i}}{C _ {\max }} + \sum d _ {i}} + \left(C _ {0} - \frac {\sum r _ {i}}{\frac {\sum r _ {i}}{C _ {\max }} + \sum d _ {i}}\right) e ^ {- \left(\frac {\sum r _ {i}}{C _ {\max }} + \sum d _ {i}\right) t}. \tag {9}
$$

Therefore, if given the other inputs of the system, we would be able to predict the length of the queues. We could also solve for where Eq. 3 is less than zero to find for what service rate the queue lengths are decreasing. We found that the queue length decreases when

$$
r _ {i} > \frac {a _ {i}}{1 - \frac {C}{C _ {\max }}}. \tag {10}
$$

# 3.3 Congestion Model

When we began to model congestion, the model was sufficiently complex that we were unable to intuit what conditions would optimize (minimize) the queue length. We noticed that the differential equation Eq. 5 was quadratic:

$$
\frac {d C}{d t} = A C ^ {2} + B C + D, \tag {11}
$$

where

$$
A = \frac {\sum d _ {i , \max }}{C _ {\max }} - \frac {\sum d _ {i , \min }}{C _ {\max }},
$$

$$
B = - \left(\frac {\sum r _ {i}}{C _ {\max }} + \sum d _ {i, \max }\right),
$$

$$
D = \sum r _ {i}.
$$

Since $\sum d_{i,\max} > \sum d_{i,\min}$, it will always be the case that $A > 0$. In addition, $B < 0$ and $D > 0$. This means that the curve for $\frac{dC}{dt}$ is a concave-up quadratic with a positive y-intercept and a global minimum at some $C > 0$. Furthermore, notice that for $C = C_{\max}$, $\frac{dC}{dt} = -C_{\max} \sum d_{i,\min}$, which is always negative for $\sum d_{i,\min} > 0$. Thus, the global minimum of the curve must lie in the fourth quadrant. Figure 3 shows an example of such a curve using sample parameters.
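As a sanity check (our sketch, not part of the original paper), the closed form in Eq. 9 can be compared against a direct forward-Euler integration of Eq. 4; the parameter values below are illustrative:

```python
import math

# Illustrative sample parameters: four entrances with equal rates.
r = [60.0] * 4           # free-flow entry rates r_i
d = [2.0] * 4            # departure rates d_i
C_max, C0 = 30.0, 5.0    # circle capacity and initial occupancy

k = sum(r) / C_max + sum(d)   # decay constant appearing in Eq. 9
C_star = sum(r) / k           # long-run equilibrium occupancy

def C_closed(t):
    """Closed-form solution of Eq. 4, i.e. Eq. 9."""
    return C_star + (C0 - C_star) * math.exp(-k * t)

# Forward-Euler integration of dC/dt = sum(r_i)(1 - C/C_max) - sum(d_i) C.
C, dt = C0, 1e-4
for _ in range(int(2.0 / dt)):   # integrate out to t = 2
    C += dt * (sum(r) * (1 - C / C_max) - sum(d) * C)
```

With these numbers $k = 16$ and the equilibrium is $C^* = 15$ vehicles; the numerical and closed-form values agree closely at $t = 2$.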
We notice from Figure 3 that there are two equilibrium points for this differential equation: $C = \frac{-B - \sqrt{B^2 - 4AD}}{2A}$ is a stable equilibrium point and $C = \frac{-B + \sqrt{B^2 - 4AD}}{2A}$ is an unstable equilibrium point. Also, since $\frac{dC}{dt} < 0$ for $C = C_{\text{max}}$, the number of vehicles will eventually decrease to an equilibrium value less than $C_{\text{max}}$. We will call this equilibrium point for the number of vehicles $C_{\text{limit}}$.

![](images/a00c5cdc41fe8229d034806e2a61aeab6260ef94ceebeccee08bcfe6b5af5c40.jpg)
Figure 3: This graph shows the relationship between $\frac{dC}{dt}$ and $C$ for the congestion model using some sample parameters: $r_1 = r_2 = r_3 = r_4 = 60$, $d_{1,max} = d_{2,max} = d_{3,max} = d_{4,max} = 2$, $d_{1,min} = d_{2,min} = d_{3,min} = d_{4,min} = 0.5$, and $C_{max} = 30$.

Since our metric for how well a traffic circle performs depends on how many vehicles are in the queues, we would like the queue flow $(a_{i} - s_{i})$ to be as small as possible. In other words, we would like $s_{i}$ to be as large as possible. In the congestion model, this is given by Eq. 3.

Without loss of generality, we can analyze queue 1, because the equations for each queue only differ by their $a_i$ and $r_i$. We will keep these constant among the queues in the mathematical simulations. Since the only changing variable in Eq. 3 is $C$, when $C = C_{\text{limit}}$, $Q_1$ will also be at its equilibrium.

Using this fact, we can evaluate whether we would like to use a traffic light and for how long the light would be red. Thus, we compared different values of the service rate constant $r_1$ with the value of $\frac{dQ_1}{dt}$ at $C = C_{limit}$. The results can be seen in Figure 4. The graph shows that as $r_1$ increases, $\frac{dQ_1}{dt}$ decreases.
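With the Figure 3 sample parameters, the two equilibria can be computed directly from the quadratic formula (our sketch; the summed values assume all four entrances share the same rates):

```python
import math

# Figure 3 sample parameters: r_i = 60, d_i_max = 2, d_i_min = 0.5, C_max = 30.
sum_r, sum_dmax, sum_dmin, C_max = 240.0, 8.0, 2.0, 30.0

A = (sum_dmax - sum_dmin) / C_max     # 0.2  (> 0, curve opens upward)
B = -(sum_r / C_max + sum_dmax)       # -16  (< 0)
D = sum_r                             # 240  (> 0, positive y-intercept)

disc = math.sqrt(B * B - 4 * A * D)   # discriminant root = 8
C_stable = (-B - disc) / (2 * A)      # stable equilibrium (C_limit)
C_unstable = (-B + disc) / (2 * A)    # unstable equilibrium

def dCdt(C):
    """Right-hand side of Eq. 11 for these parameters."""
    return A * C * C + B * C + D
```

Here the stable equilibrium is $C_{\text{limit}} = 20 < C_{\max} = 30$ (the unstable root is 60), consistent with the claim that the circle settles below capacity.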

A situation that occurs in real life is congestion of the traffic circle causing the vehicles in the circle to move very slowly. Decreasing the value of $d_{1,\min}$ causes vehicles to depart the traffic circle at a slower rate when there is more congestion in the circle. Using lower departure rates to approximate slower vehicle speeds inside the traffic circle, we can examine what happens for decreasing values of $d_{1,\min}$. The results are shown in Figure 5. For values of $d_{1,\min} < 0.5$, the smallest value of $\frac{dQ_1}{dt}$ occurs not at $r_1 = 60$ but at smaller values of $r_1$.

![](images/4dd24e310a4ecce7c5fdb8d77f64649f54cfa965ce5ed8af478a82cf94fd7351.jpg)
Figure 4: This graph shows the relationship between $r_1$ and $\frac{dQ_1}{dt}$ for the congestion model with $C = C_{\text{limit}}$. The constant parameter values are $d_{i,\max} = 2$ and $d_{i,\min} = 0.5$ for $i = 1, \ldots, 4$, and $C_{\max} = 30$. The values of $r_1, r_2, r_3, r_4$ range from 1 to 60.

![](images/230936076a327304c43e634824acee2e1c69ea3261fb79c36eb305435553c4fa.jpg)
Figure 5: This graph shows the relationship between $r_1$ and $\frac{dQ_1}{dt}$ for the congestion model with $C = C_{\text{limit}}$. The constant parameter values are $d_{i,\max} = 2$ for $i = 1, \ldots, 4$ and $C_{\max} = 30$. The values of $r_i$ range from 1 to 60 for different values of $d_{i,\min}$.

![](images/5fa08a0b0d461ba3b2fe098ba01d35541f592e72983d80c8bd8e501d085fd9c1.jpg)
Figure 6: This graph shows the relationship between $r_1$ and $\frac{dQ_1}{dt}$ for the congestion model with $C = C_{\text{limit}}$. The constant parameter values are $d_{i,\max} = 2$ and $d_{i,\min} = 0.5$ for $i = 1, \ldots, 4$. The values of $r_i$ range from 1 to 60 for different values of $C_{\max}$.

Another situation that the congestion model can approximate is the addition of extra lanes. We can make a crude approximation of this by assuming that each added lane increases the capacity by the capacity of a single lane: if $C_{\max} = 30$ for one lane, then $C_{\max} = 60$ for two lanes. Figure 6 shows the results of plotting $r_1$ against $\frac{dQ_1}{dt}$ for different numbers of lanes. As in the previous plots, the correlation is negative.

# 3.4 Simulation Results

One of the most interesting effects we saw in our MATLAB simulation was the buildup of vehicles in front of each of the exits. As vehicles slow down in preparation for their exit, they force the vehicles behind them to decelerate in order to maintain a safe distance. This buildup lengthens the queue at the intersection before the exit, because the buildup occupies the space those vehicles would use to leave their queue. In Figure 7, we can see the large number of vehicles in the fourth queue and the buildup in the fourth quadrant.

Another real-world phenomenon that the simulation reproduces is the bunching and expanding that groups of vehicles experience. Because vehicles can decelerate more quickly than they accelerate, they bunch up behind a slow-moving vehicle and then spread out again as that vehicle accelerates into the free space ahead. Figure 8 shows an example of this compaction.

![](images/bf03c63fceaaeed6e31e51047f52e000d0b52ce984cb9e4943c0322a6d508954.jpg)
Figure 7: The buildup of vehicles in front of the first intersection as vehicles slow down to exit. Additionally, the queue at the fourth intersection is quite long because vehicles cannot enter the full traffic circle.

![](images/4f4b5bfe1f9c50f4369a723bc0dd2c06888d1bbb9205e4d039a601624ee96e22.jpg)

![](images/7b2c08256dc1bbdb2a7838b26b455b3b4a29bdb4170cf2a6a8b263c9432d521e.jpg)
Figure 8: The arrow points out bunching occurring in the second quadrant. Bunching happens because drivers decelerate faster than they accelerate.

We tested several different traffic circle and vehicle setups to explore the problem of optimal circle design. The first setup we tested was a single intersection with high arrival and service rates. It created a large traffic buildup in the quadrant immediately following it, even though the vehicles all had random destinations: the effect of random destinations could not overcome the vehicles slowing down to exit. Interestingly, the queue for that intersection was not appreciably longer than the other queues. Figure 9 shows the buildup in quadrant 1 when the first intersection (at angle 0) had high arrival and service rates, and also shows that queue 1 was not appreciably longer than the others.

![](images/044cdcc52e4b14061fb5be5954024483cd88bea5c2310df498ff6b992561d3d3.jpg)
Figure 9: The first intersection has both high arrival and service rates. This creates a traffic buildup before the next intersection. However, the queue for the first intersection does not increase, since there is limited traffic coming from the intersection behind it.

![](images/6fbf4f82f47c1a66f58ea9873ad73c78576c8c988ce70b5e01b099dfd9f10394.jpg)

Our second test focused on one intersection having a much higher chance of being a destination. This is quite possible in the real world if one of the roads leads to an important commercial center or highway. This creates the expected buildup in front of the likely exit, as seen in Figure 10. However, it also creates a substantial buildup in front of the previous exit, and a severe buildup in that intersection's queue as vehicles are prevented from entering.
The buildup in the adjacent road must be taken into account when constructing a traffic circle with a high-volume intersection.

We also tested the outcome when one intersection had a high service rate and the standard arrival rate, and another intersection had a high arrival rate and the standard service rate. The traffic distribution was mostly random, with a slight tendency toward backups in the quadrant following the intersection with the high service rate. This was expected, since the intersection with the high service rate could only add as many vehicles as it had in its queue, which was limited by its low arrival rate. The intersection with the high arrival rate and low service rate also had a much larger queue than the other intersections, entirely as expected.

![](images/695e54a3badff80e89e4671b58e1d0601559017461cecabeeffb8b058895e3d9.jpg)
Figure 10: The first intersection has a higher probability of being chosen as a destination. This creates a buildup in front of that intersection and a smaller buildup in front of the previous intersection. It also creates a very large increase in the queue of the previous intersection, since those vehicles cannot enter the full circle.

![](images/e28f0e17607017b5a2978340b74bf1581476b127af2fb116a958c1ebf08906b0.jpg)

# 4 Conclusion

We modeled the dynamics of a traffic circle in order to determine how best to regulate traffic entering the circle. As shown in Figure 6, increased capacity decreases the queue flow, which leads to a decrease in queue size. This indicates that a multiple-lane traffic circle will accommodate more people by decreasing the length of the queue in which they must wait. However, as shown in the same figure, the marginal utility of increasing the maximum capacity does decrease. With a cost function in which cost varies proportionally with the amount of space the circle occupies, there would exist an optimal size for the traffic circle.

Although the simpler models that we created indicate that letting vehicles into the rotary as fast as possible would be optimal, analysis of the congestion model showed that if $d_{i,\min}$ is sufficiently small given all other parameters, then the highest service rate is no longer optimal. The implication of this result for construction is that traffic lights could make travel through the rotary more efficient in certain cases. During times when many vehicles use the traffic circle, such as the morning and evening commutes, there are enough vehicles that $C_{\text{limit}}$ is reached. In this case, using stop lights would help to alleviate congestion. However, the duration of the red light should be adjusted according to the $d_{i,\min}$ of the specific traffic circle.

In addition to the mathematical models, we created a computer simulation that tracked individual vehicles' progress through the traffic circle and their effect on other vehicles. Our simulation showed several traffic effects that can be observed in real life, namely a buildup of vehicles in front of the exits, and vehicles bunching together and spreading apart as drivers accelerate and brake. We also tested several traffic circle configurations. If a single intersection had both a high service rate and a high arrival rate, traffic built up heavily in the area of the circle immediately following it. However, if an intersection had only a high service rate or only a high arrival rate, the resulting traffic was mostly random, and only the intersection with the high arrival rate built up a large queue. Our final configuration gave one intersection a higher likelihood of being a destination. This created a traffic buildup in front of that intersection, but also created a very large queue at the previous intersection. This large queue buildup should be considered when building any circle with more popular intersections.
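The equilibrium behavior described in this conclusion can be checked by integrating the model's rate equations directly. Below is a minimal forward-Euler sketch in which every parameter value, especially the arrival rates $a_i$, is an illustrative assumption rather than data from the paper:

```python
# Forward-Euler integration of the queue/circle rate equations.
# All parameter values here are illustrative assumptions.
r, a = [60.0] * 4, [15.0] * 4
d_max, d_min = [2.0] * 4, [0.5] * 4
Cmax, dt = 30.0, 0.001
C, Q = 0.0, [40.0] * 4

for _ in range(10_000):  # integrate 10 time units
    free = 1.0 - C / Cmax
    dC = sum(r) * free - C * (sum(d_max) * free + sum(d_min) * (C / Cmax))
    dQ = [a[i] - r[i] * free for i in range(4)]
    C += dC * dt
    Q = [max(0.0, Q[i] + dQ[i] * dt) for i in range(4)]

# C settles at the stable equilibrium (20 vehicles for these values);
# because each r_i exceeds a_i / (1 - C/Cmax), the queues drain to zero
```

With these values the circle fills to its stable equilibrium and, since the service rates satisfy Eq. 10, the queues shrink, matching the yield-sign recommendation for moderate traffic.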

While both of our modeling techniques have limitations, they enabled us to consider the problems of designing a traffic circle. Combining the results of both techniques, we can provide a fairly comprehensive look at the dynamics of a traffic circle and draw conclusions about how to optimally implement such a circle in a real-world situation.

# References

[1] Rahmi Akcelik. Roundabout model calibration issues and a case study. TRB National Roundabout Conference, May 2005. URL http://www.aasidra.com/documents/TRB RouConf2005_AKCELIK_RouModelCalib.pdf.
[2] Jim Mundell. Constructing and maintaining traffic calming devices. Seattle Department of Transportation. URL http://www.seattle.gov/Transportation/docs/TORONT04.pdf. Viewed on 2009 Feb 8.
[3] Eugene Russell. Roundabout design for capacity and safety: The UK empirical methodology. Kansas State Center for Transportation Research and Training, 2004. URL http://www.k-state.edu/roundabouts/research/RACapacity.pdf.

# 5 Technical Summary

In order to explore issues related to traffic circle design, we have created a model to simulate vehicle dynamics within a traffic circle. Our model describes the net rate of vehicles in each queue (the vehicles on the incoming roads waiting to be let into the traffic circle) as well as the net rate of vehicles in the traffic circle.
The equations for the rates are as follows:

$$
\frac{dQ_{i}}{dt} = a_{i} - r_{i} \left(1 - \frac{C}{C_{\max}}\right),
$$

$$
\frac{dC}{dt} = \sum r_{i} \left(1 - \frac{C}{C_{\max}}\right) - C \left(\sum d_{i,\max} \left(1 - \frac{C}{C_{\max}}\right) + \sum d_{i,\min} \left(\frac{C}{C_{\max}}\right)\right),
$$

where $Q_{i}$ is the number of vehicles in the $i^{th}$ queue, $C$ is the number of vehicles in the traffic circle, $a_{i}$ is the rate at which vehicles enter the queue, $r_{i}$ is the rate at which vehicles enter the traffic circle when the circle is empty, $C_{\max}$ is the capacity of the traffic circle, and $d_{i,\max}$ and $d_{i,\min}$ are the maximum and minimum departure rates from the circle based on congestion. The rates can be found empirically, except for the $r_{i}$ of each queue.

We made several assumptions in our model. One assumption is that the traffic circle is a single lane with all vehicles moving in the same direction. Our model takes into account the lowering of speed due to congestion (through $d_{i,\max}$ and $d_{i,\min}$), but not due to the random acceleration and deceleration of vehicles.

The $r_i$ for each queue can be found using the model. The smaller $r_i$ is, the more time each vehicle takes at the front of the queue before entering the traffic circle, which models a stop light with longer red-light periods.

One limitation of this model is that it cannot be used to examine the possibility of a traffic light that gives priority to the vehicles in the queue. Another is that we model the vehicles deterministically, which does not allow for the special behavior of individual vehicles (such as one vehicle speeding).

We also created a computer simulation to enhance and extend the capabilities of the mathematical model.
The simulation tracked every vehicle's progress as it traversed the traffic circle, and modeled their interaction with other vehicles. We were able to observe real world effects such as traffic bunching together into groups, and traffic build up in front of the exits. + +Based on testing of both our mathematical and computer models, we would recommend the following: + +- Most of the time, letting vehicles enter the circle as quickly as possible is optimal, which means that yield signs should be the standard traffic control device. +- During periods of high traffic, slowing the rate of entry into the circle helps prevent congestion, which decreases the efficiency of the circle. Therefore, in areas of high traffic, stop lights should be used as traffic control devices. +- If any single road has a high traffic, its vehicles should be given preference in entering the circle. This will help prevent a large queue. +- Traffic often builds up in front of each intersection as cars exit, so a separate exit lane could help keep traffic moving. \ No newline at end of file diff --git a/MCM/2009/A/5702/5702.md b/MCM/2009/A/5702/5702.md new file mode 100644 index 0000000000000000000000000000000000000000..5af2a261562af406f240ad37a58c93dfbffe1516 --- /dev/null +++ b/MCM/2009/A/5702/5702.md @@ -0,0 +1,1381 @@ +# A Simulation-Based Assessment of Traffic Circle Control + +MCM Team #5702 + +February 9, 2009 + +# Technical Summary + +Evaluating the performance of traffic control systems for traffic circles and determining the most effective traffic control system to use for a specific circle is currently an active area of research. The difficulty of this problem lies largely in its crucial dependence on the local interactions between individual drivers. In particular, traffic circles are relatively small (when compared to highways, for instance) and are therefore susceptible to blockages caused by the actions of individual cars such as lane changes, entrances, and exits. 
A complete model, then, must account for the effects of individual car behavior. Existing models of large traffic circle behavior, however, do not track performance at the level of individual cars. + +We propose here a novel simulator-based approach to evaluating and selecting traffic control systems for traffic circles. We create a multi-agent, discrete time simulation of traffic circle behavior under different traffic control systems. The behavior of individual cars in our simulator is determined autonomously and locally, allowing us to capture the effects of their local interactions. In addition, by modeling each car separately, we are able to track the time spent in the traffic circle for each individual car, giving us a more specific measure of traffic circle performance than the more commonly used aggregate rate of car passage. + +Measuring the performance of several different traffic control strategies using these two metrics, we found that the rate of incoming traffic and the number of lanes in the traffic circle were the major driving factors behind the optimal choice of traffic control system. Based on the simulated performance of traffic circles with varying values of these parameters, we have two different recommendations for traffic control systems based upon the rate of incoming traffic. + +When the rate of incoming traffic is low, we recommend that entering cars yield to cars already in the circle. When the rate of incoming traffic increases beyond a certain threshold, which should be determined empirically, we recommend the use of traffic lights + +that control entering traffic and the outermost lane of the traffic circle. These lights should be synchronized so that the time between successive lights turning green is the average time needed for a car to travel between them. + +In the case with a low rate of incoming traffic, the circle is relatively clear of cars. 
Therefore, cars entering the circle are able to merge in without blocking the road and slowing down the flow of traffic. By making entering cars yield to cars in the circle, we are able to maximize the total rate of cars passing through while maintaining the average speed of cars.

When the rate of incoming traffic increases enough to saturate the circle with cars, allowing cars to merge freely into the circle creates gridlock within the circle. In particular, the presence of so many cars attempting to enter and exit impedes the flow of the others. While the throughput of this system is still quite high, our simulation predicts that each individual car will spend an extremely long time traversing the traffic circle.

Instead, we recommend that traffic lights be used to attenuate the incoming flow of cars. In this case, while cars must wait slightly longer to enter the circle, the number of cars in the circle is limited, allowing cars within the circle to travel at a reasonable speed. Our simulator predicts that this will allow fewer cars to travel through the circle at a much higher speed. By viewing the performance of the control system at the level of individual cars, our simulator is able to distinguish between the performance of these two systems in this case and select the correct system to use.

Given a traffic circle, then, we recommend using our conclusions in the following way. First, measure the rate of incoming traffic at the circle during various times of day and examine the occupancy level of the circle under the outer-yield control system. For time periods with high occupancies and rates of incoming traffic, we recommend implementing synchronized traffic lights. For the other time periods, we recommend requiring the entering roads to yield to the cars in the circle. Under this system, the total throughput of the traffic circle is maximized while still maintaining an acceptable level of individual performance.
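The recommended light synchronization (each light turns green one average travel time after its predecessor) amounts to a simple offset computation. In this sketch the light positions and average speed are illustrative assumptions:

```python
# Green-light offsets for the synchronized scheme: each light turns
# green one average travel time after the previous one. Positions
# and speed are illustrative assumptions.
light_positions = [0.0, 60.0, 120.0, 180.0]  # metres around the circle
avg_speed = 10.0                             # metres per second

offsets = [pos / avg_speed for pos in light_positions]
# a car released at one light's green reaches the next light as it turns green
```

The proportionality between offset and distance is what lets a platoon released at one light flow through the following lights without stopping.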

# Contents

# 1 Introduction

1.1 Terms and Notation
1.2 Problem Background
1.3 Our Results

# 2 Problem Setup

2.1 Simulator
2.2 Control System Evaluation

# 3 Simulator

3.1 Assumptions and Setup
3.2 The Simulator
3.3 Validation Against Existing Empirical Models
3.4 Validation Against a Simple Model

# 4 Predictions and Analysis

4.1 Criteria
4.2 Analysis
4.3 Recommendations

# 5 Conclusions

5.1 Strengths
5.2 Weaknesses
5.3 Alternative Approaches and Future Work

# A Source Code for Simulator

# 1 Introduction

The traffic circle is a type of circular intersection in which traffic from multiple streets circulates around a central island, usually in one direction. In particular, traffic flows around the central island in one direction only, and the roads running into and out of the circle intersect it at different locations. An example of a traffic circle is shown below [1]:

![](images/14f4b3c39bdbcbda5c0c6b5682ae59c62272cdbacfb5864af996e7b12621f2a1.jpg)
Figure 1: An aerial view of Dupont Circle in Washington, DC.

Other examples of large traffic circles include Columbus Circle in New York City, while small, one-lane traffic circles often exist in residential neighborhoods. Traffic circles are notorious for the frequent traffic jams that occur due to their unconventional design, and many different methods exist to control traffic within a traffic circle. Different circumstances seem to call for different types of traffic control, and it is therefore worthwhile to investigate the impact of the different approaches.

# 1.1 Terms and Notation

In this paper, we will consider a traffic circle to be a one-way circular road with two-way roads meeting the circle at T-junctions. In particular, we will not consider circles with separate entry and exit ramps.
We will assume that each road carries cars into the circle at a fixed rate and that cars have an equal probability of leaving the circle through any of the other roads. In measuring the performance of the traffic circle, we will track two statistics: the average rate at which cars arrive at their desired exit locations per time step, which we will denote as the average throughput, and the average number of timesteps between when a car arrives at the back of the queue to enter the circle and when it exits the circle, which we will simply denote as the average total time.

# 1.2 Problem Background

Modern traffic circles have recently been recognized as safer alternatives to traditional intersections. Research by Zein et al. [18] and Flannery et al. [8] using statistical methods has demonstrated the added safety traffic circles bring to both urban and rural environments. Attempts to understand the specific safety and efficiency benefits of traffic circles have taken four primary approaches: critical gap estimation, regression studies, continuous models, and discrete models.

Critical gap models build off of how drivers empirically gauge gaps in traffic before merging or turning into a traffic stream. However, according to Brilon et al., attempts that began in the 1980s to model roundabout capacity based on gap acceptance theory were not especially promising [5]; in particular, Brilon et al. claimed that critical gap estimation lacked valid procedures as well as general clarity [6]. More recent research applying gap-acceptance models to traffic circles includes Polus et al. [16] and the modeling of unconventional traffic circles by Bartin et al. [2].

Nevertheless, regression studies on empirical data made much progress, beginning in 1980 with [12], where Kimber studied roundabouts in England and discovered a linear relation between entry capacity and circulating flow, with coefficients depending on entry width, lane width, angle of entry, and traffic circle size. Further regression studies have built extensively on Kimber's work, such as [15], which determined the importance of traffic circle diameter in small-to-medium circles.

In addition to regression studies, research in modeling traffic flow has also progressed through continuous approaches to modeling vehicular traffic, e.g., Helbing with improved fluid-dynamic models [10], Bellomo et al. with fluid-dynamic models following the Boltzmann equations [3], Daganzo with improvements on fluid-based models [7], and Klar et al. with homogeneous and inhomogeneous traffic flow [13]. However, these four papers model traffic flow in standard traffic environments rather than in traffic circles.

Additional research has included discretized approaches such as cellular automata models [9, 13] and discrete stochastic models [17]. Discrete models are more suitable for research involving small environments like traffic circles, where studying individual car-to-car interactions is of greater interest than generalizing traffic flow as a whole.

Furthermore, discretized approaches have been used to model multi-lane traffic flows [14]. However, despite the results of the research described above, to our knowledge there has been no research on discrete models of multi-lane traffic circles of varying sizes.

# 1.3 Our Results

In this paper, we approach the problem of traffic circle control by first creating a simulator that models the traffic flow around a given circle.
The simulator treats individual cars as autonomous units, allowing us to capture local interactions such as lane changes and traffic + +blockages due to cars entering and exiting. We validated this simulator against both a new stylized model of the situation and existing models of traffic circle flow. + +Using this simulator, we implemented and tested different control systems on different types of traffic circles. Based on the simulated results, we were able to isolate the rate of incoming traffic and the number of lanes in the traffic circle as the major driving factors behind the optimal choice of traffic control system. We were thereby able to recommend two different systems for different circumstances. When the rate of incoming traffic is low, we recommend that entering cars yield to cars already in the circle. When the rate of incoming traffic increases beyond a certain threshold, we recommend the use of traffic lights that control entering traffic and the outermost lane of the traffic circle. These lights should be synchronized so that the time between successive lights turning green is the average time needed for a car to travel between them. + +The rest of this paper is organized as follows. In Section 2, we divide the problem into two portions and define our objectives for each. In Section 3, we introduce a simulator for the performance of a traffic circle and validate its performance against a mathematical analysis and models from other sources. In Section 4, we use our simulator to analyze the performance of several types of traffic circles to produce recommendations for which control systems should be used for each type. In Section 5, we provide an overview of the advantages and disadvantages of our approach and give some directions for future work. + +# 2 Problem Setup + +In this paper, we approach the problem in two portions. 
First, we create a discrete simulator to model the behavior of cars in a traffic circle based on local interactions at the level of each car. We then use this simulator to evaluate the performance of the various techniques of traffic control and to determine which method should be used in each specific circumstance. Before proceeding, however, we specify our objectives in each part a bit more closely. + +# 2.1 Simulator + +Our goal in this part of the paper is to create a simulator that, given a set of conditions and traffic rules, can produce an accurate prediction of the behavior that will result from following these rules. To achieve this goal, we would like our simulator to fulfill the following requirements: + +1. The simulator takes into account the local interactions between cars. +Because traffic circles are often occupied by many cars that enter, exit, and change lanes quite frequently relative to the size of the traffic circle, such interactions between cars make a major contribution to the speed and efficacy of any traffic circle. Any effective simulator must therefore take these effects into account. +2. The simulator can support variation in the number of cars, size of the circle, and number of lanes. +In order to make predictions for the many different types of traffic circle extant, it is necessary to create a simulator that will work for all of them. + +3. The simulator can track properties of both the entire traffic circle and the individual cars passing through it. + +Because individual interactions are so important in determining the behavior of traffic circles, we must evaluate the results of a specific control system not only at a global level but also for each individual. + +For each of these cases, while the real behavior of cars in a traffic circle may vary widely, we wish to capture the most essential aspects of a control system for a particular traffic circle. Therefore, we restrict our simulator to dealing with idealized behavior of cars. 
In particular, we will assume that all cars in our simulation follow the traffic regulations that we have put into place. In addition, we will assume that no accidents or crashes happen. + +# 2.2 Control System Evaluation + +The ultimate objective of this paper is to apply the previously described simulator to produce concrete recommendations for which method of traffic control should be chosen for a specific traffic circle. We do so based upon the following statistics: + +1. The average throughput (the average rate at which cars pass through the traffic circle). +2. The average number of cars in the traffic circle. +3. The average total time it takes for each car to traverse the traffic circle, including time spent waiting to enter. +4. The average time each car spends driving through the traffic circle. + +Note that statistics 1 and 2 measure global properties of the traffic circle, while statistics 3 and 4 are properties of each individual car. To evaluate the efficacy of a control system, then, we must consider both the global performance of the system and the differences in the performance of the system for each individual. In particular, our goals are to: + +1. Maximize the average throughput of the traffic circle. +2. Minimize the total time spent traversing the traffic circle (for individual cars). + +Here, Goal 1 gives a measure of the overall performance of the system, which we of course wish to maximize. However, while doing so, it is desirable for the system to avoid bad performance for individual cars, which is given by Goal 2. For the rest of this paper, then, we will evaluate the performance of a traffic circle by the rate of cars passing through the circle (average throughput) and the total time required to traverse the circle (average total time). Our goal will be to choose the traffic control method that performs best with respect to both of these metrics. 

# 3 Simulator

In this section, we describe a discrete traffic simulator used to predict the behavior of a traffic circle under a given set of traffic rules.

# 3.1 Assumptions and Setup

To model the behavior of traffic under certain conditions, there are essentially two possibilities:

1. Make a (usually continuous) abstraction away from the discrete interactions of cars and deal with a more stylized model of the entire system.
2. Model the behavior and movement of each car separately.

Many studies of traffic flow model traffic behavior as continuous and fluid-like, as in the first possibility. Such models are suitable under a macroscopic view of traffic, for instance in the study of traffic on long roads or highways. However, for intersections and traffic circles, where specific car-to-car interactions occur much more frequently, a continuous fluid model seems inadequate.

In this paper, we follow the second possibility and model traffic flow in a traffic circle using a multi-agent, discrete time simulation. Our simulation is based on the following two key principles:

1. It is microscopic.
2. Behavior and information are local.

We do not use an abstract view of traffic as a flow, but instead let each car in the traffic circle be its own individual agent. This allows us to account for the effects of car-to-car interaction, particularly in congested situations. From this interaction at the microscopic level, we then examine the macroscopic consequences of the simulation, instead of beginning with an arbitrary conception of what the macroscopic behavior should be.

Each car is its own independent agent, trying to enter the traffic circle and exit at its desired exit as quickly as possible; no collaboration between cars or higher-level organizational principle exists. Also, only local information, namely the cars in the immediate neighborhood, is available to each individual car.
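As a toy illustration of these two principles (far smaller than the full simulator of Section 3.2), the following sketch advances independent agents around a one-lane ring, each consulting only the cell directly ahead; the ring size and starting positions are arbitrary:

```python
# Toy one-lane ring: each car is an independent agent that advances
# iff the cell ahead was free at the start of the timestep. Ring
# size and starting cells are arbitrary.
N = 10
occupied = {0, 1, 5}

for _ in range(3):  # three timesteps
    new = set()
    for cell in occupied:
        ahead = (cell + 1) % N
        # purely local decision: look only at the cell ahead
        if ahead not in occupied and ahead not in new:
            new.add(ahead)
        else:
            new.add(cell)
    occupied = new
```

Even this minimal rule set reproduces jamming: the car at cell 0 cannot move on the first step because cell 1 is occupied, exactly the kind of local blocking a continuous flow model averages away.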
+ +# 3.2 The Simulator + +Our approach will be to develop a simulation that realistically captures driver behavior in traffic circles and examine the consequences of this behavior on traffic flow. Specifically, the simulation operates using the following model: + +- Time is modeled in discrete timesteps. +- The traffic circle is a rectangular grid. The width of the grid is the number of lanes in the traffic circle, and the height of the grid represents the length of the traffic circle. The upper edge of the grid wraps around to the lower edge of the grid (so that the grid is actually a circle). At any time step, each square of the grid can either be empty or hold one car. + +- Certain squares in the right-most lane are designated as entry squares. A queue of cars waits at each entry square to enter the traffic circle. (These cars are not located on the grid itself.) The queues start off empty, and for each entry square, there is a fixed probability of a car being added to the queue at each time step. +- Certain squares in the right-most lane are designated as exit squares. Cars can exit the traffic circle at these exit squares. When a car is added to the queue for an entry square (and thus to the system), an exit square is chosen at random as the location at which the car will wish to exit the traffic circle. +- Each car has a speed indicated by how often it gets the chance to move. E.g., faster cars may get the chance to move at every time step while slower cars may only move at every other or every third time step. This difference in speed can simulate the differing levels of impatience or aggression among drivers. + +In each timestep, the simulation proceeds as follows: + +1. Determine the subset of all the cars in the system that will move during this timestep. Randomly assign the order in which these cars will move. +2. Allow each such car to move. Cars move under the following rules: + +A car that is already in the traffic circle at position $(i,j)$ (i.e. 
lane $i$ , vertical position $j$ ) will, in the following order of preference,

(a) Exit if $(i,j)$ is the exit square at which the car wishes to exit.
(b) Move forward to $(i,j + 1)$ if $(i,j + 1)$ is unoccupied.
(c) Move forward and right to $(i + 1, j + 1)$ if there is a lane to the right and locations $(i + 1, j)$ and $(i + 1, j + 1)$ are both unoccupied.
(d) Move forward and left to $(i - 1, j + 1)$ if there is a lane to the left and locations $(i - 1, j)$ and $(i - 1, j + 1)$ are both unoccupied.
(e) Stay where it is.

An exception occurs for cars that are about to exit: if the vertical distance between the car's current location and its desired exit is less than 4 times the horizontal distance, then preferences (b) and (c) above are swapped. (This ensures that under non-congested situations, cars are able to exit at their desired exits.)

A car that is the first car in the queue at an entrance location will, in order of preference,

(a) Move to the entry square if that square is unoccupied.
(b) Stay where it is.

All later cars in the queue cannot move for this turn.

In addition to the above rules, the specific traffic control systems impose the following additional rules:

(a) Outer-yield: In this system, a car at the front of an entrance queue and waiting to enter can only enter when both the entry square and the square directly behind the entry square are empty. That is, if there is a car directly behind the entry square, the entering car must yield to that car.
(b) Inner-yield: In this system, if a car in the circle wishes to move onto an entry square (in the rightmost lane) but the queue waiting at that entrance is nonempty, then the car cannot move to that square. If the car has no other possible moves, then it does not move for that turn. This reflects the situation in which cars in the circle need to yield to entering cars.
(c) Traffic lights: In this system, a traffic light controls each entry square.
At any time step, the light is either green for cars in the circle and red for the waiting queue, or vice versa. If it is green for cars in the circle, then the first car in the waiting queue cannot enter the circle. If it is green for the waiting queue, then no car in the circle can move onto the entry square. Note that in a multi-lane circle, this traffic light only controls traffic in the rightmost lane. This behavior is inspired by the design of metering lights at highway ramps.

For the traffic light control system, we consider two different methods of synchronizing the traffic lights around the circle. In the first method, all lights turn green and red simultaneously. In the second method, the difference in time between each traffic light turning green and the next light turning green (and also the difference in time between this light turning red and the next light turning red) is directly proportional to the distance between these two lights. The proportionality constant is chosen so that a car that was waiting at a traffic light and begins to move when that light turns green will reach the next light just as it turns green.

3. For each entry queue, add a car to the end of that queue with the fixed probability for that entry location.
4. Flip the traffic lights if it is the correct timestep to do so.

# 3.3 Validation Against Existing Empirical Models

The two criteria on which we evaluate the various traffic control systems, average throughput and average total time, are not unrelated. In fact, our simulations indicate that increasing one comes at the cost of increasing the other. Therefore, we would like to validate the accuracy of our simulation by examining the relationship between these two quantities that it produces.
In simulations on a fixed traffic circle under the outer yield system, the plot of reserve throughput (maximum throughput minus average throughput) against average total time over variations in incoming car density is shown below:

![](images/bd729fa6edeecaba92d79299286e92ba5edf8d37102866825efa268f217966f9.jpg)
Figure 2: Average Total Time vs. Reserve Throughput

This inverse relationship is intuitively clear, as higher throughput indicates a greater volume of traffic on the road and hence both slower driving speeds and longer wait times to enter the circle. This result also matches the relationship between average total time and reserve capacity given by the Kimber-Hollis delay equation in [4]. This rough agreement indicates that the results of our simulator are reasonable.

# 3.4 Validation Against a Simple Model

To provide further verification of the accuracy of our simulator, we compare the large scale features of its output to a mathematical model for a simple case. In particular, we will consider a single-lane traffic circle in which cars entering the circle need to yield to cars already in the circle. For simplicity, we assume that all cars move at the same speed of one square per time step. We assume that roads leading to the traffic circle are all two-way roads, so that each entry point of the traffic circle is also an exit point. The model is given as follows:

Suppose that there are $n$ entry/exit roads to the traffic circle, and that all cars have an equal probability of wanting to leave through each of the $n$ roads. For $i = 1$ to $n$ , let $r_i$ be the probability that a new car appears at road $i$ at any time step. Let $x_i$ be the volume density of traffic in the segment of the circle between roads $i$ and $i + 1$ .
The expected change in the number of cars between roads $i$ and $i + 1$ is given by a sum of four terms:

- The probability that a car will leave this segment through exit $i + 1$ is $x_{i} \cdot \frac{1}{n}$ , as $x_{i}$ is the probability that there is a car in the exiting square and $\frac{1}{n}$ is the probability that this car wishes to exit.
- The probability that a car will move from this segment to the next segment is $x_{i} \cdot \frac{n - 1}{n} \cdot (1 - x_{i + 1})$ , as $\frac{n - 1}{n}$ is the probability that the car in the exiting square will not exit and $1 - x_{i + 1}$ is the probability that the square after the exiting square, which is the first square of the next segment, is unoccupied.
- The probability that a car will move from the previous segment to this segment, similarly, is $x_{i - 1} \cdot \frac{n - 1}{n} \cdot (1 - x_i)$ .
- The probability that a car will enter through entrance $i$ is the probability $p$ of there being a sufficiently large space at entrance $i$ for a car to enter, times the probability that there is a car waiting to enter at that entrance. This latter probability can be computed as $r_i + r_i(1 - r_i)(1 - p) + r_i(1 - r_i)^2 (1 - p)^2 + \dots$ , since there is an $r_i$ probability of a car arriving at entrance $i$ this time step, an $r_i(1 - r_i)(1 - p)$ probability of a car arriving at entrance $i$ last time step (but not this time step) and remaining until this time step, and so on. This sum is $\frac{r_i}{r_i + p - r_i p}$ . We note that in our simulation, $p = (1 - x_{i-1})(1 - x_i)$ , as a car can enter the circle only if the entrance square and the square before it are unoccupied.
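The closed form for this geometric series can be sanity-checked numerically against its partial sums; this quick check is ours and is not part of the simulator:

```java
/** Numerical check (ours, not the simulator's) of the closed form
 *  r + r(1-r)(1-p) + r(1-r)^2(1-p)^2 + ... = r / (r + p - r p). */
public class WaitingCarProbability {

    static double closedForm(double r, double p) {
        return r / (r + p - r * p);
    }

    // Direct partial sum of the geometric series with ratio (1-r)(1-p).
    static double partialSum(double r, double p, int terms) {
        double sum = 0.0, term = r;
        for (int k = 0; k < terms; k++) {
            sum += term;
            term *= (1 - r) * (1 - p);
        }
        return sum;
    }

    public static void main(String[] args) {
        double r = 0.1, p = 0.3;
        // 200 terms is far past convergence for these parameters.
        System.out.println(Math.abs(closedForm(r, p) - partialSum(r, p, 200)) < 1e-12);
    }
}
```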

So the expected change in the number of cars in this segment in one time step is

$$
\Delta c _ {i} = - x _ {i} \cdot \frac {1}{n} - x _ {i} \cdot \frac {n - 1}{n} \cdot (1 - x _ {i + 1}) + x _ {i - 1} \cdot \frac {n - 1}{n} \cdot (1 - x _ {i}) + \frac {r _ {i} (1 - x _ {i - 1}) (1 - x _ {i})}{r _ {i} + (1 - x _ {i - 1}) (1 - x _ {i}) - r _ {i} (1 - x _ {i - 1}) (1 - x _ {i})}.
$$

In equilibrium, this change should be 0 for all segments, giving a system of equations in the $x_{i}$ . If we consider the case where the roads have equal incoming traffic, i.e. $r_{i}$ is the same for all $i$ , then by symmetry the $x_{i}$ are the same for all $i$ , and we may solve the equation

$$
\Delta c = - x \cdot \frac {1}{n} - x \cdot \frac {n - 1}{n} \cdot (1 - x) + x \cdot \frac {n - 1}{n} \cdot (1 - x) + \frac {r (1 - x) ^ {2}}{r + (1 - x) ^ {2} - r (1 - x) ^ {2}} = 0
$$

numerically for $x$ in terms of $r$ . Here, $x$ is the traffic volume density for the circle as a function of $r$ , the rate at which cars enter the circle through each road. The result of numerically solving for $x$ is shown in the figure below together with the corresponding plot generated by our simulator:

![](images/6ab5ed069694c25860560c4de126653bd36ec00ba6358a2cdd76be6deccc4882.jpg)
Figure 3: Traffic Volume Density vs. Rate of Incoming Vehicles

Here, the black data points were generated by our simulator, and the curve was produced by the rudimentary model. The volume density in both grows in a roughly linear fashion for low rates of incoming vehicles. As the rate of incoming vehicles increases, the traffic circle appears to become saturated at a fixed density. As can be seen, the simulator and our mathematical model agree on these large-scale features.
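The paper does not fix a particular numerical method for this equation; bisection is one simple choice, since in the symmetric case the two lane-to-lane transfer terms cancel, leaving an expression that is positive at $x = 0$ (where it equals $r$) and negative at $x = 1$ (where it equals $-1/n$). The sketch below uses the waiting-car probability $r/(r + p - rp)$ with $p = (1-x)^2$ derived above:

```java
/** Sketch (our choice of method): solve the symmetric equilibrium
 *  condition for the segment density x by bisection. */
public class EquilibriumDensity {

    // Expected one-step change in density for the symmetric single-lane
    // model; the two transfer terms cancel by symmetry, leaving exit loss
    // and entry gain. Entry uses p = (1-x)^2 and r / (r + p - r p).
    static double deltaC(double x, double r, int n) {
        double p = (1 - x) * (1 - x);
        return -x / n + r * p / (r + p - r * p);
    }

    // Bisection on [0, 1]: deltaC(0) = r > 0 and deltaC(1) = -1/n < 0,
    // so an equilibrium density lies in between.
    static double solve(double r, int n) {
        double lo = 0.0, hi = 1.0;
        for (int iter = 0; iter < 60; iter++) {
            double mid = 0.5 * (lo + hi);
            if (deltaC(mid, r, n) > 0) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        // Equilibrium density for a 4-road circle at a low incoming rate.
        double x = solve(0.05, 4);
        System.out.println(x > 0 && x < 1);                       // prints true
        System.out.println(Math.abs(deltaC(x, 0.05, 4)) < 1e-9);  // prints true
    }
}
```

Sweeping `solve` over a range of $r$ values reproduces the solid curve in Figure 3.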
While there is some disagreement around the critical rate of incoming vehicles, this might be explained by the fact that our mathematical model essentially considered the cars in each segment as equivalent, which ignores the small scale interactions that occur near gridlock. + +# 4 Predictions and Analysis + +In this section we apply the simulator to analyze different types of traffic circles. + +# 4.1 Criteria + +We will characterize traffic circles by the following variables which are fixed for each traffic circle and which we therefore take as given when deciding between traffic control systems. + +1. Rate of Incoming Vehicles: This is a result of the amount of traffic present on the roads entering the traffic circle and will influence the total number of vehicles trying to enter the circle and hence the traffic in the circle. +2. Length: This affects the number of cars that can be contained in the circle at a single time, which has many implications for how the entry mechanism of the circle should be determined. +3. Number of Lanes: This affects both the number of cars that can be in the circle at a time and their maneuverability around each other. Because cars can more easily pass one another with more lanes, increasing the number of lanes may reduce the effects of traffic blockages. +4. Number of Incoming Roads: This affects the rate at which cars need to enter and exit the traffic circle, which may influence the magnitude of traffic blockages. + +When considering different traffic control systems, we wish to consider systems that are relatively close to conventional traffic control systems, as it would be impractical and hazardous to introduce radically different systems that may be unfamiliar for drivers who do not encounter traffic circles very frequently. Therefore, we will evaluate the performance of the following traffic control systems when we vary the parameters for our traffic circle: + +1. 
Outer Yield: Cars attempting to enter the circle yield to cars already in the circle at all times.
2. Inner Yield: Cars already in the circle yield to cars attempting to enter the circle.
3. Simultaneous Lights: The intersections between the circle and other roads are controlled by traffic lights which all turn green and red at the same time. However, the traffic lights only apply to the outermost lane of the traffic circle.
4. Synchronized Lights: The intersections between the circle and other roads are controlled by traffic lights where the time interval between one light turning green and the next light turning green is proportional to the distance between the two lights. Here, the traffic lights only apply to the outermost lane of the traffic circle.

With the exception of the traffic lights, these control systems are all similar to existing traffic control systems. However, there is a crucial difference between our traffic light system and standard traffic lights: by stopping only the outer lane of the traffic circle, it allows traffic in the inner lanes to proceed undisturbed, improving the throughput of cars in the circle. This approach is a hybrid of normal traffic lights and metering lights for congested highways.

# 4.2 Analysis

To analyze the effects of the different traffic control systems on the different types of traffic circles, we created different traffic circles by varying our parameters. We then ran each traffic control strategy on these different circles and created plots of the average throughput and average total time per car for each strategy. In particular, our goal through these simulations was to determine which of the parameters had the greatest effect on the simulated performance of the different traffic control systems.
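Concretely, both criteria can be computed from per-car records of when each car joined an entry queue and when (if ever) it exited the circle. The record type below is a hypothetical illustration, not the simulator's actual bookkeeping:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative computation of the two evaluation criteria from per-car
 *  records (hypothetical types, not the simulator's actual classes). */
public class Metrics {

    public static class CarRecord {
        public final int enterQueueStep; // step the car joined an entry queue
        public final int exitStep;       // step the car exited (-1 = never)

        public CarRecord(int enterQueueStep, int exitStep) {
            this.enterQueueStep = enterQueueStep;
            this.exitStep = exitStep;
        }
    }

    // Average throughput: cars that exited per timestep of simulation.
    public static double averageThroughput(List<CarRecord> cars, int totalSteps) {
        long exited = cars.stream().filter(c -> c.exitStep >= 0).count();
        return (double) exited / totalSteps;
    }

    // Average total time: mean steps from joining a queue to exiting,
    // over cars that completed their trip.
    public static double averageTotalTime(List<CarRecord> cars) {
        return cars.stream().filter(c -> c.exitStep >= 0)
                   .mapToInt(c -> c.exitStep - c.enterQueueStep)
                   .average().orElse(0.0);
    }

    public static void main(String[] args) {
        List<CarRecord> cars = new ArrayList<>();
        cars.add(new CarRecord(0, 10));
        cars.add(new CarRecord(2, 20));
        cars.add(new CarRecord(5, -1)); // gridlocked, never exited
        System.out.println(averageThroughput(cars, 100)); // 2 exits / 100 steps
        System.out.println(averageTotalTime(cars));       // mean of 10 and 18
    }
}
```

Note how a gridlocked car lowers throughput without appearing in the average total time at all, which matters for interpreting the inner yield results below.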

Changing the different parameters, we produced the following plots measuring the effectiveness of the various traffic control systems. Each figure shows the average throughput and the average total time against the varied parameter:

![](images/b644338891aa4ccdb5dac1b1e3409d4f1735cc21785c6b919275ca0415b1e2b7.jpg)
![](images/645bb1f17718089fe3eec571eea03a213f3ce9dc8f892f073eb2e8b8e393016d.jpg)
Figure 4: Performance for 1 lane, length 100, 4 roads, rate variable.

![](images/4cbf96be1c6da8ab6699a724c2d44b7cc42582794234b741849ab564913a8e47.jpg)
![](images/c44d6227f612b0da0deb92f87169cc359f41851157520b6eed094b8d6511156e.jpg)
Figure 5: Performance for 3 lanes, length 100, 4 roads, rate variable.

![](images/5b7bbc67322f4402c1c06fae3e391b6d8fe403567fc991403c8e35f42ec27031.jpg)
![](images/e5431702daecb4440920961f42aeca0d9b4ad4b911cb62a5e177eb51a3d97d2e.jpg)
Figure 6: Performance for 5 lanes, length 100, 4 roads, rate variable.

![](images/02022c8c9e796a1c290c71b1b21450ef40b977f0711ac00248ff5638683bb947.jpg)
![](images/84380f420197e455dc9bb5f2ba7792a09e699b595ea373aff733f7f240e3e7e1.jpg)
Figure 7: Performance for 3 lanes, rate 0.1, 4 roads, length variable.

![](images/7b603d2d378a4de6b30c66201e5adb579c44a4657a8569169fc4ea32d24cf218.jpg)
![](images/5839b39556e562fb1cc3b62ae9c6d1d9144af9c6a248fce6e9626573ad470f3d.jpg)
Figure 8: Performance for 3 lanes, rate 0.1, length 100, roads variable.

From these plots, we can make the following observations:

- In almost all the plots, the inner yield system has almost no throughput, as the cars in the road become gridlocked because they too often yield to incoming cars and therefore cannot exit.
Note that the low value of the average total time for this control system results from the fact that the only cars that are able to exit do so before the road becomes entirely gridlocked. As a result, we can reject the inner yield system as a possible control system in any circumstance. +- As the rate is varied in the first three sets of plots, the throughput of the outer yield system and the traffic light systems correspond for small values of the rate. However, for each system, the throughput reaches a plateau after a certain value of the rate. At this point, it appears that the traffic circle has been saturated, meaning that it can no longer accept more cars from the incoming roads. +- The throughput value at which saturation occurs is much higher for the outer yield system, meaning that more cars can pass through the circle in a given time period under this system. However, the amount of time required for each individual car to pass through the traffic circle under the outer yield system is extremely high, almost an order of magnitude higher than needed under either traffic light system. +- When there are either 3 or 5 lanes, the synchronized traffic light system allows slightly greater throughput than the simultaneous traffic light system. This might be explained by the fact that, with more lanes, cars can move in a more uniform manner, allowing them to use the synchronized lights and move through the circle more quickly. +- The number of lanes and the number of roads do not seem to have a significant effect on either the throughput or total time of the outer yield system or either of the traffic light systems. However, it appears that the traffic lights may perform worse for some values of the distance between roads, perhaps due to synchronization issues. 
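These observations suggest a simple decision rule, anticipating the recommendations of Section 4.3. The thresholds below are the approximate regime boundaries observed in our simulations, not exact breakpoints:

```java
/** Illustrative encoding of the regimes observed in the plots. The
 *  thresholds 0.1 and 0.2 cars per timestep are approximate values taken
 *  from our simulations. */
public class ControlChooser {

    public static String recommend(double ratePerTimestep) {
        if (ratePerTimestep < 0.1) {
            return "OUTER_YIELD";                // light traffic: yield on entry
        } else if (ratePerTimestep < 0.2) {
            return "OUTER_YIELD_OR_SYNC_LIGHTS"; // throughput vs. delay tradeoff
        } else {
            return "SYNCHRONIZED_LIGHTS";        // near saturation: meter entries
        }
    }

    public static void main(String[] args) {
        System.out.println(recommend(0.05)); // prints OUTER_YIELD
        System.out.println(recommend(0.25)); // prints SYNCHRONIZED_LIGHTS
    }
}
```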

In general, it appears that the outer yield and traffic light methods have an advantage over the inner yield method, and that the correct choice of control system is largely determined by the rate of incoming vehicles on each of the roads which meet at the circle.

# 4.3 Recommendations

Based on the observations we made in the previous section, we would like to give concrete recommendations for what traffic control system to use for a specific traffic circle. As we noted, the number of lanes and the number of roads incident at the traffic circle do not seem to significantly affect the average throughput of the circle or the average time required for each car to traverse the circle. We may therefore restrict our attention to the rate of incoming cars and the number of lanes in the circle.

Of these two variables, the rate of incoming cars accounts for a large part of the variation in performance, as can be seen in Figures 4, 5, and 6. For values of this rate between 0 and 0.1 cars per timestep, the performance of the outer yield and traffic light systems is identical, as in this range traffic is light and there is very little interaction between cars.

As the rate increases to between 0.1 and 0.2 cars per timestep, the traffic circle reaches its maximum throughput under the traffic light system, while the average total time stays fixed. However, under the outer yield system the throughput of the system continues to increase at the cost of a rather dramatic increase in the average total time. In this range, choosing between the outer yield system and the traffic light systems involves a tradeoff between the throughput, or the quantity of cars passing through, and the total time, or the speed at which cars can pass through.

Finally, as the rate increases above 0.2 cars per timestep, the traffic circle becomes saturated with cars, meaning that the average total time for the outer yield system increases dramatically.
In this case, this circle experiences gridlock with the outer yield system, meaning that cars move extremely slowly and must wait a very long time to pass through. Under the traffic light systems, however, a smaller number of cars can pass through, but the average total time required for them to pass remains similar to that required for a much lower rate. As the inner yield system requires an extremely large total time in this range, the traffic light systems are clearly superior for a rate of above 0.2 cars per timestep. + +Observe that in each of these cases, the synchronized traffic lights allowed for higher throughput than the simultaneous traffic lights, meaning that they should always be preferred. + +We can now make the following recommendations for the control of traffic circles: + +- For a low rate of entering cars, no traffic lights should be used in the traffic circle. Instead, cars already in the circle should be given the right of way, and cars entering the circle should yield. +- As the rate of entering cars increases, the use of synchronized traffic lights for the outer-most lane of the traffic circle only should be considered in order to ensure a reasonable time of traversal for most cars. +- For large rates of entering cars, as may occur during rush hour, synchronized traffic lights should be used in order to ensure that the traffic circle does not become congested. By preserving a reasonable flow of cars within the circle, synchronized traffic lights allow a slightly smaller number of cars to pass through the circle much more quickly, which is definitely preferable to deadlock for all cars. + +In the low and high limits of the rate, our recommendation seems to correspond to similar practices elsewhere. Few traffic lights are deemed necessary when there are a small number of cars present in the low limit. 
In the high limit, it is a common practice in California to use metering lights to limit the number of vehicles that can merge onto a highway during peak hours in order to ensure that the cars already on the road can move at a reasonable speed. Our recommendations, then, seem to be a mix of these two ideas applied to traffic circles.

# 5 Conclusions

# 5.1 Strengths

Our simulator takes into account the behavior and outcomes of individual cars traveling through a traffic circle. By doing so, we are able to detect interactions at a microscopic level and to track the performance of a traffic control system for each individual rather than only in aggregate. This allows our model to evaluate the effects of cars changing lanes and entering and exiting from specific lanes. The simulator was validated against both an existing empirical model and the results of a simple model for the steady-state limit.

Using this simulator, we were able to simulate the performance of a wide spectrum of traffic control systems on a range of different traffic circles. Our results allowed us to isolate the rate of incoming cars and the number of lanes in the traffic circle as the two parameters key to determining the correct control system.

By analyzing the performance with respect to these parameters, we recommended the use of either an outer yield system or synchronized traffic lights to control the traffic circle, depending on the rate at which cars enter the circle. These recommendations, based on evidence from our simulator, also correspond to traffic control systems used in similar settings, suggesting that our adaptations of them may be effective when applied to traffic circles.

# 5.2 Weaknesses

While our simulator attempted to model the behavior of drivers fairly accurately, it was of course impossible to completely capture the dynamics of lane changing and braking.

Further, while using a discrete time, discrete space model for the simulator allowed us to capture the local, multi-agent nature of individual drivers, it forced us to make some simplifications with regard to the continuity of car movement and with regard to simultaneous actions.

In addition, our simulation did not take into account the fact that, in an actual traffic circle, the inner lanes are slightly shorter than the outer lanes; we assume that the circle is large enough that this difference is negligible.

We only considered traffic lights with simultaneous or synchronized light changes, and it was computationally infeasible for us to consider a wider variety of light switching approaches. While using synchronized lights did allow us to improve upon the throughput of regular traffic lights, further improvements might be available by examining other possibilities.

# 5.3 Alternative Approaches and Future Work

There were several extensions to our simulator that we could not pursue due to time constraints. We considered evaluating the safety of a particular traffic control system by counting the number of conflicting desired movements at the local level. We would then be able to compare systems by safety as well as pure performance, and in particular we might be able to evaluate the claim that certain types of traffic circles are safer than intersections [8].

# A Source Code for Simulator

The source code for the simulator has been appended to this paper.

# References

[1] Aerial photograph of Dupont Circle in Washington, D.C., USA. Taken by the United States Geological Survey. Available at http://en.wikipedia.org/wiki/index.html?curid=1017545.
[2] Bartin, Bekir, Kaan Ozbay, Ozlem Yanmaz-Tuzel, and George List. "Modeling and Simulation of Unconventional Traffic Circles." Transportation Research Record: Journal of the Transportation Research Board, V. 1965 (2006).
[3] Bellomo, N., M. Delitala, V. Coscia, and F. Brezzi.
“On the Mathematical Theory of Vehicular Traffic Flow I: Fluid Dynamic and Kinematic Modelling.” Mathematical Models & Methods in Applied Sciences, V. 12 (2002).
[4] Brilon, Werner and Mark Vandehey. “Roundabouts—The State of the Art in Germany.” Institute of Transportation Engineers (ITE) Journal, V. 68 (1998).
[5] Brilon, Werner, Ning Wu, and Lothar Bondzio. "Unsignalized Intersections in Germany—a State of the Art." In Proceedings of the Third International Symposium on Intersections without Traffic Signals, Portland, Oregon (1997). Available at http://www.ruhr-uni-bochum.de/verkehrswesen/vk/deutsch/Mitarbeiter/Brilon/Briwubo_2004_09_28.pdf.
[6] Brilon, Werner, Ralph Koenig, and Rod J. Troutbeck. "Useful Estimation Procedures for Critical Gaps." Transportation Research Part A (1999). Available at http://www.sciencedirect.com/science/article/B6VG7-3VF9D7R-2/2/2b325096a3b448fd0c0a09952c091ff4.
[7] Daganzo, Carlos F. “Requiem for Second-Order Fluid Approximations of Traffic Flow.” Transportation Research Part B: Methodological, V. 29 (1995). Available at http://www.sciencedirect.com/science/article/B6V99-3YKKJ1D-F/2/f9d735df36de4048d1e62a0a20e844b0.
[8] Flannery, Aimee and Tapan K. Datta. "Modern Roundabouts and Traffic Crash Experience in United States." Transportation Research Record: Journal of the Transportation Research Board, V. 1553 (1996).
[9] Fouladvand, M. Ebrahim, Zeinab Sadjadi, and M. Reza Shaebani. "Characteristics of Vehicular Traffic Flow at a Roundabout." Phys. Rev. E, V. 70, 046132 (2004).
[10] Helbing, Dirk. "Improved fluid-dynamic model for vehicular traffic." Phys. Rev. E, V. 51 (1995).
[11] Hossain, M. "Capacity Estimation of Traffic Circles under Mixed Traffic Conditions using Micro-Simulation Technique." Transportation Research Part A: Policy and Practice, V. 33, Issue 1 (1999). Available at http://www.sciencedirect.com/science/article/B6VG7-3VCC88F-3/2/2a0bc271ba272f8398e67867cda48ccd.
[12] Kimber, R. M.
“The Traffic Capacity of Roundabouts.” L.R. 942 (1980).

[13] Klar, Axel, Reinhart D. Kühne, and Raimund Wegener. “Mathematical Models for Vehicular Traffic.” Surv. Math. Ind., 6 (1996). Available at http://citeseer.ist.psu.edu/old/518818.html.
[14] Nagel, Kai, Dietrich E. Wolf, Peter Wagner, and Patrice Simon. "Two-lane traffic rules for cellular automata: A systematic approach." Phys. Rev. E, V. 58 (1998). Available at http://prola.aps.org/pdf/PRE/v58/i2/p1425_1.
[15] Polus, Abishai and Sitvanit Shmueli. “Analysis and Evaluation of the Capacity of Roundabouts.” Transportation Research Record: Journal of the Transportation Research Board, V. 1572 (1997). Available at http://trb.metapress.com/content/p1j1777227757852.
[16] Polus, Abishai, Sitvanit Shmueli Lazar, and Moshe Livneh. “Critical Gap as a Function of Waiting Time in Determining Roundabout Capacity.” J. Transp. Engrg., V. 129, Issue 5 (2003).
[17] Schreckenberg, M., A. Schadschneider, K. Nagel, and N. Ito. "Discrete Stochastic Models for Traffic Flow." Phys. Rev. E, V. 51 (1995).
[18] Zein, Sany R., Erica Geddes, Suzanne Hemsing, and Mavis Johnson. "Safety Benefits of Traffic Calming." Transportation Research Record: Journal of the Transportation Research Board, V. 1578 (1997).
+ +1 + +# + +1sTAAeAeTAneAe[ + +: dew abe + +eae·oortoeotxur + +```c +let int x = 10; +} (int x); + +__________ +: (sod $\lambda$ 'sodx) xedns +(sodx aut 'sodx aut) uotteooTtXe orqnd +} ootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootootoot + +:deu ebspeed + +Location.java + +public class + +private int xPos; +private int yPos; + +private technical occupation; private int type; + +public Location(int xPos, int yPos) { + +this.yPos = yPos; +this.yPos = 1 + +$\therefore m - 1 \neq 0$ ; + +public int getXPos() { return xPos; + +$\therefore m - 1 \neq 0$ ; + +```bash +pureline getzeros() +return vPos; + +__________ + +return occupied; + +11 + +occupied $=$ !occupied; + +public int type() { + +return -1; // not occupied + +return type; // 1 for standard, 2 for aggressive + +} + +public voidsetType(int type) { this.type $=$ type; + +public boolean isEntry() + +return false; + +__________ + +return false; + +public String toString() { return "\( + xPos + " , " + yPos + ")"; + +:stx stTAEy eaeetid +:sse steAey eaeetid +:quee tut eaeetid +:tput aetetid +:peo<stTAEy eaeetid + +# esrj uin + +(1)peidncooocoo (2) (Aunstc) 0f ((uBneI % (I + )sodxieaeaeaeaeaeae)) eob (yupm) oob oob = oot ooe + +(I)VATM--(1)0x26:awatina + +(1) + +__________ + +0364) 1nfer este + +:mpmumn (Aunusr.o | ()AnuunwJ) + +(00000001 00000001) + +(1) + +(07) + +( + +:slxuun + +(s)xtxgct6 stTAAeAA orTqnd + +# + +:saeue uanex + +r + +__________ + +1 + +(1) $4 + 4 + 4 + 4 + 4 + 4 = {12}$ (1) + +1 + +${174} + 4 + 8 + {48} - 9 = {216}$ + +(1) + +(1) + +()() +} +(4a0a13 4:43501)10a2408 4:435013 [q] +AEBF·PEOH + +![](images/f568f96add7279ac4bb37f026bb9cd0433926708f664438111c6eecd87df2beb.jpg) + +RandomTrafficLightController.java + +package simulator; + 
RandomTrafficLightController.java

```java
package simulator;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Random;

public class RandomTrafficLightController implements TrafficLightController {

    public RandomTrafficLightController(Road road, ArrayList<Location> entryLocs,
                                        int green, int red) {
        lights = new ArrayList<TrafficLight>(entryLocs.size());
        flipRed = new ArrayList<Integer>(entryLocs.size());
        flipGreen = new ArrayList<Integer>(entryLocs.size());
        greenPlusRed = green + red;

        Random generator = new Random();

        // Give each light a random phase offset within one green+red cycle.
        for (int i = 0; i < entryLocs.size(); i++) {
            int flipRedTime = generator.nextInt(greenPlusRed);
            flipRed.add(new Integer(flipRedTime));
            flipGreen.add(new Integer((flipRedTime + red) % greenPlusRed));
        }

        // Start each light in a random state.
        for (int i = 0; i < entryLocs.size(); i++) {
            boolean roadGo = generator.nextBoolean();
            lights.add(new TrafficLight(road.getPrev(road.getNext(entryLocs.get(i))),
                                        entryLocs.get(i), roadGo, !roadGo));
        }

        time = -1;
        this.step();
    }

    public HashSet<Location> step() {
        time++;

        // Flip any light whose red-to-green or green-to-red time has arrived.
        for (int i = 0; i < lights.size(); i++)
            if ((time % greenPlusRed) == flipRed.get(i)
                    || (time % greenPlusRed) == flipGreen.get(i))
                lights.get(i).flip();

        HashSet<Location> stoppedLocs = new HashSet<Location>(lights.size());
        for (TrafficLight light : lights) {
            if (!light.getRoadGo())
                stoppedLocs.add(light.getRoadLoc());
            else if (!light.entryGo())
                stoppedLocs.add(light.getEntryLoc());
        }
        return stoppedLocs;
    }

    public ArrayList<TrafficLight> getLights() {
        return lights;
    }

    private ArrayList<TrafficLight> lights;
    private ArrayList<Integer> flipRed;
    private ArrayList<Integer> flipGreen;
    private int greenPlusRed;
    private int time;
}
```
SynchronizedTrafficLightController.java

```java
package simulator;

import java.util.ArrayList;
import java.util.HashSet;

// Only these members are legible in the original listing; the bodies of the
// constructor and of step() did not survive extraction.
public class SynchronizedTrafficLightController implements TrafficLightController {

    public ArrayList<TrafficLight> getLights() {
        return lights;
    }

    private ArrayList<TrafficLight> lights;
    private ArrayList<Integer> flipGreen;
    private int greenPlusRed;
}
```
UniformTrafficLightController.java

```java
import java.util.ArrayList;
import java.util.HashSet;

public class UniformTrafficLightController implements TrafficLightController {

    public UniformTrafficLightController(Road road, ArrayList<Location> entryLocs,
                                         int green, int red) {
        lights = new ArrayList<TrafficLight>(entryLocs.size());
        for (Location loc : entryLocs)
            lights.add(new TrafficLight(road.getPrev(road.getNext(loc)), loc));
        this.green = green;
        this.red = red;
    }

    public HashSet<Location> step() {
        time++;

        // All lights share one clock: flip together at each green/red boundary.
        if (time % (green + red) == 0 || time % (green + red) == green)
            for (TrafficLight light : lights)
                light.flip();

        HashSet<Location> stoppedLocs = new HashSet<Location>(lights.size());
        for (TrafficLight light : lights) {
            if (!light.getRoadGo())
                stoppedLocs.add(light.getRoadLoc());
            else if (!light.entryGo())
                stoppedLocs.add(light.getEntryLoc());
        }
        return stoppedLocs;
    }

    public ArrayList<TrafficLight> getLights() {
        return lights;
    }

    private ArrayList<TrafficLight> lights;
    private int green;
    private int red;
    private int time;
}
```

\ No newline at end of file
diff --git a/MCM/2009/B/2009-MCM-B-Com/2009-MCM-B-Com.md b/MCM/2009/B/2009-MCM-B-Com/2009-MCM-B-Com.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3f55acc90a52ae8459b9f3b780ff9651e0ee333
--- /dev/null
+++ b/MCM/2009/B/2009-MCM-B-Com/2009-MCM-B-Com.md
@@ -0,0 +1,83 @@

# Judges' Commentary: The Outstanding Cellphone Energy Papers

Marie Vanisko

Dept. of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mvanisko@carroll.edu

# General Remarks

As in past years, the diverse backgrounds of the undergraduate participants yielded many interesting modeling approaches to the stated problem. The judges assessed the papers on the breadth and depth of analysis for all major issues raised, on the validity of proposed models, and on the overall clarity and presentation of solutions.

The Executive Summary is often still below par; it should motivate the reader to read the paper. It must not merely restate the problem, but indicate how it was modeled and what conclusions were reached, without being unduly technical.
Assumptions must be clearly stated and justified where appropriate. Some teams overlooked important factors and/or made unrealistic assumptions with no rationale. It should be made clear in the model precisely where those assumptions are used.

Graphs need labels and/or legends and should clearly convey the information referred to in the paper. Tables and graphs that are taken from other sources need to have specific references. Failure to use reliable resources and to properly document those resources kept some papers from rising to the top. The best papers not only list trustworthy resources but also document their use throughout the paper.

# Requirements and Selected Modeling Approaches

The Cellphone Problem involved the "energy" consequences of the cellphone revolution, and five Requirements were delineated. To receive an Outstanding or Meritorious designation, teams had to address issues raised in each of these Requirements. Additionally, Outstanding papers considered both wireless and wired landlines and the infrastructure to support cellphones and landlines.
With regard to cellphones, teams that rose to the top considered the infrastructure necessary—for example, the building of numerous additional communication towers if cellphones are to replace landline phones completely. This was of special importance in the analysis of the transitional phase. Also, the varying amount of electricity required by different types of cellphones was a consideration in the transitional and steady-state phases.

Interesting models were constructed for the transitional phase of the cellphone "takeover." Some teams considered the spread of cellphones as the spread of a disease and used the Verhulst model for logistic growth, using the population of the U.S. as the carrying capacity and estimating the rate of growth of cellphones from published reports on the growth of cellphone use. Other teams generalized this to an SIR model or used the Lotka-Volterra predator-prey model, with cellphones as the predators and landline phones as the prey. A few used the competing-species model. The judges looked very favorably upon models for which sufficient rationale was given as to why that model might be appropriate in this circumstance. Interpretation of the parameters and solutions as they applied to the problem at hand was essential.

Many papers ignored the transition phase and only considered the energy comparison for the steady state in order to determine the energy consequence. Some teams merely talked their way through the issues and did not construct a mathematical model. After estimating energy costs associated with landline phones and cellphones, many teams used linear equations to model the total costs associated with the numbers of phones.

# Requirement 2

Teams were asked to consider a "Pseudo U.S."—a country similar to the current U.S. in population and economic status, but with neither landlines nor cellphones. They were to determine the optimal way, from an energy perspective, to provide phone service to this country.
The teams were also to take into account the social advantages that cellphones offer and the broad consequences of having only landlines or only cellphones. + +Once again, consideration of the infrastructure for phones was important. In addition to landline phones and cellphones, many teams considered the VoIP (Voice over Internet Protocol) communication option. Not every team that considered VoIP took into account the costs for laying the cables; some assumed that such cables were already in place (a questionable assumption). However, failure to consider the VoIP method of phone service may have kept a Meritorious paper from becoming an Outstanding paper. If one were to assume that households would already have one or more computers with Internet access, the energy costs associated with VoIP would be quite small. + +In terms of finding the optimal way to provide phone service from an energy perspective, some teams used linear programming, using the costs determined in Requirement 1 and quantifying in various ways the social advantages of cellphones, as well as the preference for landlines in certain circumstances. Other teams used AHP (Analytic Hierarchy Process), which worked well to get parameters used in the optimization routine but did not work as an optimization tool in itself. Teams that considered the advantages and disadvantages of various phone types not just for individuals, but for businesses also, demonstrated a thoroughness that was commendable. Another factor that some teams considered was the number of children under 5 who would have no need for cellphones. + +# Requirement 3 + +This was an extension of Requirement 2, asking teams to consider the electricity wasted when cellphones are plugged in that do not need charging and when chargers are left plugged in after the phone is removed. Teams were to continue to assume that they were in the Pseudo U.S. and were to interpret energy wasted in terms of barrels of oil used. 
To determine the amount of energy wasted, teams had to first estimate the number of hours that a "typical" cellphone charger is in the state of charging a phone, left plugged into a phone not in need of charging, and left plugged in when the phone is not connected to it. Some teams did their own informal surveys, but better estimates were arrived at from publications and surveys.

In some papers, probability distributions were used to describe this behavior, but use of such distributions was not always justified.

Teams that were more comprehensive took into account the fact that some cellphones and chargers use less power than others do, based on brands, age, and capabilities of the phones and chargers. Assuming that all electrical energy is generated by oil, translating kilowatt-hours of energy into barrels of oil used was a straightforward transformation.

# Requirement 4

This requirement extended the concepts in Requirement 3 and asked teams to estimate the amount of energy wasted by all electronic chargers. Since this question was very open-ended, contest papers showed a wide variety of estimates for the energy wasted in terms of barrels of oil. The top teams estimated the average hours that appliances are left plugged in, charging and not charging, and also the number of hours that chargers are plugged in without the appliance.

More-comprehensive papers considered many other kinds of electronic devices and by comparison showed that the amount of energy wasted by cellphones is relatively small.

# Requirement 5

For this part, students were to consider the population and economic growth of the Pseudo U.S. for the next 50 years and predict energy needs for providing phone service based on their analysis in the first three Requirements. Predictions were to be interpreted in terms of barrels of oil used.

Papers needed to consider both economic growth and population growth in order to estimate energy needs in the future.
Various types of regression fits were applied to historical population data and economic data such as GDP. Using earlier estimates of energy requirements, coupled with the regression equations from historical data, predictions were made for the amount of energy used every decade for the next 50 years. Some teams predicted greater efficiency in future phones and the development of chargers that would use less electricity. + +Papers showed estimates for the number of barrels of oil used on a per-day basis or perhaps on a per-year basis. There was no one right answer, and answers given depended on assumptions made at the start. Some papers contained graphs displaying future values but did not give tables. A table together with a graph is a better way to display information. + +# Concluding Remarks + +Mathematical modeling is an art that requires considerable skill and practice in order to develop proficiency. The big problems that we face now and in the + +future will be solved in large part by those with the talent, the insight, and the will to model these real-world problems and continuously refine those models. + +The judges are very proud of all participants in this Mathematical Contest in Modeling, and we commend you for your hard work and dedication. + +# About the Author + +Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she has taught for more than 30 years. She was also a visiting professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. While in California, she co-directed MAA Tensor Foundation grants on Preparing Women for Mathematical Modeling, a program encouraging more high school girls to select careers involving mathematics, and was also active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project. 
Marie serves as a member of the Engineering Advisory Board at Carroll College, is on the advisory board for the Montana Learning Center for mathematics and science education, and is a judge for both the MCM and HiMCM COMAP contests. \ No newline at end of file diff --git a/MCM/2009/B/2009-MCM-B-Com2/2009-MCM-B-Com2.md b/MCM/2009/B/2009-MCM-B-Com2/2009-MCM-B-Com2.md new file mode 100644 index 0000000000000000000000000000000000000000..94e3316d8460629bb81bec9cf56a48595df2ab9b --- /dev/null +++ b/MCM/2009/B/2009-MCM-B-Com2/2009-MCM-B-Com2.md @@ -0,0 +1,73 @@ +# Judges' Commentary: + +# The Fusaro Award for the Cellphone Energy Problem + +Marie Vanisko + +Dept. of Mathematics, Engineering, and Computer Science + +Carroll College + +Helena, MT 59625 + +mvanisko@carroll.edu + +Peter Anspach + +National Security Agency + +Ft. Meade, MD + +anspach@aol.com + +# Introduction + +MCM Founding Director Fusaro attributes the competition's popularity in part to the challenge of working on practical problems. "Students generally like a challenge and probably are attracted by the opportunity, for perhaps the first time in their mathematical lives, to work as a team on a realistic applied problem," he says. The most important aspect of the MCM is the impact that it has on its participants, and, as Fusaro puts it, "the confidence that this experience engenders." + +The Ben Fusaro Award for the 2009 Cellphone Energy problem went to a team from the Lawrence Technological University in Southfield, MI. This solution paper was among the top Meritorious papers and exemplified some outstanding characteristics: + +- It presented a high-quality application of the complete modeling process. +- It demonstrated noteworthy originality and creativity in the modeling effort to solve the problem as given. +- It was well written, in a clear expository style, making it a pleasure to read. 
+ +# The Problem: Energy Consequences of the Cellphone Revolution + +The Cellphone Energy Problem involved many facets of the "energy" consequences of replacing landlines with cellphones and five Requirements were delineated. Teams had to address issues raised in each of the five Requirements. Additionally, the best papers considered both wireless and wired landlines and the infrastructure to support cellphones and landlines. + +# The Lawrence Technological University Paper + +# Assumptions + +The team began with a page of assumptions, most of which were well-founded and enabled them to determine parameters in their models. However, certain assumptions made were unrealistic and these led to results that did not reflect the real-world situation. In particular, in the eyes of the judges, assuming that all landline phones are cordless was a serious shortcoming that greatly impacted the issue of energy use. Furthermore, while the team did address the issue of infrastructure, the assumption that infrastructure for cellphones is equal to that for landline phones seemed to ignore the need for the large number of additional communication towers if cellphones were to replace landlines. + +# Requirement 1: Mathematical Formulation for the Transition + +In Requirement 1, teams were to consider the energy consequences in terms of electricity utilization of a complete transition from landline phones to cellphones, with the understanding that each member of each household would get a cellphone. The Lawrence Tech team shone in mathematically modeling this transition! For their first model representing the transition from landline to cellphones, the team used the basic logistic differential equation to model the rate of change in the number of cellphones over time. They used the total population as the carrying capacity and determined the intrinsic rate of growth of cellphones from published results. 
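In standard notation, the basic logistic differential equation described above is

$$\frac{dC}{dt} = rC\left( {1 - \frac{C}{K}}\right) ,$$

where $C(t)$ is the number of cellphones in use, $r$ is the intrinsic growth rate (which the team estimated from published adoption data), and the carrying capacity $K$ is the total population; the solution is the familiar S-curve that saturates at $K$.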
This was very well done, though references for the tables and graphs should have been included. The second model introduced was a predator-prey system of differential equations, and the team is to be commended on their clear statement of rationale for using this model, with cellphones causing the demise of landlines. However, this model quickly became complicated, so they headed "down a different route." And, once again, their rationale for the equations used and parameters arrived at was commendable.

In modeling the energy used by cellphones, the team considered three distinct models of cellphones and did a good job of researching the habits of individuals of different ages regarding talking times. The assumption they made that the average number of calls is directly related to the talk time per call might be questionable, but this was not considered a serious deficiency and it enabled them to estimate needed parameters in their model.

# Requirements 2, 3, 4, and 5

Documented sources were used to estimate energy used for charging batteries, and these were translated into barrels of oil used. Energy usage comparisons were demonstrated for landline cordless phones and cellphones. This was taken forward into Requirement 2 and they seemed to conclude that the optimal mix of landline and cellphones would be the state where the same amount of energy was used by cordless landline and cellphones.

For Requirement 3, after gathering data on energy consumption by phone chargers, the team demonstrated an interesting comparison of energy consumed by daily vs. weekly charging and charger left plugged in or not, and from this they estimated the long-term consequences of avoiding wasteful practices in the charging of cellphones. The team introduced a percentage comparison of energy wasted by various charging methods.

Requirement 4 extended the concepts in Requirement 3 and asked teams to estimate the amount of energy wasted by all idle electronic appliances.
Since this question was very open-ended, contest papers showed a wide variety of estimates for the energy wasted. The Lawrence Tech team limited themselves to the average hours that computers, televisions, DVD players/VCRs, and game consoles are left plugged in and the resulting annual oil consumption from such wasteful practices. A linear pattern of growth was projected up to 2059. More-comprehensive papers considered many more electronics and, by comparison, showed that the amount of energy wasted by cellphones is relatively small compared to many other electronic devices. Thus, when the team referred to cellphones as the "most energy consuming devices" in the Executive Summary, judges questioned the credibility of the paper. + +For Requirement 5, students were to consider the population and economic growth of a Pseudo U.S. for the next 50 years and predict energy needs for providing phone service based on their analysis in the first three Requirements. Predictions were to be interpreted in terms of barrels of oil used. To their credit, the Lawrence Tech team had numerous appendices with data tables (but again without reference). + +# Recognizing Limitations of the Model + +Recognizing the limitations of a model is an important last step in the completion of the modeling process. The students recognized that their model failed to look at technological changes, including advances in battery and cellphone technology. They also acknowledged that assuming that every member of a population has a cellphone puts cellphones into the hands of infants and ignores the fact that some individuals have more than one cellphone. + +# Conclusion + +Although there were some deficiencies in Requirements 2-5, the quality of the mathematical modeling done in Requirement 1, coupled with the excellent use of resources to answer the questions posed throughout, made the Lawrence Technological University paper one that the judges felt was worthy of the Meritorious designation. 
The team is to be congratulated on their analysis, their clarity, and their use of the mathematics that they knew to create and justify their own model for the cellphone revolution problem. + +# About the Authors + +Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she has taught for more than 30 years. She was also a visiting professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. While in California, she co-directed MAA Tensor Foundation grants on Preparing Women for Mathematical Modeling, a program encouraging more high school girls to select careers involving mathematics, and was also active in the MAA PMET (Preparing Mathematicians to Educate Teachers) project. Marie serves as a member of the Engineering Advisory Board at Carroll College, is on the advisory board for the Montana Learning Center for mathematics and science education, and is a judge for both the MCM and HiMCM COMAP contests. + +Peter Anspach was born and raised in the Chicago area. He graduated from Amherst College, then went on to get a Ph.D. in Mathematics from the University of Chicago. After a post-doc at the University of Oklahoma, he joined the National Security Agency to work as a mathematician. \ No newline at end of file diff --git a/MCM/2009/B/5261/5261.md b/MCM/2009/B/5261/5261.md new file mode 100644 index 0000000000000000000000000000000000000000..dc0a30c2de2c93c2f0d8077f5d21e3f54c6c7870 --- /dev/null +++ b/MCM/2009/B/5261/5261.md @@ -0,0 +1,350 @@ +The United States has undergone a massive transformation in the way it approaches telecommunications. In 30 years, it has gone from having an entirely land-line based phone system to one where $89.2\%$ of the population uses cell phones, with $16.4\%$ of households having replaced their landlines entirely. With these figures in mind, we set out to establish the key consequences and energy costs of this American phone system. 
+ +By collecting data on wattages of cell phone rechargers and modeling likely American cell usage, we calculated that a cell might waste $86\%$ of its energy intake through its charger: that results in 753,500 barrels of oil being wasted per year total. Using this information, compared to the energetic costs of landline phones, we model two transition scenarios as cell phones replace land lines in the United States. We concluded that the faster landlines can be phased out, the more energy the U.S. will save, as cordless landline phones actually consume much more energy than cells. + +Considering these facts, we found that a full cell network, combined with voice over internet protocol technology (VOIP), would be the best way to provide phone service to a Pseudo U.S. completely lacking in telecommunications. This would save the cost of the implementation of a landline infrastructure that would be rendered mostly redundant as cell phones became more popular. Because all the cell rechargers in this Pseudo U.S. would be brand new models with recent energy conservation features, cell phone waste would only add up to 234,100 barrels of oil annually. We modeled the increase in cell phone energy consumption in this Pseudo U.S. for the next 50 years with two models: one that accounts for the growth of the country, and another that also factors in a reasonable rate of technological advancement. In the first model, cell energy consumption would reach 1.53 million barrels of oil per year by 2059, while in the second, it would actually decrease to 525,000 barrels of oil by that time due to increases in battery efficiency and a reduction in standby power. + +Cell phone rechargers are only one small part of a large picture of standby power waste in America. Using extensive wattage and usage data on consumer electronics, we calculated that these devices waste 99.4 million barrels of oil in energy annually. 
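The nationwide waste figure quoted above can be reproduced with a back-of-the-envelope calculation. In the sketch below, the phone count (~250 million U.S. cellphones) and the energy content of a barrel of oil (~1,700 kWh thermal) are our own rough assumptions, not the paper's exact parameters; the per-charger draw is the ~0.589 W average derived in Section 1.

```java
public class ChargerOilWaste {
    public static void main(String[] args) {
        double phones = 250e6;         // assumed number of U.S. cellphones in use
        double avgWasteWatts = 0.589;  // average standby draw per charger (Section 1)
        double hoursPerYear = 8760.0;
        double kwhPerBarrel = 1700.0;  // assumed thermal energy per barrel of oil

        double wastedKwh = phones * avgWasteWatts * hoursPerYear / 1000.0;
        double barrels = wastedKwh / kwhPerBarrel;

        // Lands within roughly 1% of the 753,500-barrel figure quoted above.
        System.out.printf("%.0f kWh wasted, ~%.0f barrels of oil per year%n",
                          wastedKwh, barrels);
    }
}
```

Under these assumptions the estimate is about 759,000 barrels per year, which agrees closely with the paper's total and shows how sensitive the headline number is to the assumed phone count and standby wattage.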
These models show that although a single cell phone charger may waste only a small amount of energy (one author estimates leaving a charger plugged in for a day is about equal to driving a car for one second), the sheer magnitude of cell phone users means that this loss is significant and deserves recognition and concern in the public eye.

# Energy Implications of Cellular Proliferation in the United States

Since the introduction of cell phones to the United States, their usage among the American public has grown exponentially. They are convenient, portable, increasingly versatile, and in many instances essential to supporting the lifestyles of American citizens. That being said, there is a significant problem with our current system: overindulgent energy consumption. To address this issue, we will discuss the various electricity costs during a cell phone's life. Further, relative energy use of cell phones and landlines will be analyzed, and the transition currently occurring between cells and landlines will be modeled. Using this information, we will develop an energetically optimized phone system for an emerging country much like the United States, and calculate the potential waste energy lost by cell phones in this new system. Next, we will project the costs of telecommunications in the country as it grows for the next fifty years. Finally, we will analyze power lost by other types of consumer electronics that rely on rechargers and plugs, like laptops, televisions, and DVD players.

# SECTION 1: Wasted Electricity Due to Cell Phone Rechargers

To approach the energy cost of mobile phones, we first considered the energy consumption of a single cell phone. This comes in two varieties – the energy needed to recharge the cell phone and the energy that leaks from the charger when it is left plugged into the wall. To find the energy lost as leakage from cell phone chargers, we first had to determine the wattage that is being lost during this state.
David MacKay, the author of a book on sustainable energy use, convinced two members of the Cambridge University Engineering Department to measure a standard Nokia phone recharger in a calorimeter – a much more accurate technique than anything we could devise. This method reported 0.472 W being drawn while only the charger was plugged into the wall, 0.845 W wasted when a fully charged phone was left attached, and interestingly, 4.146 W lost as heat even while the phone was charging. MacKay also suspects that older phone chargers may use one to three watts[1]. Another source reported 2.77 W consumed by a phone charger while it was charging, and 0.45 W consumed while it was not[2]. A page on brand new Motorola chargers listed their standby wattage as about 0.2 W[3]. As MacKay's experiment showed the charger drawing about twice as much power with the phone attached as it did without the phone, we assumed that these brand new chargers would do the same.

Once we knew the wattages of old phone chargers (about 2 W without a phone attached and 3 W with a phone), fairly new phone chargers (0.472 W without a phone attached, 0.845 W with a phone), and brand new phone chargers (0.2 W without a phone attached, and probably about 0.4 W with a phone), we only had to make a few suppositions about the number of each type of phone charger in use in the U.S. today. This task was made easier by the knowledge that the average cell phone is replaced every 18 months[4]. It seems likely that the fairly new models and brand new models will be present in approximately equal numbers, while both will outnumber the older, leakier chargers. If we assume that $20\%$ of phone chargers are old or cheap, $40\%$ use about $0.472\mathrm{~W}$, and $40\%$ do not leak at all, then the average cell phone charger wastes about $0.589\mathrm{~W}$ while left plugged in without a phone and $0.938\mathrm{~W}$ while left plugged in to a fully charged phone.
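The weighted averages just quoted can be reproduced in a few lines. Note that arriving at the paper's 0.589 W and 0.938 W figures requires taking "do not leak at all" literally (0 W in both states) for the brand-new 40%, rather than the 0.2 W / 0.4 W quoted earlier for new Motorola chargers; this sketch makes that assumption explicit:

```python
# Weighted-average charger wattage (a sketch of the paper's 20/40/40 split).
# Assumption: "brand new" chargers are treated as drawing 0 W, which is what
# reproduces the paper's 0.589 W and 0.938 W averages.
shares = {"old": 0.20, "fairly_new": 0.40, "brand_new": 0.40}
standby_w = {"old": 2.0, "fairly_new": 0.472, "brand_new": 0.0}  # no phone attached
charged_w = {"old": 3.0, "fairly_new": 0.845, "brand_new": 0.0}  # charged phone attached

avg_standby = sum(shares[k] * standby_w[k] for k in shares)
avg_charged = sum(shares[k] * charged_w[k] for k in shares)
print(f"average standby draw: {avg_standby:.3f} W")             # 0.589 W
print(f"average draw with charged phone: {avg_charged:.3f} W")  # 0.938 W
```

If the brand-new chargers were instead assigned the quoted 0.2 W / 0.4 W, the averages would rise to about 0.67 W and 1.10 W, so the "no leak" reading is clearly the one the paper used.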
Next we need to consider how long the charger is in each of these states. We constructed a model using two equations. The first covers those who unplug their phone and their charger at the same time: $(x)(y)(0.938\ \mathrm{W})$, where $x$ is the number of times they recharge their phone a year, and $y$ is the average amount of time, in hours, they leave their phone plugged in after it has reached full charge. The second equation is aimed at those who unplug their phone from the charger, but leave the charger plugged in. We assume that those who don't unplug the charger right away once their phone is charged will rarely come back to unplug just their charger at a later date – perhaps a few times a year when they need the outlet, but not consistently. Thus, the equation is: $(x)(y)(0.938\ \mathrm{W}) + (0.589\ \mathrm{W})(8760 - 300 - xy)$. Here we have kept the first equation (because this energy is still lost), and simply added on the rate of waste while just the charger is plugged in (0.589 W), times the number of hours in a year (8760), minus the average time a phone would need to charge a year (300 hours). We also had to subtract out $(xy)$, to account for the hours the phone would still be attached to the charger after it was charged, leaking power at 0.938 W instead of 0.589 W.

Now we just need to weigh the two equations according to how many people we feel fit in each category, decide how many times an American recharges their phone on average, and decide how long they usually leave their cell plugged in after it has reached full charge. We supposed that $75\%$ of Americans unplug only their phone and leave the charger in the wall, while the other $25\%$ unplug both once they realize their phone is charged. From browsing a few pages of cell phone specs, we found average talk time to be around 5 hours, and average standby time to be about 10 days[5].
If the average American uses their cell for talking, texting, or other activities an hour a day, in three days they would have used up $60\%$ of their battery through activity (3 hours/5 hours) and $30\%$ of their battery through standby (3 days/10 days). So, we assume that the average American would only have to recharge their battery about 100 times a year (121 if they recharged it every three days, but after three days it would still have $10\%$ of its charge left). On the other hand, many Americans plug in their phones every time they come home, in effect recharging their phone 365 times a year. We plugged both of these numbers into our model to see their effect on total energy consumption.

We also chose 2 hours and 6 hours as two likely averages for how long Americans leave their phone plugged in after it is charged, as some might leave their phone plugged in all night, producing 6-9 hours of waste after a charge, while others might unplug the phone within minutes. Because we needed a single number for our models in Section 2, we selected 175 recharges a year and 4.5 hours left plugged in after charging as suitable averages. In the table below, the average time the user left the phone plugged in after it was fully charged is recorded along the top, and the number of times the phone was charged is recorded along the left side.
| | 2 hours | 4.5 hours | 6 hours |
| :-- | :-- | :-- | :-- |
| 100 recharges | 1.186 TWh | | 1.251 TWh |
| 175 recharges | | 1.281 TWh | |
| 365 recharges | 1.272 TWh | | 1.509 TWh |
+ +From these five scenarios, it appears that 1.281 TWh, or 753,500 barrels of oil (1700 kWh = 1 barrel), is a fair estimation of the average waste produced by American cell phone chargers in a year. Changing either of the two variables does not have a serious effect. But then how could we reduce waste? To find out, we constructed a second table, this one assuming that every single American cell phone user gets into the habit of unplugging their charger as they unplug their phone. + +
| | 2 hours | 4.5 hours | 6 hours |
| :-- | :-- | :-- | :-- |
| 100 recharges | 0.06 TWh | | 0.179 TWh |
| 175 recharges | | 0.235 TWh | |
| 365 recharges | 0.218 TWh | | 0.654 TWh |
Clearly this is the first habit to change. The vast majority of cell phone standby power waste is coming from those who only unplug their phone. If everyone unplugged their phone and charger at the same time, waste would be cut by $65-95\%$, depending on the other two factors. The constant power drain of the charger plugged in to the wall simply outweighs other factors like the number of recharges or how long the phone is left plugged in.

In terms of the energy required to keep a single cell phone charged throughout the year, we decided that on average, a phone would need to be recharged for 300 hours a year. This was estimated by multiplying our 100 recharges a year estimate by 3 hours per recharge – numbers we felt accurately reflected the needed recharge time. We determined that the average phone would charge at $3\mathrm{~W}$ by averaging a table of modern phone wattages, many of which were in the range of $1\mathrm{~W}$[6], and the wattages of older generation phones, which we assumed must have higher wattages, due to MacKay's measurement of over $4\mathrm{~W}$ lost as heat during charging[7]. Multiplying these 300 hours by the $3\mathrm{~W}$ average charging wattage gives $0.9\mathrm{~kWh}$ used in a year to charge a single cell phone. When combined with the $4.713\mathrm{~kWh}$ of waste per phone we determined above, this means that $84\%$ of the energy used on cell phones is wasted, which nicely splits the difference between 3-year-old statistics we found ($95\%$ lost as waste[8]) and statistics from only two months ago ($67\%$ lost as waste[9]). This makes sense, as we already found that brand new phones lose less energy to standby power, while older phones are more wasteful. Our number appears to be a reasonable average that takes into account old phones, phones of medium age, and new phones alike.
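The two-equation waste model can be sketched in a few lines. This is not the authors' code: the paper never states the number of chargers behind its table, so the fleet size printed at the end is back-calculated from the 1.281 TWh total rather than taken from the text, and should be read as illustrative only:

```python
HOURS_PER_YEAR = 8760
CHARGING_HOURS = 300        # estimated hours of charging per year (100 recharges x 3 h)
W_PHONE_ATTACHED = 0.938    # average draw with a fully charged phone still attached (W)
W_CHARGER_ONLY = 0.589      # average draw of an empty charger left in the wall (W)

def waste_wh(x, y, frac_leave_charger=0.75):
    """Annual waste in Wh for x recharges/year and y hours left plugged in
    after full charge, mixing the paper's two user categories 75/25."""
    both_unplugged = x * y * W_PHONE_ATTACHED
    charger_left = both_unplugged + W_CHARGER_ONLY * (
        HOURS_PER_YEAR - CHARGING_HOURS - x * y)
    return frac_leave_charger * charger_left + (1 - frac_leave_charger) * both_unplugged

w = waste_wh(175, 4.5)
print(f"per-charger waste: {w / 1000:.3f} kWh/year")
print(f"fleet size implied by the 1.281 TWh total: {1.281e12 / w / 1e6:.0f} million")

charge_kwh = CHARGING_HOURS * 3 / 1000   # 300 h at 3 W = 0.9 kWh of useful charging
print(f"wasted fraction using the paper's 4.713 kWh figure: "
      f"{4.713 / (4.713 + charge_kwh):.0%}")
```

On these inputs the sketch gives roughly 4.13 kWh of waste per charger per year, slightly below the paper's adopted 4.713 kWh; the gap presumably reflects inputs the text does not spell out, but the 84% wasted fraction follows directly from the paper's own 4.713 kWh and 0.9 kWh numbers.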
Now that we know how much energy an average cell phone consumes in a year – 5.613 kWh – we can calculate the effect of the proliferation of cell phones through America, as they replace landlines.

# SECTION 2: US Transition From Landlines to Cell Phones

The United States presents an interesting case study of the relative energy efficiency and usage of landlines versus cell phones. Since the introduction of the cell phone, their popularity has grown exponentially (see graph below). As of February 7, 2009, there were 271,778,000 cell phone subscribers $^{10}$ in the United States, and while this number continues to grow, the number of households using landlines is on a sharp decline. Between 2004 and 2008, the percentage of cell-only households in the United States rose from $4.5\%$ to $16.4\%$ . $^{11}$ We will now analyze the transition from the current system of a mixture of cell phones and landlines to a system consisting of exclusively cell phones. In order to do this, we have developed two models for the transitional phase. Based upon these models we will evaluate the relative energy costs of each type of transition, and discuss the most efficient route for obtaining an exclusively cell phone communication system.

![](images/d2b0899b4433b95d67f7f2b29681749079afea43db13959d7752ab528c4af41b.jpg)

In considering the total energy costs of cell phones, there are several different factors. The first, and most obvious, is the energy needed to keep the phone battery charged on a continual basis. There is also the production cost of each cell phone in terms of energy. This quantity is extremely variable and poorly documented in terms of energy usage. So, while we could roughly estimate the average energy needed to produce a cell phone or a cordless phone, we felt it was more worthwhile to concentrate on the well-documented data concerning the charging of cell phones and cordless phones.
In terms of the relative costs of production, we do know that each cell phone has a life of approximately 1.5 years $^{12}$ , compared to the 6-year average lifespan of a cordless telephone. If we assume the energy used to produce both types of phones is comparable, then over a given time period the production costs of cell phones will be four times those of cordless phones. The last major form of energy consumption due to cell phone usage is that of the cell towers themselves. We will delve into a discussion of how many additional cell towers will be needed to support the growing number of cell subscribers, and also analyze some of the energy costs of maintaining the land-line system.

Our primary focus in the ensuing models will concern the relative energy costs of keeping cell phones and cordless phones charged and usable. We would like to incorporate the other aspects of energy costs into our model, but the lack of reliable data would create an error large enough to render the model unreliable. Thus, we will tackle the problem at hand from the standpoint of energy usage when in the hands of the consumer.

# -Basic Information and Assumptions

Before we jump into our first model of this system, we need to establish the initial conditions for the transition from landlines to cell phones. For this problem we are asked to assume that the population of the United States is approximately 300 million. The actual population of the United States as of the end of 2008 is around $305\mathrm{~million}^{13}$ . For the sake of accuracy, we will use the actual population for all further calculations.

We have found that approximately $2.5\%^{14}$ of US households use neither landline nor cell phones.
Thus, for our models, we will assume that a complete transition from landlines to cell phones will result in $97.5\%$ of the population having cell phones, since the variance in household size, based upon demographic information, is negligible on such a large scale. Put simply, everyone who wants to have access to a phone will have one at this percentage. What this means for us is that since there are 111.1 million households in the $\mathrm{US}^{15}$ , 108.3 million households require some form of phone line. In 2008, $16.4\%$ of total households opted to use only cell phones. Thus, since we have $97.5\%$ of the population using some form of phone, the number of households using landlines in 2008 is:

$$
\mathbf{H} = (0.975 - 0.164) \times 111.1 = 90.1 \text{ million}
$$

Furthermore, we have found that the average number of people per household is $\mathbf{m} = 2.745$ . However, this data is only part of what we need.

The problem becomes a bit more complicated when we consider what it actually means to provide each member of every household with a cell phone. There are two primary concerns. First of all, with this transition, we are assuming that every man, woman, and child receives a cell phone. However, approximately $6.9\%^{16}$ of the US population is under the age of 5, which equates to 21 million children. There is little or no reason to provide these children with cell phones. Considering this data, if we remove these 21 million children from the number of subscribers, the United States would be very close to achieving a complete transition already. If we assume a starting population of 305 million and the number of cell subscribers as 272 million, that leaves us with 33 million US citizens to supply with cell phones. However, if we remove the 21 million children under five from this list, we need only supply 12 million more cell phones, which certainly could be done in a matter of a couple years if necessary.
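The household arithmetic above can be replayed directly as a quick check, using the paper's own inputs:

```python
# Households still on landlines in 2008 (paper's inputs: 97.5% want a phone,
# 16.4% are cell-only, 111.1 million households).
households_m = 111.1
landline_households_m = (0.975 - 0.164) * households_m
print(f"landline households: {landline_households_m:.1f} million")  # 90.1

# Cell phones still needed for a complete transition.
population_m, subscribers_m, under_five_m = 305, 272, 21
needed_m = population_m - subscribers_m - under_five_m
print(f"phones still to supply: {needed_m} million")  # 12
```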
The second problem lies with the data for current cell phone users. We know that there are approximately 272 million cell phone subscribers currently in the United States. However, this number does not account for people who hold more than one account – namely people who have a personal cell phone and a work cell phone. In fact, there is hardly any data on the number of people who fall into this category, and the information we have found cannot be corroborated. Thus, we will assume that these multiple-phone users have a negligible effect on the total number of cell users in the US, and continue to use 272 million cell phone subscriptions as our starting point.

# -Preliminary Data on Cell and Cordless Phone Energy Use

In order to accurately model the energy costs of the phone system in the United States, we needed to establish a few important quantities. Throughout our research we have discovered a large amount of information pertaining to three types of phones - cell, cordless home phone, and corded home phone. In order to establish the energy "costs" of each of these three types of phone, we needed to calculate five specific pieces of data:

$\mathrm{E}_{\text{production}} =$ Energy Used to Produce Each Type of Phone

$\mathrm{E}_{\text{support}} =$ Energy to Support Each Type of Phone per Year

$\mathrm{E}_{\text{charge}} =$ Energy Used to Charge/Power Each Type of Phone per Year

$\mathrm{LS} =$ Average Lifespan of Each Type of Phone

$\mathrm{N} =$ Number of Each Type of Phone Being Used

The first two pieces of data - $\mathrm{E}_{\text{production}}$ and $\mathrm{E}_{\text{support}}$ - are nearly impossible to find to any degree of accuracy. There is a great deal of information available concerning costs of materials and overall monetary cost to manufacture a given phone, but the energy that goes into actually producing a given phone is unknown.
So, we will make the assumption that it takes the same amount of energy to produce a cell phone and a cordless phone. The number of corded phones currently being sold is negligible.

Likewise, the energy that goes into phone support – energy costs of cell tower construction, tower and landline upkeep, signals, etc. – is not well documented. We found two rough estimates – the first, that $0.12\%$ of global primary energy use is used by telecom companies.[17] If this proportion holds true for the U.S., which uses 3,923 TWh a year,[18] then U.S. telecom companies consume about 4.7 TWh per year. This does not tell us whether or not this energy is going towards landlines or cell phones, but it does give an idea of the scale – larger than the energy consumed by cell phone charging itself. A second estimate, gathered from quarterly reports, was that Japanese mobile telecommunications companies use 120 Wh per user per day[19]. If American mobile companies did the same, this would equal 11.9 TWh per year. These numbers contradict each other – the mobile-only estimate is more than double the estimate for all telecom activity, which ought to be the larger of the two. However, more accurate data is simply not available[20]. Since these two quantities cannot be accurately determined, we have taken the remaining three variables to use in our models of electricity utilization during the transitional phase as well as the subsequent steady state of equilibrium. The kWh used by the average cell phone was found to be 5.613 kWh, as described in Section 1, by adding the $4.713\mathrm{~kWh}$ of waste per phone that we calculated there to the $0.9\mathrm{~kWh}$ representing 300 hours per year of charging at an average of $3\mathrm{~W}$. The $28\mathrm{~kWh}$ used per cordless phone was taken from a TIAX LLC study on the power consumption of consumer electronics, which will be discussed in detail in the consumer electronics section at the end of the paper[21].
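The two infrastructure estimates can be recomputed to see the tension between them; note that 0.12% of 3,923 TWh works out to roughly 4.7 TWh (a sketch; the subscriber count is the paper's 271.778 million figure):

```python
subscribers = 271.778e6

# Estimate 1: 0.12% of the 3,923 TWh U.S. annual figure goes to telecom.
telecom_twh = 0.0012 * 3923
# Estimate 2: 120 Wh per mobile user per day, scaled to U.S. subscribers.
mobile_twh = 120 * subscribers * 365 / 1e12   # Wh -> TWh
print(f"all-telecom estimate: {telecom_twh:.1f} TWh/year")
print(f"mobile-only estimate: {mobile_twh:.1f} TWh/year")
```

On these inputs the mobile-only figure exceeds the all-telecom figure, which is one way to see that at least one of the two published estimates cannot be right.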
The $2.2\mathrm{~kWh}$ for the corded phones was estimated by multiplying $0.25\mathrm{~W}$ (every source we found said that corded phones used a "smidgen" or a "dab" of power running through the phone line, but this number must be greater than $0\mathrm{~W}$) by the number of hours in a year, 8760. The average lifespan of a cell phone (determined by consumer replacement rate, not breakdown rate) was gathered from a number of sources[22], while the lifespan of cordless and corded phones was estimated. We also estimated that there were two cordless landline phones for every one corded landline phone.
| | $E_{charge}$ (kWh) | LS (Years) | N |
| :-- | :-- | :-- | :-- |
| Cell Phone | 5.613 | 1.5 | x |
| Cordless | 28 | 6 | 0.667(y) |
| Corded | 2.2 | 10 | 0.333(y) |
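Two figures in this table can be cross-checked in a couple of lines: the corded estimate (0.25 W drawn all year) and the blended per-landline-phone average implied by the 2:1 cordless-to-corded mix (a sketch using only the inputs above):

```python
# Cross-checking the landline entries of the table.
corded_kwh = 0.25 * 8760 / 1000                      # 0.25 W all year, about 2.2 kWh
landline_kwh = (2 / 3) * 28 + (1 / 3) * corded_kwh   # 2:1 cordless-to-corded mix
print(f"corded phone: {corded_kwh:.2f} kWh/year")
print(f"average landline phone: {landline_kwh:.1f} kWh/year")
```

The blended figure comes out near 19.4 kWh per year, which is the per-landline-phone number the transition models below apply.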
As the reader can see from this table, the primary concern now becomes calculating the number of each type of phone in use at a given period in time. In order to help us with these calculations, we have developed two models to describe the relative changes in the number of cell phones to the number of landline phones.

# -Model 1: Current Trends

If we examine the current trends of cell phone usage in the United States, we can establish a reasonable timetable for the transition between landlines and cell phones. For our first model, we will assume that the current trends will continue unimpeded and unchanged until we reach a state of $97.5\%$ of people having cell phones (all those who desire a phone have one) and $0\%$ of people using landlines. Using data since 2000, we can construct the following trends[23]:

![](images/b47000216c0db83a685318c89346d5aba6fea509aa7e6bacb6db2008b24e29d4.jpg)

For the sake of consistency between our two models, we will calculate the total energy consumption due to phone usage by the American public between 2008 and 2015 for each model.

In order to calculate the total energy used to charge all of the cell phones in the US, as well as power all of the corded and cordless landline telephones, we first need to find the respective areas under the curve of our two lines between 2008 and 2015. To calculate the area under the curve, we plotted our most recent data and constructed a projection based upon the current trend of the respective percentages of phone usage. After doing this, we acquired the best-fit trend-line for each projection and determined its equation. With the equation, we simply took the integral of the curve over our time period, with 2008 as the lower bound and 2015 as the upper bound. This integral gives us the average $\%$ of the population that has a cell phone each year, times the number of years we are considering (7).
Now, if we consider the fact that the average cell phone uses $5.613\mathrm{~kWh}$ per year and the average landline phone uses $19.4\mathrm{~kWh}$ per year[24], our equation for total energy used by phones between 2008 and 2015 becomes:

$$
E_{phone} = (\text{Average Population}) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Equation})\, dt \right] \cdot \left( \frac{\text{Phone Energy Used}}{\text{Year}} \right)
$$

$$
E_{cell_1} = (313{,}100{,}000) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Cell Equation}_1)\, dt \right] \cdot \left( 5.613\, \frac{\text{kWh}}{\text{Year}} \right) = 11.4\ \text{TWh}
$$

$$
E_{landline\ phone_1} = (313{,}100{,}000) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Landline Equation}_1)\, dt \right] \cdot \left( 19.4\, \frac{\text{kWh}}{\text{Year}} \right) = 7.9\ \text{TWh}
$$

$$
E_{tot}(\text{Model 1}) = 11.4\ \text{TWh} + 7.9\ \text{TWh} = 19.3\ \text{TWh}
$$

Thus, the total amount of energy used by cell phones, corded phones, and cordless phones between now and 2015 is 19.3 TWh. If each person in the US (technically $97.5\%$) had a cell phone and no landline phones were used, the total energy used would be approximately 12 TWh.

# -Model 2: Current Trends with Resistance to Extremes

Model 1 works nicely if we assume that the current trends will continue until the landline becomes extinct, but in all probability, this is not what will actually happen. It is probable that cell phone usage will creep very close to our maximal $97.5\%$ of the population line, but there are several reasons why landline usage will not drop to $0\%$. Many people feel more comfortable with the added security of a landline to use if for some reason their cell phone is not working. Landline phones are generally easier to talk on for long periods of time, and the current worry that cell phones may cause cancer also factors into the equation.
Also, when we consider senior citizens, who tend to be slightly more resistant to technological changes, it becomes apparent that a substantial percentage of American citizens may opt to stay with a landline as long as it is available. Using information from several different polls concerning the public's feelings about having a landline for backup and practical purposes, we have established a second model that accounts for this resistance to change.

![](images/657a693042f68cf6d3ca1beea36673d9180b612d0e54135248739a7f5e325ecd.jpg)

Using this model, we can now calculate the total energy used by phones between 2008 and 2015. We will use the same equation as in Model 1 above:

$$
E_{phone} = (\text{Average Population}) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Equation})\, dt \right] \cdot \left( \frac{\text{Phone Energy Used}}{\text{Year}} \right)
$$

$$
E_{cell_2} = (313{,}100{,}000) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Cell Equation}_2)\, dt \right] \cdot \left( 5.613\, \frac{\text{kWh}}{\text{Year}} \right) = 11.25\ \text{TWh}
$$

$$
E_{landline\ phone_2} = (313{,}100{,}000) \cdot \left[ \int_{2008}^{2015} (\%\,\text{Landline Equation}_2)\, dt \right] \cdot \left( 19.4\, \frac{\text{kWh}}{\text{Year}} \right) = 17.6\ \text{TWh}
$$

$$
E_{tot}(\text{Model 2}) = 11.25\ \text{TWh} + 17.6\ \text{TWh} = 28.85\ \text{TWh}
$$

Thus, with Model 2, we see a total of 28.85 TWh being used in the next seven years with the current mixture of cell phones and landline cordless and corded phones. Obviously, having both systems running simultaneously is an extremely wasteful practice. Again, if the United States were restricted to only cell phones, the energy used over the same time period would be 12 TWh.

Obviously, at some point, if the number of landline users drops to a minimal level, it will no longer be profitable for landline service providers to stay in business.
However, we feel that as long as there are those who are willing to continue to pay for their service, and the cost of upkeep on landlines remains minimal, there will continue to be some form of landline usage in the United States. From an energy standpoint, this does not make any sense, but in order for there to be a complete transition from landlines to cell phones, it would require some sort of governmental intervention, or a collapse of the landline industry. + +# -Summary of Section 2 + +These two models exemplify the predicted modes of transition from landline phones to entirely cell phones. We see that when the transformation is complete, the system using only cell phones is actually more energy efficient than the current system of overlapped usage of cells and landlines. If we consider the energy costs of cell phones and landlines put together based upon relative usage trends between 2008 and 2015, we find that: + +
| | $E_{cell}$ (TWh) | $E_{landline}$ (TWh) | $E_{total}$ (TWh) |
| :-- | :-- | :-- | :-- |
| Instant Transformation (All Cells) | 12 | 0 | 12 |
| Model 1: Current Trends Extended | 11.4 | 7.9 | 19.3 |
| Model 2: Current Trends Corrected | 11.25 | 17.6 | 28.85 |
+ +The longer it takes for landlines to be phased out, the more energy the U.S. loses. This is somewhat unfortunate, considering the undeniable efficiency and reliability of a corded phone. However, the cell is here to stay, and once nearly every American has a cell phone, it just doesn't make energetic sense to operate two redundant systems. + +# SECTION 3: Bringing Phone Service to the "Pseudo US" + +We now consider a second scenario – that of a “Pseudo US.” We assume this country has the exact same population, economic status, and infrastructure as the United States. However, there will be no initial phone system in place. Our goal is to devise a strategy for the implementation of a phone system that minimizes energy usage, while still compensating for the needs of the general population. Obviously, there are several socioeconomic factors at play, and in addition to providing a detailed analysis of our energy efficient phone system, we will discuss the consequences of all current types of phone systems: landlines, cell phones, satellite phones, voice over internet protocol technology (VoIP) and logical combinations of these services. + +Certainly, using only landlines connected to corded phones would be a very energetically cheap technique. As we showed in Section 2, corded phones use less energy than cell phones, drawing only a tiny amount of power through the phone lines. A landline only system would also have lower maintenance costs from the provider's standpoint – mostly just passive wiring instead of active cell towers to power and repair. Phones would also be broken or lost less often, and there would be far less social incentive to replace them with a more stylish or feature-filled newer model every eighteen months. There would also be fewer phones per American – many families could do with one or two instead of one for each family member. + +However, this is not a realistic model. 
Americans have already proven that they favor cordless phones over corded, and that they are willing to replace landlines entirely with cell phones in large numbers. It would seem inevitable that the same process we see occurring today in the real U.S. would result in the eventual replacement of millions of landlines with cells. As consumers purchased more and more cell phones, cell towers would have to be built, and the existing landline network would quickly become redundant and obsolete. At the same time, Americans would be purchasing energy-sucking cordless phones for their land lines as well. Unless the government decided to make cell phones and cordless phones illegal – a rather draconian measure – we would soon be in the same fix we are in now, with millions of watt-draining cordless phones and cell phones. It would be better for the country, energetically, if everyone just used cell phones instead of using 2/3 cordless and 1/3 corded landlines. For these reasons, it would be better to simply build a cell network in the first place, to avoid building two nationwide communication networks that cover the same areas.
In only a few decades, cell phones have become status symbols, entertainment devices, and even companions, in a sense. + +However, there are a few weaknesses to an all-cell network. Businesses would have to issue employees cell phones to replace their desk phones (many employees would not want to use their personal cell phones, because it would allow any business contact to invade their privacy at any time), and it would be difficult to prevent them from using their minutes for personal calls. Families would no longer have a dedicated landline in the house that was reliable, + +always on, and inexpensive. And cells do have a few major security and safety concerns. Of course, there is the current debate about whether or not cell phones could cause cancer. There are plenty of sources arguing on either side of the debate, but we don't feel that there is enough information or time to draw even tentative conclusions. Still, it is clear that some would prefer to have a non-cell alternative to at least limit their exposure. On the security side of things, it is well established that cell phones can be vulnerable to hackers, interception, and jamming. The government and many corporations would probably not want to rely solely on cells for this reason, so a few landlines might have to be installed for them. On the other side of things, having every American citizen talking on a cell phone would allow the government to listen in on any calls it liked, without the physical evidence of a wiretap. Protectors of civil rights would no doubt be alarmed by this development. + +Luckily, there is a simple, cheap, and energetically conservative way to solve several of these problems: VOIP. 
Assuming that this country had about the same technology level as the U.S., an extensive network of internet cables would already be in place, snaking between all but the smallest towns, and connecting to around $37\%$ of households across the country (the $24\%$ of houses that would have been on dial-up would not be connected, as there would be no phone lines)[25]. Attaching the remaining $63\%$ of homes to the existing DSL and cable hubs already scattered through neighborhoods across the country would be considerably easier than laying new phone lines underground, throwing up millions of phone poles, or constructing thousands of cell towers. The technology to send voice data over the Internet has already been mastered – in the fourth quarter of 2008, Skype reported 405 million accounts worldwide[26]. Even those homes without a computer could be served by a phone that plugs directly into an internet router. This technology would likely be simple to develop, considering the existing similarity between a phone jack and an internet jack – it just does not exist today because there is no need to develop it with a landline system already in place in the real United States. Modern internet cables can transmit such a large volume of information that the additional auditory data would be fairly trivial. VOIP would also use a very small amount of resources, relative to a full cell or landline network. Cordless VOIP phones might consume a fair bit of power, but the savings on construction and infrastructure costs would be enormous – the majority of the internet cable that would be required would already have been laid. In combination with a full cell network, VOIP would allow businesses to give their employees a phone that they could be held accountable for, but which would also preserve the employee's privacy. Families could have a backup VOIP line which would be comfortable to talk on for long conversations.
And thanks to the cell network, Americans could communicate while traveling or living anywhere in the country.

Along with the implementation of this system would come a mandatory cell phone recycling program. Much of the energy cost of producing new cell phones could be alleviated by simply recycling old ones. Specifically, if the Pseudo US were to recycle 100 million cell phones, this would save enough energy to power 19,500 households for a year[27]. This equates to 0.215 TWh.

# - Covering the Pseudo US with Cell Towers

With our implementation of an all-cell phone system, our primary obstacle is the construction of cell towers. In our research, we have found several different types of cell towers, but since we are looking for the most energy-efficient way to implement this system and we have the advantage of starting from scratch, the obvious choice seems to be the newly released Ericsson Tower Tube. It has a radial range of approximately 4 miles and has a wind turbine attached, so it uses $40\%$ less energy than conventional towers. A modular design allows it to be erected in a matter of days (the foundation can be completed in eight hours), it has a small footprint of 19.6 square meters, it is resistant to vandalism and the elements due to an enclosed design, and it uses heat-convection cooling to further reduce its energy cost[28]. It also has a shape aesthetically pleasing enough that it shouldn't drive down property values unduly, especially since its concrete structure can be formed in a variety of colors, shapes, and heights. Also, if this type of tower is used exclusively, the relative cost, both monetary and energetic, would be less than trying to manufacture several different types of towers and install them in the same areas simultaneously.

Now, it becomes our goal to cover the Pseudo US in the most efficient way possible.
Knowing nothing about the geography of the pseudo nation, we will devise the optimal grid for cell tower placement based upon data from the current United States. We want to minimize the number of towers in the nation while maximizing coverage. Using some basic geometry, the most efficient way to obtain $100\%$ coverage of the nation is through a triangular lattice, as seen below, with each tower at the vertex of an equilateral triangle, 6.93 miles from each neighboring tower (determined with simple trigonometry). This does assume a perfect scenario, as signal strength falls off with distance, and in reality towers would be placed closer together to ensure coverage at the intersection points.

![](images/f98d28bca363dacda6a24836fec114db87fab3b302ef2c74809c169e844a405a.jpg)

![](images/4aaa626ebc51231088f461a5b9de2725e0a843e0f74056b59b9a9d41ed973f33.jpg)

This lattice assures minimal wasted coverage; the circles' overlap is minimized. If we were to place this tiling throughout the current United States, since each cell tower encompasses a unique hexagonal area of $41.57\mathrm{mi}^2$ and the land area of the US is approximately $3,540,000\mathrm{mi}^2$, we would need approximately 85,150 towers.

Naturally, this sort of cell tower arrangement cannot and should not be implemented everywhere. For cities, we have devised a system that gives an estimate of the necessary density of the cell tower population based upon the population of the city. Each cell tower has a limit on the number of users it can support, so additional towers must obviously be introduced into most cities over the size of 10,000.

Even though we are talking about a Pseudo US, the most tangible data concerns the real United States. According to the 2000 Census, there are currently 601 US cities over the size of 50,000 people.
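The lattice arithmetic above is easy to verify. A minimal sketch in Python, using the 4-mile Tower Tube radius and the approximate U.S. land area quoted in the text (for circles of radius $r$ on a triangular lattice, neighboring towers sit $r\sqrt{3}$ apart and each tower owns a hexagonal cell of area $\frac{3\sqrt{3}}{2}r^2$):

```python
import math

r = 4.0                                    # Tower Tube broadcast radius, miles
spacing = r * math.sqrt(3)                 # distance between neighboring towers
hex_area = (3 * math.sqrt(3) / 2) * r**2   # unique hexagonal cell per tower
us_land_area = 3_540_000                   # approximate U.S. land area, mi^2

towers_needed = us_land_area / hex_area
print(f"{spacing:.2f} mi spacing, {hex_area:.2f} mi^2 per tower, "
      f"{towers_needed:,.0f} towers")
```

This yields 6.93 mi spacing, 41.57 mi² per tower, and about 85,160 towers, consistent with the figures above after rounding.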
We analyzed the population densities of each of these cities and determined that, in general, the density of cell towers in these metropolitan areas should be around 16 times the average density of the rest of our grid. This adds approximately 420 cell towers to our previously calculated total. Since there is no data on the remaining cities under 50,000 people, we recommend that for all cities between 10,000 and 50,000, the density of cell towers be increased by 4 times to meet cell phone service needs. Information on the possible cost or energy consumption of this network is difficult to come by – modern telecommunication companies didn't get where they are today by giving out important data about their operations. The best estimate we can make involves the Ericsson Tower Tube's $10\mathrm{kW}$ of signal strength. As our network would use 85,570 towers, this would mean energy consumption would be at least 0.312 TWh for the towers themselves – a surprisingly low amount, but cooling, manufacture, and other operational costs would likely drive it up considerably.

# -Wasted Electricity Due to Cell Phone Rechargers in Pseudo US

Energy wasted by cell phone rechargers in this pseudo U.S. can be determined using the same calculations used to calculate recharger waste in the real U.S. We will simply discard the older phones with the more wasteful rechargers, and assume that every cell-phone-using American will have one of the new chargers that wastes only $0.2\mathrm{W}$ on standby while left plugged in alone, and $0.4\mathrm{W}$ while plugged in to a fully charged phone. As before, the top row shows the average number of hours a pseudo American leaves their phone plugged in after it has reached full charge, and the left column shows the average number of times they recharge their phone in a year.
|  | 2 hours | 4.5 hours | 6 hours |
| --- | --- | --- | --- |
| 100 recharges | 0.359 TWh |  | 0.386 TWh |
| 175 recharges |  | 0.398 TWh |  |
| 365 recharges | 0.395 TWh |  | 0.494 TWh |
From our simulations, our pseudo U.S. would waste $30\%$ less energy due to cell phone rechargers than the real U.S., even if the pseudo Americans left their phones plugged in just as long as we do. The 0.398 TWh estimate would equal 234,118 barrels of oil. This is an encouraging fact, as the real United States will essentially reach this state in a few years, as more and more older cell phones are replaced. As before, we ran a second simulation to see what would happen if all pseudo Americans unplugged their chargers and phones at the same time.
|  | 2 hours | 4.5 hours | 6 hours |
| --- | --- | --- | --- |
| 100 recharges | 0.022 TWh |  | 0.065 TWh |
| 175 recharges |  | 0.086 TWh |  |
| 365 recharges | 0.079 TWh |  | 0.238 TWh |
If the pseudo Americans could unplug both phones and chargers, refrain from recharging their cells until absolutely necessary, and unplug them within two hours of reaching full charge, they could basically make standby power waste from cell phones disappear entirely (only 0.022 TWh/year, or $1.71\%$ of our current estimated waste). That would still be 12,941 barrels of oil wasted annually, but it would be a vast improvement.

# SECTION 4: Increase in Phone Energy Needs for the Pseudo US

Now that we have created a communication environment for the Pseudo US, we would like to model the electricity needs of the phone system over the next 50 years. This model will depend heavily on both population growth and economic growth. We have established reasonable trends for each and described why the population data is more significant than the specific economic data.

# -Population Growth

Starting with population growth, we have found that the growth rate will be approximately $0.9\%$ annually. The graph below represents our prediction of the population growth based upon the growth trend of the United States over the past several years. The CIA World Factbook has corroborated our prediction[29]. With our projection system, cell phone adoption will reach its maximum $(97.5\%)$ by the year 2015. We suspect that there will always remain a small number of holdouts – infants, those who eschew technology in general, or those who believe cell phones cause cancer. Our model, based on population growth, is helped by the fact that cell phones are not like T.V. sets or jeans – even if they become even more popular than they currently are, or if there is some huge economic boom in the next fifty years, it seems unlikely that the average American would buy three, four, or eight cell phones – the ratio will likely remain close to 1:1 (at least in terms of the phones that are actually used, and thus consuming energy).
So we predict that the number of cell phone subscribers will run parallel to, but just below, the number of Americans.

![](images/e0692de1b425a9ca4e56f5df70f46f23a39d162b27270237fbb83e2d3d69c18f.jpg)

# -Economic Growth

It's an understatement to say that the economic growth of the United States over the next 50 years is difficult to predict. When we simply plotted the real GDP per capita of the country, however, we found that there was a surprisingly robust trend[30].

![](images/2724bc51cbbce7f664e93fbda231e1104655c4d96cdf61239b5d949545bf6099.jpg)

The recession of the early 1980s is clearly visible on this chart, yet there is still a strong upward trend in GDP. Even with the current economic instability, this data suggests there is a good chance that in the long term, over the next 50 years, the U.S. economy will continue to grow at approximately the same rate it has been growing for the last four decades. This should be enough to allow almost all Americans to own a cell phone. We also predict that demand will remain high. Mobile phones have become such an integral part of American life, like the television, that nearly all Americans will own one whether the economy is in recession or in boom.
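Combining the population trend with the adoption ceiling gives a simple subscriber projection. A sketch in Python, where the $0.9\%$ growth rate and $97.5\%$ saturation come from the text, the 306-million starting population is our own assumption for 2009, and the short ramp-up to the 2015 ceiling is ignored:

```python
def projected_subscribers(years_after_2009, pop0=306e6,
                          growth=0.009, saturation=0.975):
    """Projected cell subscribers, assuming the pseudo-U.S. population
    grows 0.9%/yr and adoption sits at its 97.5% ceiling."""
    population = pop0 * (1 + growth) ** years_after_2009
    return saturation * population
```

Over the full 50-year horizon the population factor is $1.009^{50} \approx 1.57$, so the subscriber base grows by a little over half while staying just below the population curve.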
Perhaps a few more towers would have to be built to account for the increased signal load, but geographically the entire country is already covered. The existing cell towers could be upgraded at any time with more powerful transmitters, solar panels for increased energy efficiency, or any other technological advances that might come about in the next five decades. And if we connected these towers with high-bandwidth, advanced fiber-optic cables when they were first constructed, there would be room for the subscriber base to grow. This could also lead to powerful mixed technologies, such as wireless Internet transmitters mounted on the cell towers, blanketing large sections of the country with wi-fi coverage. If wireless technology is improved, this might eventually allow Americans to choose to communicate using the cell network, VOIP, instant messaging, email, or web-cams from anywhere in the country. A landline network could not provide this level of expandability and upgradability for decades to come.

One of the most exciting ways to upgrade would be adding a satellite network. If incremental improvements in satellite communications could be made, satellites could be an extremely attractive choice for covering the Rocky Mountain states and parts of the Midwest. We compiled a list of 15 contiguous states and Alaska, from Oregon to Kansas to New Mexico. This list contains fourteen of the fifteen most sparsely populated states (all but Maine), plus Arizona. Together, these states contain $52\%$ of the area of the United States but only $12\%$ of its population[31], and as they are contiguous, they would be the easiest to cover efficiently with satellites. If this network could be made operational immediately in 2008, we could simply not build any of the 44,278 cell towers that would have been in that area – a huge savings in materials, energy, labor, and money.
Unfortunately, there would likely be too many complex technological problems to overcome at that time. First, though satellite phones can currently text cell phones, modern cell phones do not have nearly enough power to reach satellites. Second, current satellite phones have trouble receiving calls indoors or in mountainous areas due to line-of-sight problems. Finally, some satellite phones have a noticeable lag in their conversations.

Using low-Earth-orbit satellites would limit the lag time in conversations and reduce line-of-sight problems (these satellites orbit the Earth at an incredible speed, making a complete revolution in about an hour and a half, so one will be in line of sight of almost any location shortly[32]). But using orbiting satellites instead of geostationary ones would mean the satellites would be over the U.S. only a fraction of the time – to be fully efficient, this satellite system would have to be international. There would also need to be an improvement in signal strength, both to improve performance and to reduce the power the phone itself would have to burn through to transmit and receive a signal, before satellite phones would really be ready to be used by millions. Satellites have the further downside of not being able to handle the same call load as a cell network or a landline network, though this would be somewhat mitigated by the fact that the satellites would only be handling the sparsely populated areas. All in all, it would appear that attempting to create a hybrid sat/cell phone in 2008 would only result in a hideously expensive, complex, bulky piece of equipment. But if the technology has improved by the time the first generation of cell towers is due for replacement anyway, it might be worth replacing the 44,000 or so cell towers in the Rocky Mountain and Midwest areas with a handful of satellites.
# -Prediction of Energy Needs Over the Next 50 Years

Taking these predictions of population and economic growth over the next 50 years together with an assumed increase in the energy efficiency of cell phone technology, we have created the following estimates of energy costs due to consumer use. For the first model, we will assume that cell phone efficiency remains constant throughout the next 50 years; that is, phones will continue to require the same amount of energy to charge throughout the entire period – essentially no technological advancement in terms of energy efficiency.

For the second model, we will take into account technological advancements concerning cell phones and their rechargers, as well as a transition toward the other telecommunication options referenced above. We anticipate that standby power waste would be nearly eliminated, if improvements in this field continue as they have for the past few years. Batteries would take longer to run out, and phones would need to be recharged less often, as cell technology became more advanced. For this second model, we will incorporate an exponential decay constant $\lambda$, which accounts for a fractional decrease of $\lambda$ in the energy used to power each cell phone each year. Our equation will be:

$$
\mathrm{Energy}(t) = 0.922 \cdot (1.009)^{t} \cdot (1 - \lambda)^{t}
$$

If we use these two models for the next 50 years, with a value of 0.02 for $\lambda$, we obtain the following data. The values in each box below are in millions of barrels of oil for the given year only, and account for consumer usage, not production costs or cell tower and satellite construction.
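As a cross-check, the technology-advancement row of the table can be regenerated directly from this equation. A sketch in Python, using $\lambda = 0.02$, the $0.9\%$ growth rate, and $t$ counting years after 2009 (the last decade differs from the tabulated value by one rounding step):

```python
def energy(t, lam=0.02):
    """Consumer-use phone energy in millions of barrels of oil per year,
    t years after 2009: 0.9%/yr growth in usage, offset by a compounding
    fractional efficiency gain lam."""
    return 0.922 * (1.009 ** t) * ((1 - lam) ** t)

# decade-by-decade projection, 2009-2059
projection = [round(energy(t), 3) for t in range(0, 51, 10)]
```

Setting `lam=0` instead reproduces the growth-only trajectory of the first model.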
|  | 2009 | 2019 | 2029 | 2039 | 2049 | 2059 |
| --- | --- | --- | --- | --- | --- | --- |
| All Cells – Holding Energy Efficiency Constant | .922 | 1.07 | 1.17 | 1.28 | 1.40 | 1.53 |
| All Cells – Incorporating Technological Advancements | .922 | .824 | .736 | .658 | .588 | .525 |
*In million barrels of oil

# SECTION 5: Energy Wasted by Consumer Electronics

If cell phone chargers could waste over a terawatt-hour of electricity in a year, how much energy might be leaking out of other chargers and plugs in the modern American household? Many devices are known to draw standby power even while off – any device with a remote certainly does this, and others leak electricity as well, simply because manufacturers were too cheap to supply plugs and chargers that shut off fully when a device is turned off. Theoretically, any appliance with a plug, such as a lamp, is going to leak a small amount of power. However, we found that there was just not enough reliable data on the number and usage of light fixtures to study lighting. We were able to study the energy consumption of the other main group of plug-bearing devices, consumer electronics, using a variety of sources. The Energy Information Administration of the Department of Energy produced a lengthy study on the types of appliances and electronics in American homes[33]. Many other sources detailed wattages and average kWh/year consumed by various devices – a writer for the Wall Street Journal who had purchased a power meter, electric companies, a group of Canadian scientists, and the City of Seattle, among others[34]. But a report by TIAX LLC, commissioned by the Consumer Electronics Association, was by far the most thorough, combining data on electronics wattages and usage patterns into a comprehensive study[35]. The data was gathered from their own studies, previous EIA and CEA studies, and information from electronics suppliers, scholarly articles, and Energy Star testing. The only major consumer electronics they did not report on were digital TVs, and they could also only provide yearly consumption estimates for component stereos, printers, and modems.
We checked all of their wattage and usage data using the sources mentioned above, and filled in the gaps with corroborated data and best-guess estimates on wattages and usage from a variety of sources cited in the individual sections below. We also updated the study (completed in January of 2007) as best we could, especially when considering VCRs and game consoles, whose use has changed drastically in the last two years. A summary of data is presented below, by type in order of total consumption. Note that some devices, like TVs or modems, do not have an idle state – they either run at full power or are turned off. Also, the numbers in these tables are total TWh consumed by all of the electronic devices of that type in the country in a year – relative amounts of each device are taken into account. + +Analog TVs: The TIAX report was very thorough on this subject, utilizing data on TV usage even down to the sixth-most used TV in a house. As such, there was little we could do to improve on it, save to say that the historical increase in analog TV usage has probably begun to level off in the past two years due to the increasing popularity of digital TVs. + +
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 43.6 TWh | n/a | 6.4 TWh | 50.0 TWh |
Analog TVs appear to be fairly wasteful, due to the average of 4 W they leak while turned off. This does not even take into account the power they waste while turned on with no one watching them.

Digital TVs: Estimating the TWh used and wasted by digital TVs proved difficult. It quickly became clear that there was a huge variation in wattages, from 100 W up to 400 or 500 W. A massive chart provided by CNET.com showed that the wattage of new HDTVs averages out to around 250 W, and that standby power on most new digital TVs has been reduced to about 1 watt[36]. This data was supported by wattage information on TVs from the other comprehensive appliance wattage lists cited above. For usage data, we assumed that in most cases the digital TV, being the newest TV, would be the television most used in the household, so we used the usage data TIAX had gathered for the most-used TV. We then found that about 19.25 million flat-screen TVs had been sold since the study was done, meaning that there were now 59.25 million digital TVs in the country[37]. This data allowed us to estimate the power consumption of digital televisions as follows:
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 37.8 TWh | n/a | 0.36 TWh | 38.2 TWh |
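As a rough sanity check, the digital-TV totals can be approximately reproduced from the inputs given above. A sketch: the 59.25 million sets and the 250 W / 1 W wattages come from the text, while the 7-hours-a-day figure for the household's most-used TV is our own assumption standing in for the TIAX usage data:

```python
sets = 59.25e6                 # digital TVs in the U.S. (from the text)
active_w, standby_w = 250, 1   # average active and standby wattage (text)
hours_on_per_day = 7           # assumed daily use of the most-used TV

active_twh = sets * active_w * hours_on_per_day * 365 / 1e12
standby_twh = sets * standby_w * (24 - hours_on_per_day) * 365 / 1e12
```

This gives roughly 37.8 TWh active and 0.37 TWh standby, close to the table's figures.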
Digital TVs actually provide a bright spot in this study of waste. They do use a huge amount of power when on, but at least their standby power has been minimized. They waste less than one percent of the total energy put into them, while analog TVs waste over $12\%$. From our research, it appeared that the Energy Star program was at least partially responsible for this limitation. Unfortunately, electronics manufacturers have no inherent pressure on them to limit standby power, as it's cheaper for them to make wasteful, leaky plugs, and the cost of the lost energy is stealthily passed on to the consumer. Programs like Energy Star make this hidden cost of electronics ownership more visible to the public, and reward manufacturers for bringing down their standby power by advertising their efficiency for them.

Desktop computers: Again, the report to the CEA was incredibly complete on the subject of desktop computers: analyzing CRT and LCD monitors separately, then combining them into a weighted total, averaging out the wattages of many different types of desktops, and recording and taking into account the amounts of time spent in screensaver and standby modes.
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 20.2 TWh | 0.09 TWh | 0.99 TWh | 21.2 TWh |
Set-top boxes (Cable, satellite, and other TV boxes): Results on this electronics type were very interesting. Total consumption was fourth out of all electronics types, but over a third of that consumption occurred while the box was not being used to watch TV. By TWh, set-top boxes waste more energy than any other type of electronics device that we studied – a surprising fact, as there are almost a million fewer set-top boxes than analog TVs. The boxes still use 15 W when off, presumably so that they can stay in contact with the service provider and, in some cases, perform services like turning on at a certain time to record a show. If these boxes could be made more efficient when off, a vast amount of electricity could be saved.
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 6.36 TWh | n/a | 13.3 TWh | 19.7 TWh |
+ +Compact audio: Compact audio systems had one of the widest variations in wattages between devices, so the TIAX study was invaluable in weighting these different wattages against the various numbers of these systems in use. Clearly, this is another area for improvement, with significant waste arising in both the idle and off stages. + +
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 1.44 TWh | 0.912 TWh | 3.8 TWh | 6.16 TWh |
+ +Component Stereo: TIAX only estimated that a component stereo might use 115 kWh per year, and claimed they had an installed base of 50 million units. To fill in the gaps, we figured that the stereo would have a usage pattern similar to a compact audio system, with wattage more like a home theater in a box. By using reasonable numbers for wattage and usage, we calculated a stereo would use about 105.17 kWh a year. + +
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 1.47 TWh | 0.913 TWh | 2.88 TWh | 5.26 TWh |
Game consoles: This was one area where the TIAX report's age became an issue. It reported only 2.624 TWh used by game consoles, with 0.96 TWh going to their active state, 1.28 used while idle, and 0.384 consumed while off. But game consoles have not only become more popular since January 2007; the proportion of older-generation consoles to new consoles has also gone down. This matters because the newer, more advanced consoles are considerably more power-hungry. TIAX reported an average wattage of $36\mathrm{W}$ for consoles, but multiple sources we found cited the new Xbox 360's wattage at around $173\mathrm{W}$ and the PlayStation 3's at about $190\mathrm{W}$. Nintendo's Wii apparently has a much lower wattage, at about 18 or $19\mathrm{W}$, but the average wattage of consoles should still be significantly higher than $36\mathrm{W}$.

Also, TIAX reported 64 million consoles in the U.S., but since then Wiis in the U.S. have jumped from 1.5 million to 13.5 million, Xbox 360s have gone from 4.8 million to 11.9 million, and PS3s have increased from 0.8 million to 5.9 million[38]. With the influx of these 24 million new game consoles, we estimated that 12 million of the older game consoles would be removed from use. So, there are now about 52 million older game consoles averaging around 36 W, plus 12 million new Wiis using 19 W, 7 million new Xboxes consuming 173 W, and 5 million new PS3s drawing 190 W. By weighting these numbers and wattages, we found that the new average wattage is likely closer to 56 W.
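The weighting just described is simple to reproduce. A sketch, using the fleet sizes and wattages estimated above:

```python
# (millions of consoles, average watts) for each fleet from the text
fleets = [(52, 36),    # older-generation consoles
          (12, 19),    # new Wiis
          (7, 173),    # new Xbox 360s
          (5, 190)]    # new PlayStation 3s

total_consoles = sum(n for n, _ in fleets)                       # 76 million
weighted_avg_w = sum(n * w for n, w in fleets) / total_consoles
```

The weighted average comes out to about 56 W, the figure used in the table.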
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 1.72 TWh | 2.38 TWh | 0.711 TWh | 4.82 TWh |
+ +Game consoles and home theater systems had a similar problem – they both use about the same amount of power when in use or when just sitting idle. This results in a high amount of power being wasted while these electronics are left on before, after, and in between uses. + +DVD players: Another problem area, DVD players wasted a staggering $87\%$ of the energy they used, the second most by percentage of any device we studied. This has probably only increased in recent years as consumers have purchased HD-DVD and Blu-Ray players as well. + +
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 0.54 TWh | 1.15 TWh | 2.64 TWh | 4.33 TWh |
VCRs: This was another area where we felt we had to correct for the two years between the TIAX study and the present day. They cited a total of 4.9875 TWh used by VCRs, even more than DVD players. A breakdown does show that the vast majority of this power was used while idle (1.05 TWh) or off (3.675 TWh), which is what one would expect for a dying technology, but the number of VCRs in use has also dropped since the study was performed. Data from previous studies they cited indicates that the number of VCRs decreased by about $11.25\%$ a year from 2001 to 2005. We extended this trend to the end of 2008 for an estimate of 71 million VCRs operational in the U.S. today. We also adjusted their usage numbers downward by about $15\%$ to account for more families preferring to use their new DVD player over their VCR, resulting in the following data:
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 0.151 TWh | 0.574 TWh | 2.54 TWh | 3.27 TWh |
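The waste fraction follows directly from these figures. A sketch using the table's TWh values:

```python
active, idle, off = 0.151, 0.574, 2.54   # TWh/year, from the VCR table

total = active + idle + off              # ~3.27 TWh
wasted_fraction = (idle + off) / total   # share consumed while not in use
```

The wasted fraction comes out to roughly 95%, the highest of any device type studied.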
VCRs turned out to be the energy-wasting champion by percentage, wasting over $95\%$ of the energy that they consume. A great deal of electricity could be conserved if Americans would unplug their VCRs during the long stretches they don't use them.

Laptop computers: Surprisingly efficient, laptops used less than one-seventh the electricity of desktop computers, even though there are only twice as many desktops as laptops. The savings come from laptops' low active wattage – about 25 W, versus the 75 W desktops draw – rather than from lower standby losses: laptops actually wasted $18\%$ of their energy, compared to only $5\%$ wasted by desktops.
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 2.3 TWh | 0.078 TWh | 0.429 TWh | 2.81 TWh |
Modems: Another interesting case, because they are basically left on all the time, yet thankfully have a fairly low wattage (about $7\mathrm{W}$, we found[39]). Using these estimates, we found they might use $54.75\mathrm{kWh}$ a year, higher than TIAX's estimate of 53, but close.
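The 54.75 kWh figure is consistent with a modem that draws 7 W during 6 hours of daily use and about 6 W the rest of the day while idle but plugged in. The 6 W off-state wattage here is our own inference, not a figure from our sources:

```python
on_w = 7        # watts while in use (from the text)
off_w = 6       # watts while idle/off but plugged in (our inference)
hours_on = 6    # assumed hours of active use per day (from the text)

kwh_per_year = (hours_on * on_w + (24 - hours_on) * off_w) * 365 / 1000
# → 54.75 kWh
```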
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 0.705 TWh | n/a | 1.81 TWh | 2.52 TWh |
+ +Assuming modems are used 6 hours a day (about $25\%$ less than computers, which we felt accounted for the many computers not attached to cable or DSL modems while still accounting for family modems that might be connected almost all day), this means that only 0.705 TWh a year are being used by modems while people are actually connected to the internet. The other 1.812 TWh lost as waste could be saved if people would unplug their modems when they were not in use. + +Home Theater in a Box: This was one of the most efficient devices when off, despite its size. This is probably due to the fact that like digital TVs, many home theater systems are fairly new, and thus affected by Energy Star standby power guidelines. If Americans could learn to turn their home theaters off when not in use more regularly, the waste of these electronics would be quite minimal. + +
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 1.5 TWh | 0.625 TWh | 0.1 TWh | 2.23 TWh |
Printers: Printers were an interesting case, as they can suck vast amounts of energy while in use, but in most households would actually be printing for only a few minutes a day, on average. Unfortunately, their idling wattage also appears to be quite high, and the proportion of printers left on at all times is probably also substantial. We found wattage information varied considerably, but a reasonable average is $300\mathrm{W}$ while printing and $12\mathrm{W}$ while idling on[40]. Along with the assumption that an average household printer is used 5 minutes a day and left on 4 hours a day, we feel that a printer might use $33.79\mathrm{kWh}$ a year, close to TIAX's estimate of 30.
| Active | Idle | Off | Total |
| --- | --- | --- | --- |
| 0.27 TWh | 0.526 TWh | 0.218 TWh | 1.01 TWh |
Overall, a breakdown by device type reveals that it is actually the TV complex, not the computer complex, that is responsible for the majority of this waste. Together, DVD players, VCRs, TVs, game consoles, set-top boxes, and home theaters accounted for 4.73 TWh of the idling waste and 26.1 TWh of the electricity lost while devices are turned off. Monitors, printers, desktop computers, and modems, on the other hand, accounted for only 0.706 TWh of idling waste and 3.47 TWh of off waste. If the TV and its related devices were all plugged in to a power strip that was turned off when the electronics were not in use, American households would use $18\%$ less energy on electronics (though one should be careful to turn off the TV before the power strip, as a sudden loss of power can damage a television set). As the average American household uses $11\%$ of its energy on household electronics[41], this would represent a $2\%$ reduction in overall residential electricity usage. Power strips could even be fitted with a remote-control power switch – they would consume a slight amount of standby power as they sat waiting for the remote signal, but the devices plugged in to them would not. This would be a more convenient way to turn off electronics that would also save valuable electricity.

In all, we found that this selection of household electronics might consume around 169 TWh of electricity a year in the U.S., or approximately 99,411,765 barrels of oil, considerably more than the 1.281 TWh/year we estimated cell phone chargers waste. 125 TWh of this total would be used while the devices were actually in use, 7.34 TWh would be wasted as they sat on but idle, and 36.7 TWh would be lost as leakage even while they were off but still plugged in. By percentage, $26\%$ of a house's energy spent on electronics is wasted: 21,588,235 barrels of oil wasted by standby power and 4,317,647 barrels wasted by electronics left on but sitting idle.
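The bottom-line percentages can be checked against these totals. A sketch, where the TWh figures come from the text and the roughly 1,700 kWh-of-electricity-per-barrel conversion is the factor implied by the barrel figures quoted throughout this report:

```python
BARREL_KWH = 1700   # approx. kWh of electricity per barrel of oil (implied)

total_twh = 169                                  # yearly electronics use (text)
tv_complex_idle, tv_complex_off = 4.73, 26.1     # TWh wasted by the TV complex

strip_savings = (tv_complex_idle + tv_complex_off) / total_twh   # ~18%
residential_cut = 0.11 * strip_savings                           # ~2%

def barrels(twh):
    """Convert TWh of electricity to equivalent barrels of oil."""
    return twh * 1e9 / BARREL_KWH
```

With these inputs, `barrels(169)` reproduces the roughly 99.4 million barrels quoted above, and the power-strip scenario recovers the $18\%$ and $2\%$ savings figures.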
For confirmation, David MacKay attempted to minimize his standby power waste by unplugging everything he could, and found that he could save $1.1\mathrm{kWh}$ per day [42]. Our data, when totaled, suggests that the average American could save $1.03\mathrm{kWh}$ per day (376 kWh per year) by doing the same.

# Citations

Bavdek, Maureen. "Nearly 1 in 5 U.S. Households Have No Phone." 17 Dec 2008. Reuters. 7 Feb 2009.
Brightman, James. "Wii U.S. Installed Base Now Leads Xbox 360 by Almost 2 Million." 14 Nov 2008. GameDaily. 8 Feb 2009.
Burritt, Chris. "Super Bowl May Spur Fewer TV Sales as Retailers Fight Slump." 30 Jan 2009. Bloomberg. 8 Feb 2009 <http://www.bloomberg.com/apps/news?pid=20601103&sid=a4OnWOI6j7.c&refer=news>.
"Call My Cell: Wireless Substitution in the United States." Sep 2008. The Nielsen Company. 7 Feb 2009.
"Cell Phone Recycling Facts." 9 Aug 2008. Recycling for Charities. 8 Feb 2009.
"Cell Phone Subscribers in the U.S., 1985-2005." Infoplease. 7 Feb 2009.
"Cell Phones and Devices." AT&T. 6 Feb 2009.
"Charger Energy Efficiency." Motorola. 8 Feb 2009.
"Common Household Appliance Energy Use." Ames City Government. 6 Feb 2009.
Dixon, Kim. "U.S. cell-only households keep climbing." 17 Dec 2008. Reuters. 7 Feb 2009.
"Electricity Basic Statistics." Jan 2009. Energy Information Administration. 9 Feb 2009 <http://www.eia.doe.gov/basics/quickelectric.html>.
"Ericsson Tower Tube." Ericsson. 9 Feb 2009.
"Estimated Current US Wireless Subscribers." CTIA The Wireless Association. 6 Feb 2009.
Etoh, Minoru. "A Power Consumption Ratio, 1:150." 23 March 2008. A Wandering Engineer's Remarks. 7 Feb 2009.
Fry, Jason. "A Hunt for Energy Hogs." 18 Dec 2006. The Wall Street Journal. 6 Feb 2009.
Fung, Alan S., Adam Aulenback, Alex Ferguson, and V. Ismet Ugursal. "Standby power requirements of household appliances in Canada." 24 May 2002. Energy and Buildings, Volume 35, Issue 2. 7 Feb 2009.
"ICT Power Consumption Reference Tables."
Dot-Com Alliance. 7 Feb 2009.
MacKay, David. "Sustainable Energy - Without the Hot Air." 2009. 7 Feb 2009.
"Population Clocks." U.S. Census Bureau. 6 Feb 2009.
"Power consumption of common home electronics devices." 19 Nov 2007. Afterdawn.com. 6 Feb 2009.
"Power Consumption Table." ABS Alaskan. 6 Feb 2009.
"Recycle Your Cell Phone. It's an Easy Call." EPA. 7 Feb 2009.
Richard, Michael Graham. "Treehugger Homework: Unplug Your Cellphone Charger." 26 Nov 2005. Treehugger. 6 Feb 2009.
Roth, Kurt W., and Kurtis McKenney. "Energy Consumption by Consumer Electronics in U.S. Residences." Jan 2007. TIAX LLC. 6 Feb 2009.
"Skype Fast Facts Q4 2008." Skype. 8 Feb 2009.
Stover, Dawn. "Cell phone recycling for cash a win-win, or is it?" 23 Jan 2008. MSNBC. 8 Feb 2009.
"Stretch your energy dollar." Seattle City Light. 6 Feb 2009.
"Sustainable energy use in mobile communications." Aug 2007. Ericsson. 6 Feb 2009 <http://www.ericsson.com/technology/whitepapers/sustainable_energy.pdf>.
"Table HC2.11 Home Electronics Characteristics by Type of Housing Unit, 2005." Energy Information Administration. 7 Feb 2009.
"Table US-1. Electricity Consumption by End Use in U.S. Households, 2001." Energy Information Administration. 6 Feb 2009 <http://www.eia.doe.gov/emeu/recs/recs2001/enduse2001/enduse2001.html>.
"The chart: 139 HDTVs' power consumption compared." CNET. 7 Feb 2009.
"United States." 5 Feb 2009. CIA World Factbook. 9 Feb 2009.
"US Real GDP Per Capita." Measuring Worth. 7 Feb 2009.
"U.S. State." 4 Feb 2009. Wikipedia. 8 Feb 2009.
Virki, Tarmo. "Cellphone industry eyes charger power savings." 19 Nov 2008. Reuters. 8 Feb 2009 <http://www.reuters.com/article/technologyNews/idUSTRE4AI2T520081119>.
"Wireless Recycling Frequently Asked Questions." ReCellular. 8 Feb 2009.
"Zero power consumption from battery chargers when not in use." IP.com. 6 Feb 2009.
\ No newline at end of file diff --git a/MCM/2009/B/5265/5265.md b/MCM/2009/B/5265/5265.md new file mode 100644 index 0000000000000000000000000000000000000000..5adbde91e7085d93bcb782cdd0ad6ad415f198d5 --- /dev/null +++ b/MCM/2009/B/5265/5265.md

# Wireless Networks: An Easy Cell

# PROBLEM B: Energy and the Cell Phone

Team #5265

February $9^{th}$, 2009

# Abstract

The sheer number of cell phones worldwide has raised concerns about their energy usage, even though individual usage is typically very low (<10 kWh/year). We first model the change in population and population density until 2050, with an emphasis on trends in the urbanization of America. A model of the current cellular infrastructure and distribution of cell sites (based on actual site locations) in the US is developed. By relating infrastructure back to population density, the number and distribution of cell sites through 2050 is identified. The energy usage of individual cell phones is then calculated based on average usage patterns. The behavior of individuals during phone charging is found to play an important role and greatly affects yearly power consumption. The power usage of phones makes up a large part of the overall idle energy consumption of electronic devices in the US. Finally, the total power usage of the US cellular network is calculated to the year 2050. If current charging habits persist, the system will require 400 MW, or 5.6 million barrels of oil per year. If ideal charging behavior is adopted, this number will fall to 200 MW, or 2.8 million barrels of oil per year.
# Contents

1 Introduction
2 Approach
3 US Population Growth
3.1 Total Population
3.2 Number of Households
3.3 Population Density Models
3.3.1 Assumptions
3.3.2 Population Density Data
3.3.3 Observed Growth Rates
3.3.4 Predicted Distribution
4 Current Cellular Network Model
4.1 Assumptions
4.2 Communication Standards
4.3 Network Model and Component Power Usage
4.4 Cell Site Registration Databases
4.5 Tower Location
4.6 Antennas per Cell Site
4.7 Tower-Antenna-Population Density Relations
4.8 Coverage Overlap
5 Model for Cellular Phone Usage
5.1 Basic assumptions
5.2 Cellular Phone Information and Usage Behavior
5.2.1 Battery Capacity
5.2.2 Number of Cell Phones Per Person
5.2.3 Average Talk-Time Per Person
5.2.4 Recharge Probability and Duration
5.3 Calculation of Average Energy Consumption
5.4 Energy Usage of Cellular Phones
6 Pseudo US Model
6.1 Assumptions
6.2 Comparison of Fiber-optics to Wireless Networks
7 Energy to Oil Conversion
8 Overall Charger Power Usage
9 Cellular Network Growth Through 2050
9.1 Assumptions
9.2 Technology Improvements
9.3 Infrastructure Improvements
9.4 Overall Energy Usage
10 Conclusion

# List of Figures

1 Predictions for the total US population as reported by the US Census Bureau. By 2011, when $95\%$ of the population will own a cell phone, the total population will be 313 million. By 2050, the limits of our analysis in this report, the total population will be 439 million.
2 Predictions for the number of US households obtained from the US Census Bureau. While the number of households is observed to be steadily increasing, it is not keeping pace with the rising US population. As fewer people live in each household, the benefits of collective communications (landlines) will increase.
3 Population density for 2005 for $\sim 25\mathrm{km}^2$ segments across the continental US. As expected, the population is focused east of the Mississippi. Major cities are easily identifiable by peaks in density.
4 Relation of the mean annual growth rate for regions with local population densities. Several major effects are observed. Regions with extremely low population density are uninhabitable (such as the Rocky Mountains). Rural regions may (or may not) experience large growth as urbanization occurs. Stable communities, towns, and small cities experience reasonably steady growth, until limits on density are reached (in the case of huge metropolitan areas).
5 Comparison of the density distributions for 2015 (based on [2]) and 2050 (based on the growth methods presented). Many of the rural areas are seen to develop into higher-density regions. Areas with population density of about $1 - 10\mathrm{km}^{-2}$ are seen to differ little, as competing growth of lower-density regions offsets growth into higher-density regions. A small increase in the number of areas with very high density is observed.
6 Simplified network model for infrastructure calculations. Each component (cell sites and MSCs) was assumed to be identical for all carriers and geographies.
7 Distribution of towers around the US, as well as the population density as obtained from above.
8 Distribution of the number of antennas per tower.
9 Relations between cell site and antenna density and the surrounding population density, formed by comparing the population density distribution around towers with the overall population density distribution. Below 150 people/km², there is a relatively constant increase in towers with respect to population. Above 150 people/km², the tower density is approximately constant, but the number of antennas per cell tower increases to compensate.
10 Illustration of the algorithm to determine the number of overlapping cell sites for a given latitude and longitude on the population density grid. The figure does not represent the eccentricities of the grid due to changing longitudinal lengths. This process was repeated for every latitude/longitude combination in the original grid.
11 Results of overlap calculations for the known grid of cell sites as reported by the FCC. Most urban regions have higher overlap of cell towers to cope with an increased population load.
12 (a) Sigmoidal fit for the average number of cell phones per person in the United States and (b) predicted growth and saturation of cell phone owners in the United States.
13 Historical behavior of landline and cellular phone usage in the United States.
14 Predicted saturation behavior of average daily mobile cell phone usage.
15 Fitted Gaussian distribution for recharge behavior of cell phone users.
16 Typical charge profile for a lithium-ion battery.
17 Yearly energy consumption of a regular and an ideal user assuming different usage saturation levels (15, 20, 25, and 30 min person$^{-1}$ day$^{-1}$).
18 Heat content and thermoelectric efficiency data and extrapolations.
19 Total electrical energy produced per barrel of oil.
20 Trends in U.S. electricity production from oil.
21 Usage of various electronics according to [11]. Cell phone energy usage was updated as per our model for 2008.
22 Characterization of technological improvements in cellular infrastructure on energy usage [3]. Information is presented for two different sets of technology, and corresponding exponential fits (of the form $a\exp(bx) + c$) are included to 2050.
23 Predicted number of towers from 2007 to 2050 using the population density predictions and the observed distribution of cell towers in the US. Back-calculations to 1990 are included (if similar technology and usage statistics were in place then).
24 Predictions for the full energy usage of the US cell phone network for two different handset charging scenarios. If all people use their chargers as efficiently as possible (a), overall usage will remain at approximately 200 MW. If current charging patterns persist, consumption will rise to 400 MW.

# List of Tables

1 Different power consumption states of a cellular phone adapter
2 Average values of capacity and energy consumption for popular U.S. cell phones
3 A comparison of the electricity needs per household for fiber optic and wireless approaches. Comparison for 2.5 members, one computer per member, one TV, and one phone per person.
4 Improvements in network technology efficiency from Figure 22

# 1 Introduction

As energy becomes a growing issue, we are evaluating our current infrastructure to locate inefficiencies in power consumption. The spike in cellular phone usage over the past decade is of growing interest due to concerns over increased energy consumption compared to land-line phone networks. In this report we investigate the total energy consumption of the cellular and land-line phone industries.

As mobile communication technology develops, inefficiencies arise in the way people use their mobile devices. By modeling subscriber growth and trends, we can get a clearer picture of the energy consequences of our mobile network.

By correlating the growth of mobile subscribers with changes in our mobile infrastructure, we can strategically develop our current communications network to meet energy-efficiency guidelines.

# 2 Approach

1. Collect and extrapolate housing, population, and population density statistics for the United States through the year 2050.
2. Determine the United States' total yearly energy consumption resulting from cellular phone infrastructure. The model takes into account changes in the U.S.
population density, the number of subscribers, and the number of cell towers/power requirements necessary to meet consumer demands.
3. Determine the United States' total yearly energy consumption as a direct result of cellular phone use. The model takes into account typical cell phone battery and charger properties, phone manufacturer market share, U.S. population growth, increases in cell phone use, and the behavior of cell phone users.
4. Demonstrate considerable energy savings within the cell phone industry as a direct result of phone user behavior ("regular" behavior versus "ideal" behavior).
5. Compare the total yearly energy consumption of the cell phone industry (commercial and consumer energy expenses) against that of other technology industries (television, computing, radio, etc.).
6. Determine the total volume of oil (in barrels) needed to provide energy to the cellular phone industry on a yearly basis. The analysis takes into account transient changes in the heat content of oil, the efficiency of converting heat to electricity, and the United States' use of oil in electricity production.

# 3 US Population Growth

Estimating the overall power consumption of the cellular network, a location-based service, requires a detailed understanding of how the US population is distributed. Although detailed information on the current population total and number of households is available from the US Census Bureau, predictions for future power usage required extrapolation.

# 3.1 Total Population

Growth of the overall US population is an important statistic and is well studied by the US Census Bureau [8]. Total population data and the relevant quadratic fit are shown in Figure 1.

![](images/266bba8ad09729dff90cce0fd04eddc451a7784166995ef195d54aa332b33642.jpg)
Figure 1: Predictions for the total US population as reported by the US Census Bureau.
By 2011, when $95\%$ of the population will own a cell phone, the total population will be 313 million. By 2050, the limits of our analysis in this report, the total population will be 439 million.

# 3.2 Number of Households

Landline services, which are delivered to homes and not individuals, require knowledge of the changing number of households in the US. Simultaneously, the number of persons per household directly affects the tradeoffs between the two services. Historical data was obtained from the US Census [5] and is shown in Figure 2.

![](images/a70f97e13d49acea739d372d01ff1c4f3c928f46ee5c807f5d7eb69af8565861.jpg)
(a)

![](images/3870481a4a01562632b5983d9f1463d198859cb27e08930e2bf892d73f5de009.jpg)
(b)
Figure 2: Predictions for the number of US households obtained from the US Census Bureau. While the number of households is observed to be steadily increasing, it is not keeping pace with the rising US population. As fewer people live in each household, the benefits of collective communications (landlines) will increase.

# 3.3 Population Density Models

The population density across the US is a primary factor in the placement of cell sites and the relative power usage of both landline and cellular options. Regions that have very low population density may be better served by landlines than by cellular networks. At the same time, the population density distribution across the US will not remain constant as further urbanization occurs and previously rural regions catch up to more urbanized ones. Population densities across the US are thus predicted over the time period relevant to the analysis in this paper (until 2050).

# 3.3.1 Assumptions

- Population density can be accurately modeled at the kilometer length scale (variations at smaller length scales, most noticeable in cities, will be unimportant).
- Population growth for a particular area is primarily linked to the current population density.
- Trends seen in urbanization from 1990 through 2008 will continue until 2050.
- Geographical distributions of population density are sufficiently symmetric about the central latitude that effects of latitude on latitude/longitude cell sizes can be neglected.

# 3.3.2 Population Density Data

Maps of population density from 1990 to 2005 and predictions through 2015 were obtained from [2] and are shown for 2005 on a log scale in Figure 3.

![](images/84a4c5f372b68ebdc376a099e26a962b0aee9ac9f4b09c5d1bddf10aeb209702.jpg)
Figure 3: Population density for 2005 for $\sim 25\mathrm{km}^2$ segments across the continental US. As expected, the population is focused east of the Mississippi. Major cities are easily identifiable by peaks in density.

# 3.3.3 Observed Growth Rates

The changing distribution of population density was identified by comparing the predicted density distribution for 2015 against the distribution for 1990. The mean annualized growth rate was then calculated via the simple formula:

$$
\%\text{Growth} = \left(\frac{\rho_{2015}}{\rho_{1990}}\right)^{\frac{1}{25}} - 1 \tag{1}
$$

The US is grouped by binning regions according to population density. For each range of population densities, the mean growth rate of the regions (and corresponding standard deviations) is obtained. These mean growth rates are then related to population densities as shown in Figure 4. Population density is thus seen to be an excellent predictor of population growth rate, with the exception of regions with very low population density (less than one person per $100\mathrm{km}^2$). This likely arises from grouping developing rural regions together with regions that are simply inhospitable (such as the Rocky Mountains). Regions that are historically extremely low in population density (around 1 person per $10000\mathrm{km}^2$) will likely never see population growth, and developing rural regions will only experience accelerated growth until they form stable communities or towns.
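Equation (1) and the binning step that produces Figure 4 can be sketched as follows; the region densities here are illustrative placeholders, not the data of [2]:

```python
import math

def annualized_growth(rho_2015, rho_1990, years=25):
    """Mean annualized growth rate between two density snapshots (Eq. 1)."""
    return (rho_2015 / rho_1990) ** (1.0 / years) - 1.0

# A region whose density doubled over 25 years grew ~2.8% per year.
rate = annualized_growth(2.0, 1.0)

# Binning: group regions by the decade of their 1990 density, then average
# the growth rates within each bin, as done to produce Figure 4.
regions = [(0.5, 1.2), (5.0, 7.5), (120.0, 150.0), (800.0, 850.0)]  # (rho_1990, rho_2015)
bins = {}
for rho90, rho15 in regions:
    bin_id = int(math.floor(math.log10(rho90)))  # decade of the 1990 density
    bins.setdefault(bin_id, []).append(annualized_growth(rho15, rho90))
mean_growth = {b: sum(g) / len(g) for b, g in bins.items()}
```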
Extremely large cities like New York City and Los Angeles will experience lower density growth than the rest of the country because they are already near their sustainable or established limits (few cities worldwide have managed population densities much higher than that of New York City).

![](images/8c81d1964d20c369540aa70c8dafc2d523ac2555155bb86f1f18dd3973a461d9.jpg)
Figure 4: Relation of the mean annual growth rate for regions with local population densities. Several major effects are observed. Regions with extremely low population density are uninhabitable (such as the Rocky Mountains). Rural regions may (or may not) experience large growth as urbanization occurs. Stable communities, towns, and small cities experience reasonably steady growth, until limits on density are reached (in the case of huge metropolitan areas).

# 3.3.4 Predicted Distribution

The relationship developed in the previous section was used to predict density across the US until 2050. For each year, beginning with the 2015 predictions [2], the population density growth for each region is calculated using the fit shown in Figure 4. As the populations of regions grow, the growth rate is continually adjusted. All results are scaled by the overall population growth statistics from Figure 1. The practical effect of this method is to slow density growth as populated regions increase in size. At the same time, rural regions that saw explosive growth from 1990 to 2015 will only continue growing rapidly until they reach a stable size. The population distribution thus becomes slightly more level. In order to show the impact of this population growth, the distributions of population densities in 2015 and 2050 are shown in Figure 5.

![](images/81e93090162a6a7a0ad83205e17a5cc39e32d0456300104da82224b81b686c23.jpg)
Figure 5: Comparison of the density distributions for 2015 (based on [2]) and 2050 (based on the growth methods presented).
Many of the rural areas are seen to develop into higher-density regions. Areas with population density of about $1 - 10\mathrm{km}^{-2}$ are seen to differ little, as competing growth of lower-density regions offsets growth into higher-density regions. A small increase in the number of areas with very high density is observed.

# 4 Current Cellular Network Model

# 4.1 Assumptions

- The FCC database contains all relevant and major cell sites in the US.
- Cell sites serve areas of homogeneous population density, characterized by the population density at the exact location of the site.
- All cell sites can communicate to $50\mathrm{km}$ (approximately the limit of modern technologies).
- The strength of a cell tower is primarily dependent on the number of antennas (due to a lack of transmission power information).

# 4.2 Communication Standards

CDMA and GSM are the two primary standards for mobile phones in the United States. These standards require different antennas to transmit cellular communications, and therefore different cell sites exist for each standard. We assume, however, that all mobile phones are covered by one standard or the other. In order to simplify our models, we assume that all mobile phones utilize one generic standard rather than multiple standards.

# 4.3 Network Model and Component Power Usage

Due to the huge variety of transmitters and strategies used in the US cell network today, constructing precise energy profiles is difficult. However, sources concerned with reducing the power usage of these networks provide data on the approximate usage of various pieces of the cellular network [3]. The simplified cellular network model and corresponding energy usage requirements are shown in Figure 6. Cellular phones connect directly to cell sites, which may or may not be mounted on antenna towers. Each antenna mounted on a tower was considered a separate cell site in later calculations.
These towers can handle a range of calls at once (about 200-500 users, using 600-1000 W [3]) and pass the information along to Mobile Switching Centers (MSCs). Communication between MSCs and cell sites can be accomplished through fiber optic networks or microwave connections. Each MSC can handle approximately 1.5 million subscribers and consumes about $200\mathrm{kW}$. MSCs connect directly into the communications backbone of the country. Since the fiber optic network comprising the backbone will be necessary in any usage scenario (or in any pseudo-US), it was not considered in energy estimates.

# 4.4 Cell Site Registration Databases

Information about the current distribution of cell sites across the US was obtained by examining the FCC Universal Licensing System Databases [10]. All cellular radio transmitters located above $200\mathrm{m}$ are required to be registered in the system, ensuring that a majority (but not all) of cell sites are included. The database contained approximately 20,000 cell site locations comprising about 130,000 individual cell sites.

![](images/47fab7095880c9eb70682bbf4d61f700c59d7f9dc3081fc7fcc0d3eb0ff1e617.jpg)
Figure 6: Simplified network model for infrastructure calculations. Each component (cell sites and MSCs) was assumed to be identical for all carriers and geographies.

# 4.5 Tower Location

The cell site location data obtained above was investigated by plotting against the population density map from above, as shown in Figure 7. Cell towers are seen to be positioned throughout the US. Interestingly, several seem to be positioned off the coast in the Gulf of Mexico and in the Atlantic Ocean (either due to errors in registration or for the use of ships and/or oil rigs). Since only towers above $200\mathrm{m}$ are required to be registered in such a manner, smaller sites may be absent from the data.
Also interesting is the single tower at the center of Dallas (northern Texas), which contains 25 antennas and suggests a series of smaller sites spread throughout the city.

# 4.6 Antennas per Cell Site

Many cell sites in more urban areas use more antennas and higher transmission powers. Although some Effective Radiated Power (ERP) data is included in the FCC database [10], many sites have no published information, and several had negative ERPs (impossible). In addition, many sites have similar transmission powers, likely due to FCC regulations. In order to quantify the power of each cell site, the number of antennas is used instead. The distribution of antennas is seen in Figure 8. While most sites have only a single antenna, many have several, and a few have as many as 9.

![](images/e273b8a3bdff75161d8da1c45340ea269e7da290b61446ed91a9f41cbb7bde20.jpg)
Figure 7: Distribution of towers around the US, as well as the population density as obtained from above.

![](images/44329424088d8ee5445c2a93b4e258805b409b2794a61772c58132d0f2d324b4.jpg)
Figure 8: Distribution of the number of antennas per tower.

# 4.7 Tower-Antenna-Population Density Relations

In order to calculate how many cell sites are used on average in regions of varying population density, the site locations are used to interpolate densities from the maps of Section 3.3.2.

Binning these data to provide a distribution of population densities, and dividing by the distribution of population densities across the US, the relationship between cell site and antenna densities and population density is identified as shown in Figure 9. The initial portion of the graph shows an approximately steady increase in the number of towers, with one antenna per tower. However, above 150 people/km², the number of towers levels off and the number of antennas per tower begins to rise to compensate for increased populations. As expected, the coverage of each site scales with the square of its radius.
![](images/6d11216df20997f0943747e163e913c6731ea0ab0dd02f3283883f827314990.jpg)
Figure 9: Relations between cell site and antenna density and the surrounding population density, formed by comparing the population density distribution around towers with the overall population density distribution. Below 150 people/km², there is a relatively constant increase in towers with respect to population. Above 150 people/km², the tower density is approximately constant, but the number of antennas per cell tower increases to compensate.

# 4.8 Coverage Overlap

The overlapping range of various cell sites was investigated by determining the number of nearby cell sites at a range of locations within the US. The method for this process is illustrated in Figure 10. For each cell in the population density grid (shown above), a trial list of all towers within a reasonable range was constructed (towers within 1 degree latitude and 3 degrees longitude, or approximately $100 - 200\mathrm{km}$ in each direction). For each of these candidate towers, the great circle distance between the location (latitude $\delta_{1}$, longitude $\lambda_{1}$) and the tower (latitude $\delta_{2}$, longitude $\lambda_{2}$) was calculated as follows [14]:

$$
d = 6378\,\mathrm{km}\cdot \cos^{-1}[\cos \delta_{1}\cos \delta_{2}\cos (\lambda_{1} - \lambda_{2}) + \sin \delta_{1}\sin \delta_{2}] \tag{2}
$$

If the great circle distance is found to be less than the maximum range of the towers (approximately $50\mathrm{km}$), then the region is considered to be in the tower's plausible range. For each location, the number of cell sites within range is thus calculated, and is shown in Figure 11. While some cities have a large degree of overlap, others accomplish full connectivity by using many smaller rooftop sites or higher-power antennas. Also noticeable are several regions in the Western US with no current connectivity.
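The distance test of Eq. (2) and the per-location overlap count can be sketched as below; the tower coordinates are illustrative placeholders, not entries from the FCC database:

```python
import math

R_EARTH_KM = 6378.0   # radius used in Eq. (2)
MAX_RANGE_KM = 50.0   # assumed maximum cell site range

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance per Eq. (2); inputs in degrees."""
    d1, l1, d2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    cos_angle = math.cos(d1) * math.cos(d2) * math.cos(l1 - l2) + math.sin(d1) * math.sin(d2)
    # Clamp to [-1, 1] to guard against floating-point overshoot before acos.
    return R_EARTH_KM * math.acos(max(-1.0, min(1.0, cos_angle)))

def overlap_count(lat, lon, towers):
    """Number of candidate towers whose plausible range covers (lat, lon)."""
    return sum(1 for tlat, tlon in towers
               if great_circle_km(lat, lon, tlat, tlon) <= MAX_RANGE_KM)

# Example: a location with two nearby towers; the third (near Los Angeles,
# thousands of km away) falls outside the 50 km range.
towers = [(40.75, -74.00), (40.60, -74.20), (34.05, -118.24)]
n = overlap_count(40.71, -74.01, towers)
```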
![](images/e8c6e26764f503059964deead4b190ab51e0f1df77aefcf5946734dbee486f15.jpg)
Figure 10: Illustration of the algorithm to determine the number of overlapping cell sites for a given latitude and longitude on the population density grid. The figure does not represent the eccentricities of the grid due to changing longitudinal lengths. This process was repeated for every latitude/longitude combination in the original grid.

![](images/5a2f989dfa5c792c2988695aaa7282c4a294502cfa79cdd1014b12e59f6c2e09.jpg)
Location of Interest

![](images/1d46415f4248034fb222c82aaf5c79a208de6c6d6b7d25f55ab936896e052b99.jpg)
Trial Sites, No Overlap

![](images/35f294be06929a4407ee595d7ae7f2298570864b287328ea30e82dedaee4376b.jpg)
Trial Sites, Overlap

![](images/9a2a9dc70fe35b29a8a21e5bdd23fa6718aaa9b0f965a4c8390c06fc1c24aa6c.jpg)
Figure 11: Results of overlap calculations for the known grid of cell sites as reported by the FCC. Most urban regions have higher overlap of cell towers to cope with an increased population load.

# 5 Model for Cellular Phone Usage

# 5.1 Basic assumptions

Our investigations uncover three main components of electricity consumption from the direct use of cellular phones. Electricity is consumed not only by powering the cellular phone during talking and standby, but also by powering the charge-adapter with and without a phone attached. Therefore, the cellular phone usage of an average person is modeled as a function of three characteristics: (1) at what remaining battery level $(0 - 100\%)$ the phone user decides to recharge his/her cell phone, (2) how long the cell phone remains connected to the charger after the battery is completely charged, and (3) whether or not the person unplugs the charge-adapter from the outlet upon completion of battery charging. The possible power consumption states of a phone adapter are displayed in Table 1.
Table 1: Different power consumption states of a cellular phone adapter

| Adapter state | Consumption, W |
| --- | --- |
| Unplugged | 0 |
| Plugged in, no phone | 0.5 |
| Phone attached, not charging | 0.9 |
| Phone attached, charging | 4.0 |
# 5.2 Cellular Phone Information and Usage Behavior

The following sections describe the collection and analysis of information and historical data for cellular phones and cellular phone users. This information is vital to the creation of a model for predicting the growth of energy consumption of the cellular phone industry.

# 5.2.1 Battery Capacity

Table 2 displays the average battery capacity, power consumption during talking, and standby power consumption for batteries of the nine largest mobile phone manufacturers in the United States. Averages are determined using manufacturer information for over 150 popular cellular phones, approximately 15 phones per manufacturer [7][1]. Power consumption is calculated using battery capacity and estimates of talk-time and standby-time for individual phones, assuming each phone has a $3.7\mathrm{V}$ lithium-ion battery. Global averages for capacity, talk power consumption, and standby power consumption (found at the bottom of the table) are calculated as an average of the values for individual manufacturers, weighted by their percent U.S. market share in 2008.

Table 2: Average values of capacity and energy consumption for popular U.S. cell phones
RankManufacturerMarket Share, %Battery Capacity, mAhTalk-power, WStandby-power, W
1Samsung22.0980 ± 2280.0138 ± 0.00510.875 ± 0.293
2Motorola21.6826 ± 1220.0108 ± 0.00230.655 ± 0.292
3LG20.7890 ± 1060.0116 ± 0.00360.923 ± 0.242
4RIM9.01216 ± 2760.0145 ± 0.00601.065 ± 0.348
5Nokia8.51066 ± 1920.0122 ± 0.00320.735 ± 0.334
6Sony Ericsson(7.0)1015 ± 2140.0085 ± 0.00390.431 ± 0.110
7Kyocera(5.0)900 ± 0.000.0200 ± 0.00300.970 ± 0.080
8Sanyo(4.0)810 ± 89.40.0161 ± 0.00370.908 ± 0.152
9Palm(2.2)1500 ± 3460.0167 ± 0.00421.402 ± 0.353
960 ± 1660.0127 ± 0.00390.829 ± 0.263
+ +# 5.2.2 Number of Cell Phones Per Person + +The average number of cell phones owned per a person is determined using historical population and mobile phone data and extrapolated to the year 2050 [8] [9]. Figure 12a displays the total number of cellular phone subscribers normalized by the population of the United States. The historical data is fit to a sigmoidal curve, assuming that the ratio will eventually reach a value of 1 cell phone per 1 U.S citizen (complete saturation). Figure 12b compares the yearly increase in U.S. population to that of cellular phone users. We see that by year 2015, the predicted number of cell phone owners has reached the total number of people in the United States and continues to grow with the population. + +# 5.2.3 Average Talk-Time Per Person + +The average talk time of an individual user between years 1991 and 2050 is determined in a similar fashion as the average number of cell phones per owner. Figure 13 displays the trends + +![](images/2916b51977cd2a9aa9e34d25e5079d27018a5d6d9608ef2990a99a6407c2665d.jpg) +(a) + +![](images/9fde4f47bc8eac43261bc8a5e5f4d714a1b8b8a70f9a5ae1df6d53ccc9f3fc10.jpg) +(b) +Figure 12: (a) Sigmoidal fit for the average number of cell phones per person in the United States (b) and predicted growth and saturation of cell phone owners in the United States. + +in LAN line and cellular phone usage in terms of total minutes used per year between the years 1991 and 2007[6][9]. The data is extrapolated by normalizing with the population of the United States to determine the average minutes used per person per day. It is assumed that this average usage will eventually saturate to some value of minutes per person per day and a first order exponential growth function is employed to model this behavior. Figure 14 displays the predicted growth of cell phone usage assuming saturation at 15, 20, 25, and 30 minutes per person per day. 
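The sigmoidal saturation described above can be sketched as a logistic curve. The midpoint year and growth rate below are illustrative assumptions, not the paper's fitted values:

```python
import math

def phones_per_person(year, midpoint=2002.0, rate=0.35):
    """Logistic (sigmoidal) ratio of cell phone subscribers to population.

    Saturates at one phone per person; midpoint and rate are assumed
    illustrative parameters, not the paper's fitted coefficients.
    """
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))

# The ratio rises monotonically and approaches complete saturation.
ratios = [phones_per_person(y) for y in range(1990, 2051, 10)]
```

A fit of this form to the historical subscriber data would pin down the midpoint and rate; the saturation level of 1 is the paper's stated assumption.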
# 5.2.4 Recharge Probability and Duration

The battery level at which a person is likely to charge his or her phone was modeled as a Gaussian distribution based on cell phone behavior data collected by Zhong et al. [12]. The data and fitted Gaussian distribution are found in Figure 15. It is observed that users tend to recharge their phone batteries between 25 and $75\%$ of full capacity. A second Gaussian distribution, with a mean battery level of approximately $15 - 20\%$, is created in order to compare with the fitted distribution and determine which scenario is closer to ideal (not shown).

The time required to charge a lithium-ion battery is typically not linearly proportional to the remaining charge [add citation]. Therefore, it is assumed that the battery charge increases exponentially as a function of charge time, as depicted in Figure 16.

# 5.3 Calculation of Average Energy Consumption

The energy consumed by the average cell phone user over the course of a year is calculated using the battery and usage behavior extrapolations discussed earlier. It is assumed

![](images/2f0d5e12ec1b213ae98c4c6b5da6b2441482badb1d522273d1080bf242b08108.jpg)
Figure 13: Historical behavior of land-line and cellular phone usage in the United States.

that the full distribution of remaining battery charge (0 - 100%) can occur before charging is initiated. Therefore, given a remaining battery charge for an individual user, the total energy consumption can be calculated from the battery capacity and the different power states of a charge-adapter. The duration the adapter stays in a particular power state is determined by the frequency of charging (number of charge cycles per year), which is approximated from the power consumption during periods of cell phone talking and standby. Furthermore, the power consumption during talking/standby is weighted by the average number of minutes a person talks on the phone per day (see Figure 13).
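The recharge-probability and charge-profile assumptions of Section 5.2.4 can be sketched as follows; the Gaussian mean and standard deviation and the exponential time constant are illustrative stand-ins, not the paper's fitted values:

```python
import math

def recharge_pdf(level, mean=0.5, sigma=0.15):
    """Gaussian density for the battery level at which charging starts."""
    return math.exp(-0.5 * ((level - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def charge_level(t_hours, start_level, tau=0.8):
    """Battery level after t hours: rises exponentially toward 100%."""
    return 1.0 - (1.0 - start_level) * math.exp(-t_hours / tau)

def time_to_charge(start_level, target=0.99, tau=0.8):
    """Hours needed to charge from start_level up to the target level."""
    return tau * math.log((1.0 - start_level) / (1.0 - target))
```

The exponential profile captures the qualitative point above: topping off a nearly full battery is fast, while charging from a low level takes disproportionately longer.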
Finally, the average energy consumption across the entire population of cell phone users is determined as a weighted sum of the energy at each remaining battery level, weighted by the probability that charging starts at that battery level.

As mentioned earlier, three characteristics are defined for a given cell phone user, giving rise to eight possible types of users. For our studies, it is assumed that there are only two types of cell phone users: (1) the "regular" user, who charges his or her cell phone for 8 hrs at a time with the probability given by the fitted Gaussian distribution and always leaves the charge-adapter plugged in; and (2) the "ideal" user, who charges his or her cell phone only for the time it takes to reach $100\%$ charge, with the probability distribution centered at $15 - 20\%$ battery level, and never leaves the charge-adapter plugged in when not charging. All further comparisons are made between populations of "regular" and "ideal" users.

![](images/05c9413f50a1f4e2800f08d1763793100ccfcb286a07032698bddc5322cd4ab3.jpg)
Figure 14: Predicted saturation behavior of average daily mobile cell phone usage.

![](images/9f7021b9b1556f75ffd84b74400fc967dabb85d9c7e69d4b71db457f3a9fee57.jpg)
Figure 15: Fitted Gaussian distribution for recharge behavior of cell phone users.

![](images/0a8b7cfff3f67776a1ef0c4f4df3464e2ec81bb51b053ca9fa9d6e574e8d86ac.jpg)
Figure 16: Typical charge profile for a lithium-ion battery.

# 5.4 Energy Usage of Cellular Phones

The yearly energy consumed by cellular phone charging between the years 1991 and 2050 for the "regular" and "ideal" user (as defined earlier) is displayed in Figures 17a and 17b, respectively. It is observed that the yearly consumption of the ideal user is less than one-fifth that of the regular user ( $80 - 90\%$ energy savings in the year 2050, depending on usage).
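The regular-versus-ideal comparison can be sketched as a weighted sum over starting battery levels. The adapter powers come from Table 1; the Gaussian parameters, the two-hour full-charge time, and the assumption of one charge cycle per day are illustrative simplifications of the paper's procedure:

```python
import math

# Adapter power states from Table 1, in watts.
P_IDLE, P_ATTACHED, P_CHARGING = 0.5, 0.9, 4.0
HOURS_FULL_CHARGE = 2.0  # assumed hours to charge from empty

def daily_energy_wh(mean_level, plug_hours, idle_adapter):
    """Expected daily adapter energy (Wh), averaged over a discretized
    Gaussian (sigma = 0.15 assumed) of starting battery levels."""
    levels = [i / 100 for i in range(100)]
    weights = [math.exp(-0.5 * ((l - mean_level) / 0.15) ** 2) for l in levels]
    total = sum(weights)
    expected = 0.0
    for level, w in zip(levels, weights):
        charging = HOURS_FULL_CHARGE * (1 - level)   # hours actually charging
        attached = max(plug_hours - charging, 0.0)   # full, but still attached
        idle = (24 - charging - attached) if idle_adapter else 0.0
        e = charging * P_CHARGING + attached * P_ATTACHED + idle * P_IDLE
        expected += (w / total) * e
    return expected

# "Regular": recharges near 50%, plugged for 8 h, adapter never unplugged.
regular = 365 * daily_energy_wh(0.5, plug_hours=8, idle_adapter=True) / 1000
# "Ideal": recharges near 17.5%, unplugs as soon as charging completes.
ideal = 365 * daily_energy_wh(0.175, plug_hours=0, idle_adapter=False) / 1000
```

With these illustrative parameters the regular user consumes several times the ideal user's energy; the paper's full calculation, which also folds in talk-time-dependent charge frequency, yields the factor-of-five gap reported above.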
This drastic difference is primarily a consequence of unplugging the cell phone adapter from the outlet after charging completes. As a result of the increased energy savings of the ideal behavior, we see an increased sensitivity to the cellular usage saturation value (in minutes per person per day). These trends are more difficult to see with the regular behavior, since the majority of the energy consumption is caused by the charge-adapter.

# 6 Pseudo US Model

# 6.1 Assumptions

- A communication infrastructure is entirely non-existent
- A power grid already exists
- Each household must have television and internet service

![](images/8bac0f04a06adc62f39ecce99fe13ade3adfa24486527bf6ce2c295096cbbf2b.jpg)
(a) Regular User

![](images/c6713a8c843cdcd63238a5f99efa374dd63e3d045f538ed523d4615be2148668.jpg)
(b) Ideal User
Figure 17: Yearly energy consumption of the regular and ideal user assuming different usage saturation values (15, 20, 25, and $30\mathrm{min}$ person ${}^{-1}\mathrm{day}^{-1}$ ).

- Each household has a single phone, or a cell phone per person

# 6.2 Comparison of Fiber-optics to Wireless Networks

In order to identify an ideal communications system for the US, the energy usage per person for an entirely wireless network was compared to the cost of running a competitive fiber-optic network. Since the choice of wireless versus fiber optic affects the energy usage of TVs, computers, and phones in a household, all three communication methods were considered. The estimated power usage for each system is summarized in Table 3. Based on current estimates for each electronic device [11], a completely wireless approach could be energy-competitive against a fiber-optic solution, owing to the energy-inefficient fiber-optic link necessary in every household.

# 7 Energy to Oil Conversion

The amount of electrical energy available per barrel of oil is determined using historical data [4][13].
Figure 18a shows the heat content per barrel of oil from 1949 to 2007, with linear extrapolation out to 2050. The heat content appears to be decreasing, possibly due to a decreasing supply of energy-rich oil in the global market. The thermoelectric efficiency (i.e., the efficiency of converting the heat created by burning fuel into electricity) is displayed in Figure 18b with extrapolation. Using the heat content and thermoelectric efficiency data, the total electricity produced per barrel of oil is obtained and displayed in Figure 19. From the extrapolation, it is found that one barrel of oil will produce approximately $628\mathrm{kWh}$ of electricity in the year 2050.

Table 3: A comparison of the electricity needs per household for fiber-optic and wireless approaches, assuming 2.5 members, one computer per member, one TV, and one phone per person.

| Category | Fiber Optic Usage | Radio Usage |
|---|---|---|
| General TV | Fiber Optic Link (16 W) | DTV Converter (5 W) |
| Internet | | 2.5x WiMAX Card (1 W) |
| | | 2.5x Transmission (0.75 W) |
| Phone | | 2.5x Cell Phone (0.75 W) |
| | | 2.5x Transmission (0.75 W) |
| Total | 16 W | 13 W |

![](images/73b55f9905e94ca589263d7491342930354ddfd3aab428ffb7843606838d1c.jpg)
(a) Heat Content
Figure 18: Heat content and thermoelectric efficiency data and extrapolations

![](images/9442bb8aecc139e15720a6caba2f27d27140bac974b924855a356f346e455213.jpg)
(b) Thermoelectric efficiency

While a considerable amount of oil is needed to create a TWh or more of electricity, it is very unlikely that oil will be used to create this electricity. From Figure 20 we see that at its peak use (1977), oil accounted for only approximately $17\%$ of the electricity produced in the United States. Today oil accounts for less than $4\%$ of the electricity, and this value appears to be decreasing slowly.

# 8 Overall Charger Power Usage

The power consumption of electronics while not being used (including cell phones) is a major concern. In order to gauge the inefficiency of cell phones compared to other electronics,

![](images/f0cbe03b148218260e1e9c21e2b7309b81270a73f2e9d70a91ced18aebc8ea25.jpg)
Figure 19: Total electrical energy produced per barrel of oil.

results in this analysis were compared with a comprehensive study completed in 1999 [11]. The distribution presented in that analysis is shown in Figure 21. Although the energy usage of cell phone chargers is significant (2 TWh/yr), it is only a small portion of the overall energy wasted by idle electronics (34 TWh/yr), or 54 million barrels per year using the conversions established above.

# 9 Cellular Network Growth Through 2050

# 9.1 Assumptions

- No new (radically disruptive) technologies will be introduced past 3G. Current technology will improve until a minimum energy usage is achieved.
- Population density growth will follow current trends through 2050.
- The number of towers necessary for a given population density will remain constant through 2050.

![](images/649e732e986891886769031afd58129c58b9c6156c99b65fb4535efe21e0aa60.jpg)
Figure 20: Trends in U.S. electricity production from oil.
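The oil-equivalent figures above follow directly from the extrapolated conversion factor. A quick check of the arithmetic, using the 628 kWh/barrel figure from Figure 19:

```python
KWH_PER_BARREL = 628  # extrapolated electricity produced per barrel of oil (Figure 19)

def twh_to_million_barrels(twh):
    """Convert electrical energy in TWh to millions of barrels of oil."""
    kwh = twh * 1e9  # 1 TWh = 1e9 kWh
    return kwh / KWH_PER_BARREL / 1e6

idle_waste = twh_to_million_barrels(34)    # idle electronics: ~54 million barrels/yr
charger_waste = twh_to_million_barrels(2)  # cell phone chargers alone: ~3.2 million barrels/yr
```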
# 9.2 Technology Improvements

The power requirements of cellular networks have fallen drastically since their introduction in the 1980s as technology has improved. Through 2050, similar reductions in power usage are likely, either through improvements in the electronics of cell sites (computers and the like) or through more efficient communication strategies (antenna transmissions). In order to characterize this reduction in energy, information regarding the energy usage of past technologies is used [3], as shown in Figure 22. Technologies following the primary upgrade path (1G to 2G and beyond) are leveling out in their minimum energy usage. Although the introduction of 3G initially caused a large increase in power consumption, it appears to have a greater potential for reducing energy consumption. Since future disruptive technologies cannot be accurately quantified, it is assumed that all future networks will be based on a variation of the 3G architecture. The relevant efficiencies are shown for each decade in Table 4.

# 9.3 Infrastructure Improvements

As the population grows and the use of cell phones increases, more cell sites and related infrastructure will be necessary. To model the increasing number of towers, the tower density/population density relations of Section 4.7 are combined with the population density predictions of Section 3.3.4. The resulting increase in towers is seen in Figure 23. These predictions assume that tower capacity will not grow directly, but instead improve through

![](images/542c034f40fe9c487f81e46d2589cf5275a2e9f67d56c7ecc9edf0343cc59bd0.jpg)
Figure 21: Usage of various electronics according to [11]. Cell phone energy usage was updated as per our model for 2008.

Table 4: Improvements in network technology efficiency from Figure 22

| Year | Relative Power Usage |
|---|---|
| 2005 | 1.0 |
| 2010 | 0.85 |
| 2020 | 0.66 |
| 2030 | 0.63 |
| 2040 | 0.62 |
| 2050 | 0.62 |

energy efficiency (shown in the next section).

# 9.4 Overall Energy Usage

The total energy usage of the US cellular network was calculated using the predicted increase in the number of cell sites, observed trends in technology improvements, predicted usage patterns

![](images/bd3dee77c476d4e702b693718d543df6cf23be6510ff31f83b3a77e8eb81b5c7.jpg)
Figure 22: Characterization of the effect of technological improvements in cellular infrastructure on energy usage [3]. Information is presented for two different sets of technology, and corresponding exponential fits (of the form $a\exp (bx) + c$ ) are extended to 2050.

for handsets, and energy usage statistics from recent years. Final predictions are shown for two usage scenarios in Figure 24. If chargers are used inefficiently, power consumption will grow to approximately $400\mathrm{MW}$ , or 5.6 million barrels per year. However, if consumers choose to use their chargers efficiently, overall consumption by 2050 will be approximately 200 MW (2.8 million barrels per year).

![](images/4ee2f45a9a0a3d89c3a8c1a143a52b89a0078923022e9ed6c8d05621001d933d.jpg)
Figure 23: Predicted number of towers from 2007 to 2050, using the population density predictions and the observed distribution of cell towers in the US. Back-calculations to 1990 (as if similar technology and usage statistics were in place then) are included.

![](images/fb798e821580a91427587ecb59159c5a3ff61126a8652b0eedf9cb918909b6a4.jpg)
(a) Inefficient charger usage

![](images/192270cb7e68ef79022b5b3f744d5d6095183a6243bf3b2dafca7eb985a0c632.jpg)
(b) Ideal charger usage
Figure 24: Predictions for the full energy usage of the US cell phone network for two different handset charging scenarios. If all people use their chargers as efficiently as possible (b), overall usage will remain at approximately $200\mathrm{MW}$ ; if current charging patterns persist (a), consumption will rise to $400\mathrm{MW}$ .
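The leveling-out in Table 4 is consistent with a decaying exponential of the form $a\exp(bx)+c$. The sketch below uses illustrative parameters chosen by hand to roughly match Table 4, not the paper's fitted coefficients:

```python
import math

# Illustrative parameters: floor c = 0.62 (Table 4's asymptote),
# a = 0.38 so usage is normalized to 1.0 in 2005, decay rate b by hand.
A, B, C = 0.38, -0.10, 0.62

def relative_power(year):
    """Relative network power usage, normalized to 1.0 in 2005."""
    return A * math.exp(B * (year - 2005)) + C

predictions = {y: relative_power(y) for y in (2005, 2010, 2030, 2050)}
```

The floor $c$ encodes the assumption above that efficiency gains flatten out: no matter how far the fit is extended, relative usage never drops below 0.62.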
# 10 Conclusion

Our estimates of the power consumption of the US cellular network, based on models of normal cell phone usage and the current infrastructure, were applied to the various requirements of the problem as follows:

1. Requirement 1: Electricity utilization during a switch to a fully cellular communications network

(a) Important U.S. population statistics (population, population density, and number of households) necessary for electricity estimates were derived.
(b) We calculated the increase in demand for cell phones over the next few years based on increasing population and current usage trends.
(c) The need for increased cellular infrastructure based on future growth was established.
(d) The overall energy usage over the next several years was identified.

Recent technology improvements will cause energy usage to decrease until 2015, after which the increasing population will demand more power.

2. Requirement 2: Estimate the optimal communications network for a country similar to the US

(a) A fiber-optic backbone will be necessary for any potential communications network.
(b) The energy usage of a radio-based network (to houses) comprising voice, data, and TV service was found to draw less electricity than a fiber-optic approach.

A radio (cell/wifi/DTV) based approach to communications is optimal for a pseudo-US nation, as long as wireless communication can provide sufficient bandwidth (likely).

3. Requirement 3: Estimate the energy loss due to inefficient charging of cell phones

(a) Several usage models were constructed, based on how many minutes per day a cell phone is used.
(b) Charging strategies were based on experimentally derived probabilities that an individual would charge a phone.
(c) Energy consumption for regular and ideal users was then calculated.

A regular citizen today wastes $4.8\mathrm{kWh} / \mathrm{yr}$ through inefficient charging strategies.

4.
Requirement 4: Model the energy wasted by various idle household electronics

(a) The average energy consumption of a wide range of electronics was identified.

(b) The more detailed cellular network usage was compared to that of other electronics.

A regular citizen today wastes $125\mathrm{kWh} / \mathrm{yr}$ through various idle electronics.

5. Requirement 5: Model energy needs for phone service until 2050

(a) The change in population density from the present until 2050 was predicted.
(b) Current tower density distributions (the number of towers necessary to serve an area of particular population density) were calculated.
(c) The number of new towers and associated infrastructure was calculated, as well as the benefits of possible technological improvements.
(d) Cellular handset usage statistics were estimated.

If inefficient charging strategies are used, cellular networks in 2050 will require $400\mathrm{MW}$ of electricity (5.6 million barrels of oil per year). If more efficient chargers are introduced or citizens change their habits, only 200 MW of power (2.8 million barrels of oil per year) will be required.

# References

[1] Cell phone technical specifications. Technical reports, Samsung, Motorola, LG, RIM, Nokia, Sony Ericsson, Kyocera, Sanyo, Palm.
[2] Gridded Population of the World, Version 3 (GPWv3): Population density grids. Technical report, Columbia University, 2005.
[3] Sustainable energy use in mobile communications. Technical report, Ericsson, August 2007.
[4] Annual Energy Review 2007. Technical report, Energy Information Administration, June 2008.
[5] Average population per household and family: 1940 to present, July 2008.
[6] CTIA survey, mid-year 2008. Technical report, CTIA: The Wireless Association, June 2008.
[7] IDC's Worldwide Quarterly Mobile Phone Tracker, April 2008.
[8] Projected population by single year of age, sex, race, and Hispanic origin for the United States: July 1, 2000 to July 1, 2050. Technical report, U.S.
Census Bureau, August 2008.
[9] Trends in telephone service. Technical report, Federal Communications Commission, August 2008.
[10] Antenna structure registration: Cellular - 47 CFR Part 22: Licenses, February 2009.
[11] Karen Rosen and Alan Meier. Energy use of U.S. consumer electronics at the end of the 20th century. Technical report, Lawrence Berkeley National Laboratory, 1999.
[12] Nilanjan Banerjee, Ahmad Rahmati, Mark Corner, Sami Rollins, and Lin Zhong. Users and batteries: Interactions and adaptive power management in mobile systems. University of Massachusetts, Rice University, University of San Francisco, October 2007.
[13] Peter Taylor, Olivier Lavagne d'Ortigue, Nathalie Trudeau, and Michel Francoeur. Energy efficiency indicators for public electricity production from fossil fuels. Technical report, International Energy Agency, July 2008.
[14] Wolfram Research. Great circle. http://mathworld.wolfram.com/greatcircle.html.
\ No newline at end of file
diff --git a/MCM/2009/B/5522/5522.md b/MCM/2009/B/5522/5522.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab6d307d4494033158cc4a84b982aac1a1c7f36a
--- /dev/null
+++ b/MCM/2009/B/5522/5522.md
@@ -0,0 +1,408 @@

# America's New Calling

BY

Stephen R. Foster, J. Thomas Rogers, Robert S. Potter

Southwestern University

Georgetown, TX

Adviser:

Rick Denman

# Abstract

The ongoing cell phone revolution warrants an examination of its energy impacts - past, present, and future. Thus, our model adheres to two requirements: it can evaluate energy use since 1990, and it is flexible enough to predict future energy needs.

Mathematically speaking, our model treats households as state machines and uses actual demographic data to guide state transitions. We produce national projections by simulating multiple households.
Our bottom-up approach remains flexible, allowing us to: 1) model energy consumption for the current United States, 2) determine efficient phone adoption schemes in emerging nations, 3) assess the impact of wasteful practices, and 4) predict future energy needs.

We show that the exclusive adoption of landlines by an emerging nation would be more than twice as efficient as the exclusive adoption of cell phones. However, we also show that eliminating certain wasteful practices can make cell phone adoption $175\%$ more efficient at the national level. Furthermore, we give two forecasts for the current United States, revealing that collaboration between cell phone users and manufacturers can result in savings of more than 3.9 billion barrels of oil equivalent over the next 50 years.

# Problem Background

In the year 1990, less than 3 percent of Americans owned cell phones [ITU]. Since then, a growing number of households have elected to ditch their landline in favor of acquiring cellular phones for each household member. Our task is to develop a model for analyzing how the cell phone revolution impacts electricity consumption at the national level.

Such a model ought to be able to:

- Assess the energy cost of the cell phone revolution in America.
- Determine an efficient way of introducing phone service to a nation like America.
- Examine the effects of wasteful cell phone habits.
- Predict future energy needs of a nation (based on multiple growth scenarios).

# Assumptions

- The population of the United States is increasing at a rate of roughly 3.3 million people per year (according to the U.S. Census Bureau).
- The relatively stable energy needs of business landlines, government landlines, payphones, etc. have a negligible impact on energy consumption dynamics during the household transition from landlines to cell phones.
- No household member old enough to need phone service is ever without phone service.
- Citizens with more than one cell phone are rare enough to have a negligible energy impact.
- The energy consumption of the average cell phone remains constant.

We justify the last assumption on the grounds that future changes in cell phone energy requirements depend largely on changes in user habits and changes in manufacturing efficiency; thus, they are difficult to predict. However, we drop this assumption in our final section.

# Energy Consumption Model

Our approach involves three steps:

- We model households as state machines with various phones and appliances.
- We use demographic data to determine the probability of households changing state.
- By simulating multiple households, we extrapolate national energy impacts.

# Households

The basic component of our model is the household. Each household has the following attributes:

- $m$ : The number of members old enough to need a telephone.
- $t$ : The number of landline telephones.
- $c$ : The number of members with cellular phones.

The state of each household can be described in terms of the above values. We will generate $m$ from available demographic data and hold it constant.

A household can exist in one of four disjoint states at a time. Each state has two associated conditions.

- Initial State - When a household only uses landline telephones.

$t > 0$
$c = 0$

- Acquisition State - After a household acquires its first cell phone.

$t > 0$
$0 < c < m$

- Transition State - After all household members have their own cell phone but the landline is retained.

$t > 0$
$c = m$

- Final State - After the household abandons its landline telephones.

$t = 0$
$c = m$

These states are disjoint, but we do not assume that all states must be reached during the timeline of a household. We do assume that cell phones, once acquired, are never lost; and we assume that landlines, once dropped, are never readopted.
Thus, a household will never reenter a state that it has left, and it will reach one or more of the above states in the order listed.

Consider a household with three members ( $m = 3$ ), one landline telephone ( $t = 1$ ), and no cell phones yet ( $c = 0$ ). The graph below shows the complete timeline of such a hypothetical household with each of the four phases labeled.

![](images/136e87ed1e03e34a73c82fce62a448a33c9b6a4aab6ffaf7b98f6c42caa9218c.jpg)
Figure 1.

Note that our model will generate household state transition probabilities from available demographic data. However, this process is simulation-dependent, and we discuss it later, in the context of simulating the current United States.

# Nations

Households are only part of the story. We model the national timeline during the country-wide transition from landlines to cell phones as a composition of multiple overlapping household timelines. Furthermore, the decisions that households make regarding when to acquire cell phones and when to abandon their landlines depend on the larger national context. For example, a household would be much more likely to acquire its second or third cell phone in 2008 than it would have been in 1990.

A hypothetical nation with only three households might have the following timeline composition:

![](images/4a12fec701db32e3a3828d0efdfc7fcd5b4b4d1ccd45e6c15178a09bde09050a.jpg)
Figure 2.

The three household power usages converge because each of the three randomly selected households monitored here has three members. For each day, we aggregate the total rate of energy consumption across households, generating a national timeline like so:

![](images/8f7f8e803cf617657b5a6337b17d14254897da6d08775731aca846a2eecd5b38.jpg)
Figure 3.

We now proceed to construct such a timeline for the current United States.

# The Current U.S.
# Using Technological Data

In order to use our model in conjunction with relevant data, we have to calculate the following values:

- $C_{\text{wattage}}$ : The average rate of energy consumption of a cell phone over its lifetime.
- $L_{\text{wattage}}$ : The average rate of energy consumption of a landline phone over its lifetime.

Note that we only deal with cordless landline phones, because corded phones use minimal levels of energy and are ignored in the literature we have reviewed (Frey, Rosen, and Watts).

We derive $C_{\mathrm{wattage}}$ as follows:

$$
C_{\text{wattage}} = \text{Charger}_{\text{wattage}} + \left(\frac{C_{\text{upfront}}\ (\text{joules})}{C_{\text{lifetime}}\ (\text{seconds})}\right) \tag{1}
$$

With this formula, we incorporate the upfront energy cost in joules of manufacturing a cell phone ( $C_{\mathrm{upfront}}$ ) into the overall average wattage of a cell phone by dividing the upfront cost by the lifetime of a cell phone ( $C_{\mathrm{lifetime}}$ ) in seconds. We add to this the wattage of the average cell phone charger, which is what consumes energy during the use-phase of a cell phone's life cycle. (Note: The vast majority of cell phone energy consumption occurs in the manufacturing-phase and the use-phase [Frey], so we ignore the rest of a cell phone's life cycle.)

By analogy:

$$
L_{\text{wattage}} = \text{Cordless}_{\text{wattage}} + \left(\frac{L_{\text{upfront}}\ (\text{joules})}{L_{\text{lifetime}}\ (\text{seconds})}\right) \tag{2}
$$

The following table lists values obtained from research done by Frey et al.

$$
C_{\text{upfront}} = 148\ \mathrm{MJ}
$$

$$
C_{\text{lifetime}} = 2\ \text{years}
$$

$$
\text{Charger}_{\text{wattage}} = 1.835\ \text{watts}
$$

Table 1.
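Equation (1) with the Table 1 values can be checked numerically (assuming 365-day years):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def average_wattage(upfront_joules, lifetime_years, use_phase_watts):
    """Average power draw: amortized manufacturing energy plus use-phase draw."""
    return use_phase_watts + upfront_joules / (lifetime_years * SECONDS_PER_YEAR)

# Cell phone (Table 1): 148 MJ upfront, 2-year lifetime, 1.835 W charger.
c_wattage = average_wattage(148e6, 2, 1.835)  # ~4.18 W
```

This recovers the $C_{\mathrm{wattage}} = 4.182$ watts used in the simulation; the same function applied to the cordless-phone values reproduces $L_{\mathrm{wattage}}$.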
Though there exist many different kinds of cordless phones, we choose to use the values for cordless phones with integrated answering machines, as determined by Rosen.

$$
L_{\text{upfront}} = 167\ \mathrm{MJ}
$$

$$
L_{\text{lifetime}} = 3\ \text{years}
$$

$$
\text{Cordless}_{\text{wattage}} = 3.539\ \text{watts}
$$

Table 2.

Thus, our simulation uses the following values:

$C_{\mathrm{wattage}} = 4.182$ watts
$L_{\mathrm{wattage}} = 5.304$ watts

# Using Demographic Data

We need demographic data to guide the transition of household states over the course of a simulation. We could allow houses to decide randomly when and whether to adopt new cell phones, as well as when and whether to drop their landline. However, we prefer to use actual penetration data to probabilistically weight household decisions.

Consider the household decision of whether to purchase a cell phone in month $M$ . We use a three-step process to produce the cell phone acquisition probability function $a(M)$ employed in our simulation:

- Find historic data about the number of cell phone owners over time.
- Interpolate between the data points.
- Define $a(M)$ , the probability of a simulated household acquiring a cell phone in month $M$ .

For step one, we used the following data obtained from the International Telecommunication Union. In step two, we use linear interpolation between available data points to make a continuous function from 1990 (the start of our simulation) to 2009.

![](images/b8bae944bb212c147519226eff9161b9179c3d51932518cc61b5cb117cc1190f.jpg)
Cell Phone Penetration Demographics
Figure 4.

Then, we use a linear regression to extrapolate the function between 2009 and 2040. Call this function $f$ .
Then, for step three,

$$
a(M) = f(M) - \left(\frac{\sum_{H \in \text{Houses}} c(H, M)}{\sum_{H \in \text{Houses}} m(H, M)}\right) \tag{3}
$$

where $c(H, M)$ is the number of cell phones owned by members of simulated household $H$ in month $M$ ; $m(H, M)$ is the number of members in simulated household $H$ in month $M$ ; and 'Houses' is the set of all households in the simulation. In essence, Equation 3 subtracts the current simulated cell phone penetration during month $M$ from the approximated market penetration, $f(M)$ , which is derived from available data.

Using $a(M)$ , the households in our simulation make decisions that approximate historical data. As the second term in Equation 3 approaches the historical value returned by $f(M)$ , the chance of a simulated household buying a cell phone decreases to zero.

We perform an almost identical process with historic landline ownership data in order to determine the probability of a household dropping its landline in month $M$ . Because the process is the same, we omit it. Mnemonically, however: $a(M)$ shall be the probability of acquiring a cell phone; $d(M)$ shall be the probability of dropping a landline.

# Simulating the Current U.S.

The historical demographic data will help guide our simulation, and the technological data will help us calculate the rate of energy consumption at any point during the simulation. With that said, we algorithmically generate household timelines like so:

While month $M$ is before end date

    Let M = M + 1 month
end while

Here is the national timeline detailing the rate of energy consumption for the current United States over the past nineteen years.

![](images/86c22da7d4cb9942edf8494a86253d56c17427cf38310c1864195c17db2c1beb.jpg)
Figure 5.

Interesting features of this graph are:

- The steep rise in energy consumption as Americans acquire cell phones yet retain their landlines.
- The drop after cell phone penetration slows and landlines are abandoned.
- The rising slope after households have dropped their landlines and the population grows.

At first, most households tend to be in an Acquisition State, having both landlines and an increasing number of cell phones. Next, households begin to progress to a Transition State, slowly dropping their landlines while retaining their cell phones - hence, the overall consumption drop. The final upward slope represents the steady state, in which population growth (and associated cell phone acquisition) is the only factor affecting energy consumption.

# Optimal Telephone Adoption

Imagine an emerging nation without phone service but with an economic status roughly similar to the current United States. We now examine two hypothetical scenarios for introducing phone service to this nation:

- Cell Phones Only
- Landlines Only

Because it took Russia roughly 6 years for cell phone penetration to go from 2 percent to 105 percent [ITU], we assume a similar timescale for introducing cell phones to our hypothetical nation. Furthermore, a country with the same economic status as the U.S. should be capable of making a similarly quick adoption of either cell phones or landline phones, even though landline phone infrastructure involves the extra complexity of laying cables.

# Cell Phones Only

For our cell phone introduction plan, we assume that 0 percent of the population in 2009 has been given cell phones and that 100 percent of the population in 2015 has been given a cell phone.
If we interpolate linearly between these two dates, we can derive the number of people who will be given a cell phone in any month during the six-year period. If we assume that the rate at which cell phones consume energy remains roughly the same between 2009 and 2015, then we have all the information we need to run our simulation.

The only major change we make to our model is that the Initial State of a household now involves having no phones at all, and the Final State involves each household member owning a cell phone.

![](images/113d5135164b6ad0e08c732a4d292e8a24650d9eb52c76ade63abe9d60634403.jpg)
Figure 6.

The steep slope levels off when cell phone market penetration reaches 100 percent, and the only relevant factor after that is population growth.

# Landlines Only

Now we alter our model such that the Initial State of a household still involves having no phones, and the Final State involves having one landline. We take the previous graph and overlay a graph generated from a simulation that assumes the nation's households will adopt landlines instead of cell phones.

![](images/f0960e836e502ee9dd608978bbe6c67e885fbaf82ce26794cff9f2eacdbc3789.jpg)
Figure 7.

Based solely on this graph, the Landlines Only plan seems optimal, since it requires less than half the power of the Cell Phones Only plan. However, we prefer to delay our recommendation. First, we examine a way to make cell phone adoption more energy efficient.

# Wasteful Charging and "Vampire" Chargers

Although the above comparative analysis of the two plans shows Landlines Only to be a clear winner, we should take into account that the rate at which cell phones consume energy varies depending on the practices of cell phone users. Until now, we have assumed that the energy consumption of a cell phone is equal to the consumption of its charger - even though many people do not use their charger as conservatively as they could.
We now relax this assumption and assess the total cost of certain wasteful practices by supposing that our hypothetical nation's citizens never

- charge a cell phone after it is finished charging;
- leave their charger plugged in when not charging a phone.

The value for $C_{\mathrm{wattage}}$ that we calculated earlier was based on Frey's assumption that cell phone chargers spend their lifetimes plugged in - mostly in standby (vampire) mode. We now derive a new value for $C_{\mathrm{wattage}}$ based on a different study by Roth and McKenney, which shows that the average cell phone only needs to spend a minimum of 256 hours charging per year. In short, we make $C_{\mathrm{wattage}}$ depend strictly on the phone's minimum battery requirements and assume that users only charge their phones enough to keep them charged for the entire day. Roth also suggests that chargers require 3.7 watts when charging.

$$
C_{\mathrm{wattage}}' = \text{Battery}_{\mathrm{wattage}} + \left( \frac{C_{\mathrm{upfront}} \text{ (joules)}}{C_{\mathrm{lifetime}} \text{ (seconds)}} \right) \tag{4}
$$

$$
\text{Battery}_{\mathrm{wattage}} = \frac{\text{Time Spent Charging}}{\text{Lifetime}} \times \text{Wattage When Charging} \tag{5}
$$

Thus, $\text{Battery}_{\mathrm{wattage}} = \frac{256 \text{ (hours)}}{8760 \text{ (hours)}} \times 3.7 \text{ watts} = 0.108 \text{ watts}$. The second term in Equation 4 is the same as in Equation 1. So,

$C_{\mathrm{wattage}}' = 2.455$ watts.

Recall that our previous value was

$C_{\mathrm{wattage}} = 4.182$ watts.

We now show the effects of this new, lower energy expenditure on the simulation. Also pictured is our previous analysis of the phone adoption timeline in the hypothetical nation.

![](images/2c0cfc328e23fb71bc749e721745e6364bcb6992d13f916723c8e8f49680bec3.jpg)
Figure 8.
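The arithmetic behind Equations 4 and 5 can be checked numerically. This sketch uses only values quoted in the paper; the manufacturing term $C_{\mathrm{upfront}}/C_{\mathrm{lifetime}}$ is recovered as the difference between $C_{\mathrm{wattage}} = 4.182$ watts and the charger's standby draw of 1.835 watts.

```python
# Numeric check of Equations 4-5, using values quoted in the paper.
charging_hours = 256        # hours spent charging per year [Roth]
hours_per_year = 8760
charging_watts = 3.7        # charger draw while actively charging [Roth]

battery_wattage = charging_hours / hours_per_year * charging_watts

# C_upfront / C_lifetime, recovered as C_wattage - Charger_wattage
manufacturing_watts = 4.182 - 1.835

c_wattage_new = battery_wattage + manufacturing_watts
print(round(battery_wattage, 3), round(c_wattage_new, 3))  # 0.108 2.455
```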

The following graph shows the amount of energy wasted each month by vampire charging, in Barrels of Oil Equivalent (BOE).

![](images/30d544e6abb523c987f965a8905ea1d4abce794abb8bb81a9ae4aa4498e9458b.jpg)
Figure 9.

By eliminating vampire charging, this power can be conserved, resulting in 175% more efficient energy consumption.

# Other Household Appliances

Generalizing our previous analysis, we now assume that households do not simply use cell phones and/or landlines. Each also has the following common appliances:

- Zero or one computer (50 percent having a computer [census.gov 2]).
- Zero or one DVD player (84 percent having a DVD player [neilsenmedia.com]).
- Two or three televisions [neilsenmedia.com].

We selected these appliances because they are responsible for a significant amount of household energy consumption [Floyd]. We derive the "vampire" energy leakage from these appliances from various sources:

| Appliance | Standby draw | Source |
|---|---|---|
| Computer | 2.63 watts | [Roth] |
| DVD player | 3.64 watts | [Roth] |
| Television | 6.53 watts | [Floyd] |

Table 3.

The graph of a single household might look like this, then:

![](images/0f3870cd392fbfc430dc6b93b6cb447b7b3735e1a41217d1dfa67b6d50192116.jpg)
Figure 10.

We now graph our hypothetical nation's wasted power, interpreted in Barrels of Oil Equivalent.

![](images/b2774061e8d40ccf02f9243a4d55fb2f80c18aceb558ff34c30c24f2fd2e61c4.jpg)
Figure 11.

Clearly, then, telephone-related energy loss is a significant contributor to the overall energy consumed by the U.S. However, there exist electrical appliances that have a larger impact.

# Predictions

Here we tie our previous work together into a predictive simulation that investigates the energy impact of the following eventualities:

- Cell phone efficiency stays the same.
- Cell phone efficiency decreases (e.g., with the introduction of smartphones).
- People save 50 percent of energy currently lost to "vampire" charging.
- People do not stop "vampire" charging.

In all cases, the population of the nation is assumed to grow at a rate of about 3 million people per year - a rate comparable to that of the current United States.

# Optimistic Prediction

For our optimistic prediction, we assume that cell phone energy requirements remain constant with each successive generation of cell phones. We also assume that the population manages to eliminate 50 percent of its energy consumption due to "vampire" charging.

Recall that our best-case value for the use-phase energy consumption of a cell phone (i.e., no vampire charging) was

$$\text{Battery}_{\mathrm{wattage}} = 0.108 \text{ watts},$$

and our worst-case value (i.e., a charger that is always plugged in) was

$$\text{Charger}_{\mathrm{wattage}} = 1.835 \text{ watts}.$$

We now choose a use-phase value halfway between $\text{Charger}_{\mathrm{wattage}}$ and $\text{Battery}_{\mathrm{wattage}}$:

$$\text{Realistic}_{\mathrm{wattage}} = 0.9715 \text{ watts}.$$

As in Equations 1 and 4, we add this to the manufacturing-phase energy cost to obtain an optimistic (but not too optimistic) average cell phone wattage. With this value, we graph the rate of energy consumption over the next 50 years.

![](images/b7e6736119a144085f75d5f338cc11b9ee104894f3bd069d6feb2f2550856fb3.jpg)
Figure 12.

As can be seen, landline telephone usage still contributes significantly to the total power consumption of the nation until the year 2030. The cell phone power consumption trend may not be meaningful until looked at alongside the pessimistic prediction.

# Pessimistic Prediction

We assume here that cell phone energy requirements increase with each successive generation of cell phones, at a rate comparable to the increase from regular cell phones to smartphones. In short, we are modeling the transition from landlines to cell phones to smartphones. We also assume that the population does not manage to avoid "vampire" energy loss.

Because smartphone technology exists in a state of relative infancy, technical information about it is scarce. Thus, we estimate the average wattage of a smartphone based on the fact that for all tasks (emailing, text messaging, idling, etc.) a smartphone requires more than twice as much power as a regular cell phone [Mayo]. Endeavoring to be conservative, we assume that smartphone manufacturing costs are the same as those of regular cell phones, even though they are likely much higher. Thus, we borrow most values from Equation 1 to calculate average smartphone wattage:

$$
S_{\mathrm{wattage}} = 2 \times \text{Charger}_{\mathrm{wattage}} + \left( \frac{C_{\mathrm{upfront}} \text{ (joules)}}{C_{\mathrm{lifetime}} \text{ (seconds)}} \right) \tag{6}
$$

With $S_{\mathrm{wattage}} = 6.017$ watts, and smartphones becoming widespread around 2025, we are ready to make our comparison.

# Comparison

The two predictive scenarios above are represented together in the graph below. Only the nation's total power consumption is graphed.

![](images/190c40a395715d9fbf99e34b0e209457077085bcc1c987318bbb9e7ade103759.jpg)
Figure 13.

Our model leads us to recommend the adoption of conservative practices (on the part of cell phone users) and research into greater phone efficiency (on the part of cell phone manufacturers). A 50% reduction in vampire charging and a dedication to energy-efficient phones would, according to our simulation, result in the conservation of 3.9 billion Barrels of Oil Equivalent over the next 50 years. It is worth noting that even our pessimistic scenario is not as pessimistic as it could be, since we chose a deliberately low value for the energy cost of smartphones. And our optimistic scenario is not as optimistic as it could be, since we assumed only a 50 percent reduction in vampire energy losses.
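Equation 6 and the optimistic midpoint value can be checked numerically. As before, this is a sketch using only values quoted in the paper; the manufacturing term is recovered as $4.182 - 1.835$ watts.

```python
# Equation 6: average smartphone wattage under the paper's assumptions.
charger_wattage = 1.835              # always-plugged-in charger, watts
manufacturing_watts = 4.182 - 1.835  # C_upfront / C_lifetime term

s_wattage = 2 * charger_wattage + manufacturing_watts
print(round(s_wattage, 3))           # 6.017

# The optimistic use-phase value is the midpoint of the two extremes:
realistic_wattage = (1.835 + 0.108) / 2
print(round(realistic_wattage, 4))   # 0.9715
```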

# Conclusion

Modeling the cell phone revolution can benefit from a bottom-up approach. The basic components of this approach are households undergoing a series of transitions such that each member acquires a cell phone and, eventually, the household abandons its landline.

For the emerging nation adopting a new telephone system, we found that landline adoption would be twice as efficient as cell phone adoption. However, if the nation enforces conservative cell phone energy use, the cell phone plan can be almost comparable to the landline plan.

Also, our model is capable of showing a vast divergence between an optimistic future scenario and a pessimistic one. This being the case, we must recommend a concerted energy conservation effort on the part of cell phone makers and cell phone consumers. Doing so would result in savings of over 3.9 billion Barrels of Oil Equivalent over the next 50 years.

# Strengths & Weaknesses

# Strengths

- Uses Demographics. Our model simulates the decisions of households based on historic data, making it a good model for assessing the energy consumed to date.
- Incorporates Manufacturing. We fold the energy cost of a phone's manufacturing phase into the phone's use-phase wattage, thereby increasing the simplicity of our model without ignoring the significant energy consumed during manufacturing.
- Retains Flexibility. Because our model is a bottom-up approach, various details at the household level can easily be incorporated into national simulations. We did this, for example, to assess the cost of "vampire" chargers and the cost of non-telephonic appliances.

# Weaknesses

- Ignores Infrastructure. We do not examine the energy cost of cellular infrastructure (towers, base stations, servers, etc.) as compared to the energy cost of landline infrastructure (i.e., telephone lines and switchboards).
- Extrapolates Naively.
Though we use demographic data to guide household decisions before 2009, we use simple regression techniques to forecast future demographic information. Using better forecasts would make predictions more accurate. The data we extrapolated include: cell phone energy-use changes, cell phone penetration dynamics, and landline abandonment rates.
- Simplifies Households. Our model doesn't examine all household member dynamics - i.e., members being born, growing old enough to need cell phones, moving out, starting households of their own, etc.

# References

[census.gov 1] United States Census Bureau. Home Computers and Internet Use in the United States: August 2000. census.gov.
[census.gov 2] http://quickfacts.census.gov/qfd/states/06000.html
[Floyd] Floyd, David B. Leaking Electricity: Individual Field Measurement of Consumer Electronics.
[Frey] Frey, S.D.; Harrison, David J.; Billet, H. Eric. 2006. Ecological Footprint Analysis Applied to Mobile Phones. Journal of Industrial Ecology. MIT. http://mitpress.mit.edu/jie
[ITU] ITU World Telecommunication/ICT Indicators Database. http://www.itu.int/ITU-D/ICTEYE/Reports.aspx
[Mayo] Mayo, Robert N. 2005. Energy Consumption in Mobile Devices: Why Future Systems Need Requirements-Aware Energy Scale-Down. Power-Aware Computer Systems.
[Roth] Roth, Kurt W.; McKenney, Kurtis. 2007. Energy Consumption by Consumer Electronics in U.S. Residences. Final Report to Consumer Electronics Association.
[Rosen] Rosen, Karen B.; Meier, Alan K.; Zandelin, Stephan. 2001. Energy Use of Set-top Boxes and Telephony Products in the U.S. http://eetd.lbl.gov/ea/reports/45305/
[Singhal] Singhal, Pranshu. 2005. Integrated Product Policy Pilot Project. Nokia Corporation.
diff --git a/MCM/2009/B/5717/5717.md b/MCM/2009/B/5717/5717.md
new file mode 100644

# Table of Contents

Introduction
Problem Statements
Assumptions
Important Variables
Part 1: Existing Infrastructure
Transition
Steady State
Part 2: No Existing Infrastructure
Optimal State
Additional Factors
Part 3: Effects of Cell Phone Charger Negligence
Part 4: Effects of Battery Charger Negligence
Part 5: Effects of Economic and Population Growth
Analysis of the Model
Verification
Strengths
Weaknesses
Conclusion
Glossary
Works Cited
Other References
Appendix

# Introduction

Over the past fifteen years, cellular telephone subscriptions in the United States have increased dramatically. At the same time, growing concerns over oil supplies have increased public consciousness of energy efficiency. By comparing the energy use of cell phones to that of traditional landlines, the most energy-efficient type of telephone can be determined. Major factors include:

- Power used while charging
- Power used while idle
- Time charging each day
- Time idle each day
- Energy to manufacture and transport the phone
- Lifespan of the phone
- Total number of phones

These values, many of which depend on the type of telephone, allow for a comprehensive analysis of the energy consequences of the cell phone revolution. This model quantifies the effects of cellular and landline telephones on power consumption. Several aspects are investigated, each with its own specifications.

# Problem Statements

1. Model the energy consumed due to telephone use in the current United States during the transition to a predominantly cell phone based society, and during the following steady state.
2.
Determine the most energy-efficient way to provide telephone service in a country without existing communications infrastructure, and describe social influences on phone preference.
3. Estimate the number of barrels of oil wasted in the United States when fully charged cell phones are left plugged into the charger, as well as when chargers are plugged into the wall but not into phones.
4. Estimate the number of barrels of oil wasted by all battery chargers that are left plugged in when they are not actively charging a device.
5. In the optimal situation from #2, project the telephone energy requirements over time as the population and economy change. In particular, find the number of barrels of oil used each decade for the next half-century.

# Assumptions

- Cell phones and landline phones compete for the same market.
- Residential, commercial, nonprofit, and government telephones are included in the total number of phones.
- The total number of phones is averaged by household.
- Every cell phone comes with a charger and lithium-ion battery [1].
- A cell phone's battery will not be replaced; it will be discarded with the phone.
- Overcharging or undercharging a lithium-ion battery does not affect its life or performance [6].
- Nickel metal hydride batteries are used in cordless phones [7].
- The total energy used in manufacturing a landline phone is half that of manufacturing a cell phone.
- A person may own more than one telephone.
- In any household with cell phones, each of its m members will have their own cell phone.
- Every person within the population is part of a household.
- A charger is an item used to recharge batteries, including those within electronic devices such as laptop computers, cell phones, and cordless phones. Appliances such as televisions, refrigerators, and microwaves are not included, as they are not rechargeable devices.

- The fixed energy required to construct telephone infrastructure, when averaged over the duration of the phone system, is negligible.

# Important Variables:

The important variables within this model include:

- H, the number of households in the country
- $Z_{\text{Cell}}$, the number of cell phones per hundred people
- $Z_{\text{Landline}}$, the number of landlines per hundred people
- $\mathrm{N}_{\mathrm{Cell}}$, the number of cell phones
- $\mathrm{N}_{\text{Landline}}$, the number of landline phones
- Population
- Power drawn by each type of phone when idle
- Power drawn by each type of phone when charging or active
- $\mathrm{W_p} = 3.0128$, ratio of primary energy input at a power plant to energy drawn off the grid [9]
- On average, a cell phone's battery must charge for one hour a day [2]
- 75% of landline phones are cordless and 25% are corded
- The average lifespan of a corded landline phone is 20 years
- The average lifespan of a cordless landline phone is 10 years
- Cell phones last 1.5 years [3, 4], whereas lithium-ion batteries and chargers last 3-4 years [5]
- Each landline connection will have an average of m phones connected to it
- $m = 2.37$, members in the average household [8]

# Part 1: Existing Infrastructure

# Transition

The United States has a mixture of cell phones and traditional landline phones. Currently, 84% of the United States population has a cell phone subscription, with 15.8% of U.S. households owning only cell phones [10]. The U.S. is currently in a period of transition from the exclusive use of landlines toward the exclusive use of cell phones. During this transition, cell phones and landline phones compete for consumers. The target market is the entire population, which grows over time. As such, the number of cell phones and landlines per hundred people is time dependent, as seen in Figure 1.

![](images/e0f1743860ca41922e80efd5015bf085b92c1b33ec7bcaf9b0b8a5de68cca44d.jpg)
U.S.
Telephone Lines per 100 Population
Figure 1: Historical Data for Phone Ownership in the United States

As cell phones became popular, the number of landlines decreased. This suggests the data can be described with a differential Competing Species model [11, 12, 13]. The competing species model describes two species that both require a single finite resource and impede each other from acquiring it. The system of equations for this model appears in Equations 1 and 2; it cannot be solved analytically. In these equations, $x$ is the population of one species, $y$ is the population of the other species, and $a_1$ and $a_2$ are the unconstrained growth rates of the populations. The ratios $a_1/b_1$ and $a_2/b_2$ are the maximum populations for each species. The final coefficients, $c_1$ and $c_2$, are competition factors accounting for the negative effect each species has on the growth of the other.

Eq. 1

$$
\frac{dx}{dt} = x (a_1 - b_1 x - c_1 y)
$$

Eq. 2

$$
\frac{dy}{dt} = y (a_2 - b_2 y - c_2 x)
$$

For the purposes of this model, the two species are cell phones and landlines. The resource in question is market share, the proportion of the population of the United States which is willing to purchase phones. When total phone ownership exceeds the equilibrium value, one of the two types of phone will have to die out, or become obsolete. The competition model can be applied by taking $Z_{\text{Cell}}$ as the number of cell phones per hundred people in the U.S. and $Z_{\text{Landline}}$ as the number of landlines per hundred people in the U.S. Appropriate coefficients for this model were determined graphically by solving the equations numerically in MATLAB [14]. The results can be seen in Equations 3 and 4.

Eq. 3

$$
\frac{dZ_{Cell}}{dt} = Z_{Cell} \left[ 0.315 - \left( \frac{0.315}{110} \right) Z_{Cell} - (4.77 \times 10^{-4}) Z_{Landline} \right]
$$

Eq.
4

$$
\frac{dZ_{Landline}}{dt} = Z_{Landline} \left[ 0.21 - \left( \frac{0.21}{100} \right) Z_{Landline} - (2.50 \times 10^{-3}) Z_{Cell} \right]
$$

With these coefficients, the model fits the historical data accurately from 1995, when cell phones reached a penetration level of ten per hundred people, to 2006, the last year for which data for both phone types were available. The graphical solution and its projection through 2030 appear in Figure 2. This model predicts that the market will support up to 1.1 cell phones per capita, or up to 1.0 landlines per capita. Based on an average of 2.37 telephones connected to each landline, there is a maximum of 2.37 landline phones per person. Included in these numbers are residential, commercial, nonprofit, and government owned phones. The "Cell Phone Revolution" is taken as the time period from 1995, when cell phones first reached a saturation of 0.1 per capita, through 2025, when landlines drop below a saturation of 0.1 per capita.

![](images/c4a59f4d6853d58d918af7bcb1d6cc69855c305abcd3108fdd0a18aee0fd257a.jpg)
Figure 2: Historical Data and Competition Model

Using this model, the total number of cell phones and the total number of landline telephones can be predicted for any future year. Appendix A shows the projected number of cell phones and landline phones per capita from 1995 through 2035. These numbers are used to determine the energy requirements in terms of gigawatt-hours per day (GWh/day). The power needed for each type of phone is listed in Table 1. For these purposes, a cordless phone is a landline telephone with either batteries or electronics, which draws constant power from the electrical grid. A corded phone gets all its power from the telephone line. The energy to manufacture, ship, and dispose of a cell phone equals 180 MJ, or 50,000 Wh.
All power quantities are listed in terms of the rate of primary energy use, which accounts for the fact that for every watt-hour drawn from the grid, 3.0128 watt-hours worth of fuel were used to produce it [9]. This accounts for inefficiencies in the power generation and distribution systems [15]. The three cell phone types listed correspond to the power required for their chargers when idle.

Table 1: Primary Power Levels

| Device | Idle (W) | Active (W) | Active Hours (h) | Fixed (Wh) | Life (days) | Daily Energy (Wh) |
|---|---|---|---|---|---|---|
| Corded Phone | 0 | 0.452 | 1 | 25000 | 7200 | 3.92 |
| Cordless Phone | 5.12 | 10.24 | 2 | 25000 | 3600 | 140.11 |
| Cell Phone (avg) | 0.904 | 15.06 | 1 | 50000 | 540 | 128.44 |
| Cell Phone (new) | 0.301 | 15.06 | 1 | 50000 | 540 | 114.59 |
| Cell Phone (5-star) | 0.090 | 15.06 | 1 | 50000 | 540 | 109.74 |
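As a consistency check, the Daily Energy column can be reproduced from the other columns via the lifetime-average power formula (Equation 5, below). A minimal sketch:

```python
# Recompute Table 1's Daily Energy column:
# P = idle*(24 - h_active) + active*h_active + fixed/life   (Eq. 5)
rows = {
    "Corded Phone":        (0.000,  0.452, 1, 25000, 7200),
    "Cordless Phone":      (5.120, 10.240, 2, 25000, 3600),
    "Cell Phone (avg)":    (0.904, 15.060, 1, 50000,  540),
    "Cell Phone (new)":    (0.301, 15.060, 1, 50000,  540),
    "Cell Phone (5-star)": (0.090, 15.060, 1, 50000,  540),
}
for name, (idle, active, h, fixed, life) in rows.items():
    p = idle * (24 - h) + active * h + fixed / life
    print(f"{name}: {p:.2f} Wh/day")
```

The recomputed values agree with the table's Daily Energy column to within rounding.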

The number of active hours corresponds to call time for corded phones, charging time for cell phones, and charging time for cordless phones. The manufacturing energy for a landline phone is assumed to be half that of a cellular phone due to its less-complex circuitry. A cordless phone has half the life of a corded phone because it is more likely to get lost or broken. The final column of Table 1, showing lifetime average power per device in watt-hours per day, is calculated using Equation 5.

$$
P = (\text{Idle Watts})(24 - \text{Hours Active}) + (\text{Active Watts})(\text{Hours Active}) + \frac{\text{Fixed}}{\text{Life}} \tag{Eq. 5}
$$

The daily energy use for each type of phone is calculated using Equation 6.

$$
\text{Daily Energy} = (P_{\text{CellAvg}})(N_{\text{Cell}}) + \big( 0.75\, P_{\text{Cordless}} + 0.25\, P_{\text{Corded}} \big)(N_{\text{Landline}}) \tag{Eq. 6}
$$

At present, there are 271,856,247 cell phones [20] and 276,867,152 landline phones, meaning the United States produces 64.3 GWh per day for telephones. Figure 3 shows the total power produced for telephones during the transition period from 1995 through 2030. A baseline projects what power levels would have been needed if cell phones had not become popular.

![](images/cfab8da691e787a657792e23d4f03ab5f2a2f6b0f23cafb29c4a5872504d1acb.jpg)
Figure 3: Power Produced for Telephones during Transition Period

As seen above, the power used by landlines begins to decline as cell phone power usage grows. The net change in power production during this transition is initially positive. After the year 2021, the transition state becomes more energy efficient than the projected baseline, as seen in Figure 4. This occurs because cell phones require less primary energy per day than landlines.

![](images/71b84affeb3b81b18ec74d08d010a2c9f2a2b0c8a53cc76d737d66c42af68fe8.jpg)
Figure 4: Difference in Power Generated for Telephones during Transition

Over the course of the transition period from 1995 to 2025, an additional 84,040 GWh of energy must be produced for telephones. However, starting in 2022, annual energy savings will result.

# Steady State

The steady state occurs when the entire market for telephones is satisfied. Based on the model of the transition period, this will include only cell phones, at the limiting value of 1.1 cell phones per person.

Table 2: Population and Household Data

| Factor | Value |
|---|---|
| H | 126,316,181 households in US [8] |
| m | 2.37 members / household |

Using current population figures shown in Table 2 above, the total power requirements for the steady state are calculated based on the data in Table 1. The steady state power needs for the United States are shown in Table 3. More energy efficient chargers decrease the load.

Table 3: Power Requirement for Steady State by Charger Efficiency

| Device | Energy Cost (Gigawatt Hours/Day) |
|---|---|
| Cell Phone (Average) | 42 |
| Cell Phone (New) | 38 |
| Cell Phone (5-star) | 36 |
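Table 3 can be reproduced from Tables 1 and 2. This sketch takes the population as H × m and uses the per-device daily energies from Table 1:

```python
# Steady state: 1.1 cell phones per capita, population = H * m.
H, m = 126_316_181, 2.37
phones = H * m * 1.1                      # total cell phones at steady state

daily_wh = {"Average": 128.44, "New": 114.59, "5-star": 109.74}  # Table 1
for label, wh in daily_wh.items():
    gwh_per_day = phones * wh / 1e9
    print(f"Cell Phone ({label}): {gwh_per_day:.0f} GWh/day")  # 42, 38, 36
```

Rounded to the nearest gigawatt-hour, these match the 42, 38, and 36 GWh/day of Table 3.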

# Part 2: No Existing Infrastructure

# Optimal State

To determine the optimal system for providing telephone service in a country roughly the same size as the United States but lacking existing communications infrastructure, the power requirements of each type of phone are compared. The fixed energy required to construct telephone infrastructure, averaged over the duration of the phone system, is negligible. The limiting values for landline and cellular phone penetration are 2.37 and 1.1 phones per person respectively, the same as in the United States. The energy needed per day is the population multiplied by the phone penetration factor and the energy per day per phone. Figure 5 shows the projected power requirements for the country over time.

![](images/3e97eb4b5028d25fc93da70ea92373428f045e75dd2ca561798564d4f086ba80.jpg)
Figure 5: Telephone Power Need Forecast for Saturated Market

From these data, corded landline phones are the most energy-efficient, using about 3.2 gigawatt-hours of energy per day. However, universal use of corded phones is not a realistic scenario. When landline infrastructure is present, 75% of landline phones are assumed to be cordless and 25% corded. Also, there are three levels of cell phone chargers to consider: the current average charger in the U.S., the more-efficient chargers currently being manufactured, and the energy-conserving 5-star chargers which are not yet common [21]. Calculating the energy use of these in the saturated market results in the power requirements shown in Figure 6.

![](images/54a8a22fd5043b2ffc5fad483a3408a430314eea4e0c29e791950e481294fb37.jpg)
Figure 6: Accurate Telephone Power Forecast for Saturated Market

From an energy perspective, it is most beneficial to create the infrastructure for a cell phone communication system. Legislation curbing the waste that chargers create could make this state even more energy efficient.

# Additional Factors

Beyond impacts on energy consumption, numerous other factors determine which type of phone will be favored by the general population. Cell phones provide greater mobility while increasing safety, especially for people travelling alone or in a small group. They also allow older children increased independence without putting themselves in danger [22]. Impromptu scheduling changes and emergencies can be more easily handled with immediate communication available. From a business standpoint, cell phones can make employees easier to reach. This can increase productivity by allowing employees to perform their jobs while not physically in the office. Cell phones are also used to replace watches, cameras, and alarm clocks, which may impact their overall energy usage and price in comparison to the energy used by a separate phone, watch, camera, and alarm clock [23].

Cell phones also have negative consequences. In particular, it is suspected that cell phones contribute to brain cancer and tumors due to the radiation emitted by both the cell phones and the cell towers [24, 25]. In addition, cell phones can interrupt family life, straining relationships between generations. Adults who use cell phones for work sometimes let them interfere with family life, while children become attached to their cell phones as a means of contacting peers, leading to more peer-based and fewer family-based activities [26]. The nature of a cell phone can also limit the ability to contact a group, such as a family. Instead of making a single call, it may be necessary to call each member separately, wasting time and effort, since there is no universal means of communication. In addition, cell phone rings often interrupt important events, such as family dinners, movies, classes, sporting events, and concerts, decreasing people's enjoyment of the experience. They are also a distraction to people at work [27].
Cell phones can increase response time for emergency vehicles, because a cellular position is much more difficult to track than that of a landline [28]. Cell phones generally have higher prices, more expensive plans, and a shorter lifespan than landline phones [29]. They are also more likely to be lost or stolen due to their transportable nature. In addition, battery life is limited, and with more cell phones in existence, there will likely be fewer pay phones or landlines in public places in case of emergencies.

# Part 3: Effects of Cell Phone Charger Negligence

Often, cell phones are left to charge overnight, and in the morning when they are unplugged the charger is left plugged into the wall, still drawing current. This practice wastes energy, as cell phones only need to be charged for a portion of the night. In order to determine the maximum amount of energy wasted by cell phone users in the United States, both of these negligent practices were taken into account, as shown in Equation 7. $\mathrm{W_t}$ is the total amount of wasted energy generated by cell phones through overcharging $(\mathrm{W_v})$ and failing to unplug the charger when not in use $(\mathrm{W_u})$.

Eq. 7

$$
W_{t} = W_{v} + W_{u}
$$

In order to quantify this, it is necessary to create models for both types of waste. The general form of the equation for any type of charger waste is given in Equation 8. The waste (W) depends on the number of households (H), the average number of chargers per household (C), the average power drawn during the wasteful period (P), and the hours per day of wasteful practice (h). There are also conversion factors for the inefficiency of power plants $(\mathrm{W_p})$ and for converting watt-hours to barrels of oil (B). The value of L is 1.1 phones per person at the steady state.

Eq. 8

$$
W = H C B W_{p} P h L
$$

In order to use this equation to determine the amount of waste due to the over-charging of cell phones, the wasted charging time must be calculated as the difference between the time spent charging and the charging time needed, as shown in Equation 9. The power draw must also be specialized to this type of waste $(\mathrm{P_v})$.

Eq. 9

$$
W_{v} = H C B W_{p} P_{v} \left(h_{charging} - h_{needed}\right) L
$$

The second form of waste can be modeled in a similar manner, as shown in Equation 10; here the relevant time is the number of hours that the charger spends in the idle state $(\mathrm{h_{idle}})$, and the power draw is again specialized $(\mathrm{P_u})$.

Eq. 10

$$
W_{u} = H C B W_{p} P_{u} h_{idle} L
$$

This results in an overall model for the waste from cell phones in terms of barrels of oil, as shown in Equation 11.

Eq. 11

$$
W_{c} = H C B W_{p} \left(P_{u} h_{idle} + P_{v} \left(h_{charging} - h_{needed}\right)\right) L
$$

In order to calculate the total waste from cell phones, each of the following values had to be assigned. The appropriate values are listed in Table 4.

Table 4: Cellular Charger Waste Components
| Factor | Value |
|---|---|
| $H$ | 126,316,181 households in the U.S. [8] |
| $C$ | 2.37 cell phone chargers per household (one per person) [8] |
| $B$ | 1 barrel of oil / (1.6998 × 10^6 Wh) [30, 31] |
| $W_p$ | 3.0128 [9] |
| $P_u$ | 0.3 W [9, 17, 18] |
| $h_{idle}$ | 16 hours |
| $P_v$ | 0.845 W [32] |
| $h_{charging}$ | 8 hours |
| $h_{needed}$ | 1 hour |
| $L$ | 1.1 (cell), 2.37 (landline) |
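Equation 11 with the tabulated values can be checked numerically; a minimal sketch:

```python
# Substituting the Table 4 values into Eq. 11 for cell phone charger waste.
H  = 126_316_181   # households in the U.S.
C  = 2.37          # cell phone chargers per household
B  = 1 / 1.6998e6  # barrels of oil per watt-hour
Wp = 3.0128        # power-plant production factor
Pu, h_idle = 0.3, 16                     # idle draw (W) and idle hours/day
Pv, h_charging, h_needed = 0.845, 8, 1   # charging draw (W) and hours

L = 1.1  # cell phones per person at steady state

W_c = H * C * B * Wp * (Pu * h_idle + Pv * (h_charging - h_needed)) * L
print(round(W_c))  # barrels of oil per day
```

This reproduces the roughly 6,254 barrels of oil per day reported below for careless cell phone use.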
+ 

As noted, many of these values were reported or were calculated from reported data. The only values that were approximated were the times spent charging per day. It was noted that people often leave their cell phones charging all night while they sleep. Assuming that this is always the case, all cell phones would be charged for approximately eight hours a night, as noted above. As assumed earlier, each cell phone only requires 1 hour of charging per day. If the charger is left plugged in, all of the remaining time, 16 hours a day, would also be wasted while the charger sits in the idle state. Using these values and assumptions, all of the cell phones within a country the size of the United States would waste the equivalent of 6,254 barrels of oil per day due to careless cell phone use.

# Part 4: Effects of Battery Charger Negligence

Waste due to battery chargers applies to many types of electronics beyond cell phones. In order to measure overall waste due to battery chargers, several similar types of waste need to be taken into account. In this case, three types of chargers were considered: the cell phone charger, as previously discussed; the cordless phone charger; and various other chargers, such as those for laptops and MP3 players. The overall waste for all types of chargers $(\mathrm{W_T})$ was modeled as the sum of the waste due to one type of phone charger and the waste due to all other chargers; the components are the cell phone $(\mathrm{W_c})$, cordless phone $(\mathrm{W_l})$, and other chargers $(\mathrm{W_o})$, as shown in Equations 12a and 12b.

Eq. 12a

$$
W_{T} = W_{c} + W_{o}
$$

Eq. 12b

$$
W_{T} = W_{l} + W_{o}
$$

The waste due to cell phones can be applied as calculated in Part 3.
The waste due to cordless phones and other chargeable items is modeled in a similar manner, although only waste due to the charger being left idle is accounted for, as seen in Equations 13 and 14. The values of power usage and hours left charging differ from those in the previous equations.

Eq. 13

$$
W_{l} = H C_{l} B W_{p} P_{l} h_{l} L
$$

Eq. 14

$$
W_{o} = H C_{o} B W_{p} P_{o} h_{o} L
$$

Therefore, the overall waste can be modeled, in terms of barrels of oil, as in Equation 15. The first equation corresponds to the waste in a cordless-phone-dominant state, while the second models waste in a cell-phone-dominant state.

Eq. 15

$$
W_{T} = H C_{l} B W_{p} P_{l} h_{l} L + H C_{o} B W_{p} P_{o} h_{o} L
$$

$$
W_{T} = H C B W_{p} \left(P_{u} h_{idle} + P_{v} \left(h_{charging} - h_{needed}\right)\right) L + H C_{o} B W_{p} P_{o} h_{o} L
$$

The numerical solution for this model at the current time can be determined by substituting the values found in Table 5.

Table 5: All Charger Waste Components
| Factor | Value |
|---|---|
| $H$ | 126,316,181 households in the U.S. [8] |
| $B$ | 1 barrel of oil / (1.6998 × 10^6 Wh) [30, 31] |
| $W_p$ | 3.0128 [9] |
| $C_l$ | 2.37 cordless chargers per household [8] |
| $P_l$ | 1.7 W [16] |
| $h_l$ | 21 hours/day |
| $C_o$ | 3.318 chargers per household |
| $P_o$ | 0.3 W |
| $h_o$ | 16 hours |
| $C$ | 2.37 cell phone chargers per household (one per person) [8] |
| $P_u$ | 0.3 W [9, 17, 18] |
| $h_{idle}$ | 16 hours |
| $P_v$ | 0.845 W [32] |
| $h_{charging}$ | 8 hours |
| $h_{needed}$ | 1 hour |
| $L$ | 1.1 (cell), 2.37 (landline) |
+ 

The majority of the data in the above table comes from studies and reports; a few values were reasoned out. For instance, the number of hours that a cordless phone would sit idle was based on the assumption that the phone charges for 2 hours a day and is used for 1 hour a day, leaving it idle for 21 hours per day [2]. In addition, the power drawn by all other adapters was assumed to be constant and approximately the same as that of the average cell phone charger (0.3 W) [9, 17, 18]. The number of chargers per household was taken as the product of the average number of people per household and an approximation of the number of chargers present and in regular use within the house. These data yield an average waste of about 48,600 barrels of oil per day for all chargers in a cordless-phone-dominated U.S., compared to 9,956 barrels of oil per day in a cell-phone-dominated U.S. These results are shown in Table 6 below.

Table 6: All Charger Total Waste
| Charger Type | Barrels of Oil/day |
|---|---|
| Cell Phone | 6,254 |
| Cordless Phone | 44,895 |
| Other | 3,702 |
| Total (Cell Phone State) | 9,956 |
| Total (Cordless State) | 48,600 |
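The component figures follow from Equations 11 and 13 with the Table 5 values; a sketch cross-checking the totals (the "other charger" figure is taken as reported rather than recomputed):

```python
H, B, Wp = 126_316_181, 1 / 1.6998e6, 3.0128

# Eq. 13: cordless phone chargers idling (Table 5 values, landline L = 2.37).
W_l = H * 2.37 * B * Wp * 1.7 * 21 * 2.37

# Eq. 11: cell phone chargers over-charging plus idling (cell L = 1.1).
W_c = H * 2.37 * B * Wp * (0.3 * 16 + 0.845 * (8 - 1)) * 1.1

W_o = 3_702  # other chargers, reported value

print(round(W_c + W_o))  # cell-phone-dominant total, barrels of oil/day
print(round(W_l + W_o))  # cordless-dominant total, ~48,597 (reported as 48,600)
```

The cell-phone-dominant total matches the tabulated 9,956 barrels per day; the cordless-dominant total differs from the reported 48,600 only by rounding.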
+ 

# Part 5: Effects of Economic and Population Growth

Based on the assumption that all $m$ members of the $H$ households within the Pseudo U.S. own a phone, changes in economic status will not affect the total energy used by phones. In order to project energy usage over the next fifty years, a model was developed for the population of the Pseudo United States, which is assumed to equal the population of the actual United States. Based on data from the U.S. Census Bureau [33, 34, 35, 36, 37, 38, 39, 40], Microsoft Excel was used to create a regression describing population, $\mathrm{T_{Population}}$, as a function of time, $\mathrm{X_{Year}}$, as seen in Figure 7 and Equation 16:

![](images/510741ea847fee6221240ab4879cfbfd00e462dd09fd7f80fe92d9404b083abd.jpg)
Figure 7: Linear Regression of U.S. Population over Time

Eq. 16

$$
T_{Population} = 3216980 \, X_{Year} - 6156752732
$$

With an $R^2$ coefficient of 0.9999, this model accurately matches the U.S. Census Bureau's predictions for population growth into the future. This model of population growth can be used in conjunction with the energy equations developed in Part 1 to determine the total energy used by the Pseudo U.S. at any given time $\mathrm{X_{Year}}$. In order to find the total energy used over each 10-year period, we can integrate the population function from $\mathrm{X_{Year,n}}$ to $\mathrm{X_{Year,(n+10)}}$ and multiply by the energy equations, denoted $\mathrm{E_{Phone}}$, as seen in Equation 17.

Eq. 17

$$
E_{Used} = 365 \left(E_{Phone} \int_{X_{Year,n}}^{X_{Year,(n+10)}} T_{Population} \, dX_{Year}\right)
$$

Under the optimal scenario, in which cell phones with 5-star chargers saturate the market, the energy used for phone service each decade is listed in Table 7.

Table 7: Total Phone Energy per Decade
| Decade | Barrels of Oil |
|---|---|
| 2010s | 84,038,439 |
| 2020s | 92,237,469 |
| 2030s | 100,562,475 |
| 2040s | 109,025,120 |
| 2050s | 117,424,011 |
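The decade figures can be cross-checked against Equation 17's population integral; a sketch (the per-phone factor $E_{Phone}$ is not restated in this part, so only the integral term and the sum of the reported decade figures are verified):

```python
# Antiderivative of the Eq. 16 regression T = 3216980*X - 6156752732.
def pop_integral(x):
    return 3_216_980 / 2 * x**2 - 6_156_752_732 * x

# Person-years accumulated during the 2010s: the integral term of Eq. 17.
person_years_2010s = pop_integral(2020) - pop_integral(2010)
print(person_years_2010s)  # about 3.25 billion person-years

# Reported decade energies, in barrels of oil.
decades = [84_038_439, 92_237_469, 100_562_475, 109_025_120, 117_424_011]
print(sum(decades))        # the stated 50-year total
```

The five decade figures sum exactly to the 503,287,514 barrels quoted below.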
+ 

The total number of barrels of oil that must be provided to power plants over the next 50 years for this scenario is then 503,287,514.

# Analysis of the Model

# Verification

In order to truly verify this model, one would have to obtain data for the next ten years and compare the actual results to the predicted ones. Historical data cannot be used to verify this model, because data regarding the energy consumed by phone usage is not readily available. It was, however, possible to verify the competition model graphically against historical data, and most of the statistics used could be verified through additional research.

# Strengths

- Simplicity – This model is simple enough that it requires little mathematical skill to operate. In addition, it is easily converted into an electronic form, such as Microsoft Excel or MATLAB, and can therefore be displayed visually so that nearly no mathematical knowledge is necessary to understand the model.
- Developed from historical data – Population trends and the competition model were based on real data from the U.S. Census Bureau and the CTIA Wireless Association.
- Extendable – In order to include additional factors, the model could be extended with additional terms with little impact on the functionality of the energy equations.
- Flexibility – The equations used in this problem could be applied to other competing products that use energy, provided an appropriate competition model can be created.
- Closed-form solution – With the appropriate data, this model will generate numerical and graphical solutions.
- Calculation time – Due to the simplicity of the calculations, this model can be solved in a relatively short amount of time.
- Includes variations – This model accounts for cell, corded, and cordless phone usage, as well as combinations of the three, allowing for a more complete analysis.
+ 

- Considers outside factors – This study considers the implications of mobility and convenience for a realistic approach to energy efficiency.
- Energy production costs – The costs in this model take into account the inefficiencies of power generation by considering total energy produced instead of total energy consumed. As a result, the number of barrels of oil reported is the actual number that would need to be burned in order to power these phones.

# Weaknesses

- Forecasting – The model does not account for any changes in technology over the time period.
- Infrastructure costs – The initial infrastructure cost was assumed to defray to zero over time in order to decrease the number of inputs needed for the model. In reality these costs could have an effect, especially in the short term.
- Infrastructure maintenance costs – The infrastructure maintenance and operations costs were not accounted for due to lack of data. For a more robust model, another term could be added to the energy equation to account for this energy consumption. Examples include the power used by each cell phone tower, approximately 1 kW–10 kW, and the average power used to repair telephone lines damaged by storms.
- Assumptions – Due to limited data availability, several simplifying assumptions had to be made in order to create a solvable model. In addition, some values used in the calculations had to be estimated.
- Inputs – This model requires a large amount of data, some of which is difficult to obtain.

# Conclusion

The compilation of these factors suggests that landlines are the most energy-feasible option only when all phones are corded phones running power straight off of the phone line. Otherwise, the most efficient means of providing telecommunication is through cell phones. This conclusion is based on the following findings:

- The steady state of a country with existing infrastructure would require between 36 and 42 gigawatt-hours of energy per day.
+ 

- With no established infrastructure, it would be more energy-beneficial to have corded phones running off of phone lines (3.2 gigawatt-hours per day), but other factors, such as the preference for cordless technology, suggest that a cell phone infrastructure may be a safer investment.
- Cell phone and other charger negligence would cause a maximum of 9,956 barrels of oil per day to be wasted. Cell phone charger negligence alone would waste a maximum of 6,254 barrels of oil per day, while cordless phone negligence would waste 44,895 barrels of oil per day.
- The analysis of the telecommunications industry for the future shows that cell phones will be the most viable option, as they will require only 503,287,514 barrels of oil over the next fifty years.

Due to the social benefits of cell phones, as well as their energy efficiency relative to cordless phones, a cell-phone-dominant state should be accepted in the current infrastructure. Despite the fact that cell phones are less efficient than corded landline phones, they are more accepted by the general public.

# Glossary

Chargers – Devices used to help batteries regain their power supply. These include, but are not limited to, those used for cell phones, laptop computers, rechargeable batteries, cordless phones, and MP3 players.

Corded Phone – A phone that uses only a small amount of power from the phone line to function, but is not mobile and is hindered by the length of the cord.

Cordless Phone – A phone with a dock that is connected to a landline. It can be used only over a small distance, has no cord, and therefore must rely on a battery to run.

Household – As described by the U.S. government census report, "a household includes all the persons who occupy a housing unit. A housing unit is a house, an apartment, a mobile home, a group of rooms, or a single room that is occupied (or if vacant, is intended for occupancy) as separate living quarters.
+ 

Separate living quarters are those in which the occupants live and eat separately from any other persons in the building and which have direct access from the outside of the building or through a common hall. The occupants may be a single family, one person living alone, two or more families living together, or any other group of related or unrelated persons who share living arrangements" [41].

Idle State (charger) – Encompasses the time when a charger is plugged in but has no electronic device connected to it. While in this state, it draws power but performs no desired function.

Landline – A physical phone line that runs into a house to provide service.

Steady State – The state at which cell phone usage and landline usage are in equilibrium. In these cases, this is expected to occur at the extremes where there are only cell phones or only landline phones.

Transition Period – The amount of time during which both cell phones and landline-based phones will be in use. During this time, it is possible that some people may have both. The Competing Species Model used here predicts a transition period from 1995 to 2025.

# Works Cited

[1] "Cell Phone Batteries." 2008. Discount Cell. 8 Feb. 2009 .
[2] "Energy Consumption of Household Products." Residential Equipment and Appliances. Ace3. 6 Feb. 2009 .
[3] "The Effectiveness of Cell Phone Reuse, Refurbishment and Recycling Programs." OSWER Innovation's Pilot. September 2002. United States Environmental Protection Agency. 6 Feb. 2009 .
[4] "Environmental Data." Recellular. 6 Feb. 2009 .
[5] Broussely, M., S. Herreyre, P. Biensan, P. Kasztejna, K. Nechev, and R. J. Staniewicz. "Aging Mechanism in Li ion cells and calendar life predictions." Journal of Power Sources. July 2001: 13-21. Science Direct. Clarkson University Library, Potsdam, NY. 6 Feb. 2009 .
[6] "Frequently Asked Questions about Lithium Ion Batteries." Green Batteries. 2008. Responsible Energy Corporation. 6 Feb. 2009 .
+ 

[7] "Search Results for Cordless Phone." Walmart.com. 2009. Walmart Stores, Inc. 7 Feb. 2009 .
[8] "State and Country Quick Facts." US Census Bureau. 2007. U.S. Census Bureau. 6 Feb. 2009 .
[9] Singhal, Pranshu. "Stage I Final Report: Life Cycle Environmental Issues of Mobile Phones." Integrated Product Policy Pilot Report. April 2005. Nokia. 6 Feb. 2009 .
[10] "Wireless Quick Facts." Advocacy. 2008. CTIA. 6 Feb. 2009 .
[11] Giordano, Frank R., Maurice D. Weir, and William P. Fox. Mathematical Modeling. 3e. Pacific Grove, CA: Brooks/Cole. 2003.
[12] Mesterton-Gibbons, Michael. A Concrete Approach to Mathematical Modelling. New York, NY: John Wiley & Sons, Inc. 1995.
[13] Farlow, Jerry, James E. Hall, Jean Marie McDill, and Beverly H. West. Differential Equations and Linear Algebra. 2e. Upper Saddle River, NJ: Pearson Education, Inc. 2007.
[14] MATLAB R2007b. The MathWorks, Inc., 2007.
[15] De Decker, Kris. "The Right to 35 Mobiles." Low-tech Magazine. 13 Feb. 2008. Low-tech Magazine. 9 Feb. 2009 .
[16] MacKay, David. Sustainable Energy – without the hot air. 2009. UIT Cambridge Ltd. 6 Feb. 2009 .
[17] "Corporate Responsibility Report 2007." Motorola. 2008. Motorola, Inc. 6 Feb. 2009 .
[18] "Motorola Eco Facts." Motorola. 2009. Motorola, Inc. 6 Feb. 2009 .
[19] "Draw 150mW Of Isolated Power From Off-Hook Phone Line." Maxim. 6 July 1998. Maxim Integrated Products. 9 Feb. 2009 .
[20] "CTIA." 2006. CTIA – The Wireless Association. 6 Feb. 2009 .
[21] Begun, Daniel A. "Phone Makers Monitor Charger Energy Consumption." Hot Hardware. 19 Nov. 2008. David Altavilla and HotHardware.com. 6 Feb. 2009 .
[22] Tobin, Declan. "Teens and Cell Phones." 31 March 2005. EzineArticles.com. 8 Feb. 2009 .
[23] Kwan, Michael. "Feature: Common Uses for Cell Phones Beyond Voice Calls." Mobile Magazine. 22 October 2008. Mobile Magazine. 8 Feb. 2009 .
[24] Levitt, Blake B. "Cell-Phone Towers and Communities: The Struggle for Local Control." *Orion Afield*. 1998.
The EnvironLink Network. 6 Feb. 2009 . +[25] Quiring, Lynn. “Beacons of Harm: Cell Phone Towers and Mobile Phone Masts.” 4 May 2008. 6 Feb. 2009 . +[26] “Study Links Family Problems With Excessive Cell Phone Use.” 2006. Cell phones etc. 8 Feb. 2009 . +[27] Gulli, Cathy. "Help! My office is ring tone hell." Maclean's 26 Jan. 2009: 52. Masterfile Select. EBSCOhost. Clarkson University Library, Potsdam, NY. 6 Feb. 2009 . +[28] "Rock County 911 Dispatchers Say Cell Calls More Difficult to Pinpoint." 2 May 2008. Channel 3000. 8 Feb. 2009 . +[29] Coombes, Andrea. "Cutting the Phone Cord? Not so Fast." MarketWatch. 11 October 2004. Market Watch, Inc. 8 Feb. 2009 . +[30] Cooper, Alan H. "Part III – Administrative, Procedural, and Miscellaneous." Internal Revenue Service. 6 April 1999. Department of the Treasury. 6 Feb. 2009 . +[31] "Appendix H Conversion Factors." Annual Energy Outlook 2001. 2001. Energy Information Administration. 6 Feb. 2009 . +[32] MacKay, David. "Phone chargers – the truth." 14 Nov. 2008. University of Cambridge. 8 Feb. 2009 . +[33] "National Population Predictions Released 2008 (Based on Census 2000)." U.S. Population Projections. 2008. US Census Bureau. 8 Feb. 2009 . + +[34]"Table 1. Projections of the Population and Components of Change for the United States: 2010 to 2050." 14 August 2008. US Census Bureau. 8 Feb. 2009 . +[35] "Population Estimates." 22 Dec. 2008. US Census Bureau. 8 Feb. 2009 . +[36] "Annual Population Estimates 2000 to 2008." National and State Population Estimates. 19 Dec. 2009. US Census Bureau. 8 Feb. 2009 . +[37] “Table 1: Annual Estimates of the Resident Population for the United States, Regions, States and Puerto Rico : April 1, 2000 to July 1, 2008.” 22 Dec. 2008. US Census Bureau. 8 Feb. 2009 . +[38] “1980s.” 3 Dec. 2004. US Census Bureau. 8 Feb. 2009 . +[39] “1990s.” 19 Aug. 2008. US Census Bureau. 8 Feb. 2009 . +[40] “Table 1. 
Projections of the Population and Components of Change for the United States: 2010 to 2050.” 14 August 2008. US Census Bureau. 8 Feb. 2009 .
[41] "Persons per Household, 2000." State & County Quick Facts. 2000. US Census Bureau. 8 Feb. 2009 .
[42] "Wireless Competition Bureau Statistical Reports (formerly FCC-State Link)." *Statistical Reports.* 8 Dec. 2008. Federal Communications Commission. 6 Feb. 2009 .
[43] "Statistical Trends in Telephony." Statistical Reports. 8 Dec. 2008. Federal Communications Commission. 6 Feb. 2009 .
[44] "Trends in Telephone Service." August 2008. Federal Communications Commission. 6 Feb. 2009 .

# Appendix
| Year | Population | Cell Phones | Model Cells per 100 Pop | Landlines | Model Landlines per 100 Pop |
|---|---|---|---|---|---|
| 1980 | 226,542,250 | | | 102,216,367 | |
| 1981 | 229,465,744 | | | 105,559,222 | |
| 1982 | 231,664,432 | | | 107,519,214 | |
| 1983 | 233,792,014 | | | 110,612,689 | |
| 1984 | 235,824,908 | | | 112,550,739 | |
| 1985 | 237,923,734 | 204,000 | | 115,985,813 | |
| 1986 | 240,132,831 | 500,000 | | 118,289,121 | |
| 1987 | 242,288,936 | 884,000 | | 122,789,249 | |
| 1988 | 244,499,004 | 1,609,000 | | 127,086,765 | |
| 1989 | 246,819,222 | 2,692,000 | | 131,504,568 | |
| 1990 | 248,765,170 | 4,369,000 | | 136,114,201 | |
| 1991 | 252,153,092 | 6,390,000 | | 139,412,884 | |
| 1992 | 255,029,699 | 8,893,000 | | 143,341,581 | |
| 1993 | 257,782,608 | 13,067,000 | | 148,106,159 | |
| 1994 | 260,327,021 | 19,284,000 | | 153,446,946 | |
| 1995 | 262,803,276 | 28,154,000 | 11.00 | 159,658,662 | 60.00 |
| 1996 | 265,228,572 | 38,195,000 | 14.12 | 166,445,580 | 63.04 |
| 1997 | 267,783,607 | 48,706,000 | 17.93 | 173,866,799 | 65.30 |
| 1998 | 270,248,003 | 60,831,000 | 22.47 | 179,849,045 | 66.68 |
| 1999 | 272,690,813 | 76,285,000 | 27.76 | 185,002,911 | 67.13 |
| 2000 | 282,171,936 | 97,036,000 | 33.74 | 188,499,586 | 66.64 |
| 2001 | 285,039,803 | 118,398,000 | 40.29 | 185,587,160 | 65.25 |
| 2002 | 287,726,647 | 134,561,000 | 47.24 | 180,095,333 | 63.06 |
| 2003 | 290,210,914 | 148,066,000 | 54.35 | 173,140,710 | 60.20 |
| 2004 | 292,892,127 | 169,467,000 | 61.36 | 165,979,938 | 56.83 |
| 2005 | 295,560,549 | 194,479,000 | 68.04 | 157,037,503 | 53.13 |
| 2006 | 298,362,973 | 219,652,000 | 74.21 | 146,848,926 | 49.27 |
| 2007 | 301,290,332 | 243,428,000 | 79.73 | | 45.40 |
| 2008 | 304,059,724 | | 84.56 | | 41.63 |
| 2009 | 307,146,362 | | 88.70 | | 38.03 |
| 2010 | 310,233,000 | | 92.18 | | 34.68 |
| 2011 | 313,232,000 | | 95.08 | | 31.58 |
| 2012 | 316,266,000 | | 97.47 | | 28.74 |
| 2013 | 319,330,000 | | 99.43 | | 26.17 |
| 2014 | 322,423,000 | | 101.03 | | 23.85 |
| 2015 | 325,540,000 | | 102.34 | | 21.75 |
| 2016 | 328,678,000 | | 103.41 | | 19.86 |
| 2017 | 331,833,000 | | 104.29 | | 18.16 |
| 2018 | 335,005,000 | | 105.02 | | 16.63 |
| 2019 | 338,190,000 | | 105.63 | | 15.25 |
| 2020 | 341,387,000 | | 106.14 | | 14.00 |
| 2021 | 344,592,000 | | 106.56 | | 12.88 |
| 2022 | 347,803,000 | | 106.93 | | 11.85 |
| 2023 | 351,018,000 | | 107.24 | | 10.93 |
| 2024 | 354,235,000 | | 107.51 | | 10.08 |
| 2025 | 357,452,000 | | 107.74 | | 9.31 |
| 2026 | 360,667,000 | | 107.95 | | 8.61 |
| 2027 | 363,880,000 | | 108.13 | | 7.97 |
| 2028 | 367,090,000 | | 108.28 | | 7.38 |
| 2029 | 370,298,000 | | 108.43 | | 6.85 |
| 2030 | 373,504,000 | | 108.55 | | 6.35 |
| 2031 | 376,708,000 | | 108.67 | | 5.90 |
| 2032 | 379,912,000 | | 108.77 | | 5.48 |
| 2033 | 383,117,000 | | 108.87 | | 5.09 |
| 2034 | 386,323,000 | | 108.95 | | 4.74 |
| 2035 | 389,531,000 | | 109.03 | | 4.41 |
| 2036 | 392,743,000 | | | | |
| 2037 | 395,961,000 | | | | |
| 2038 | 399,184,000 | | | | |
| 2039 | 402,415,000 | | | | |
| 2040 | 405,655,000 | | | | |
| 2041 | 408,906,000 | | | | |
| 2042 | 412,170,000 | | | | |
| 2043 | 415,448,000 | | | | |
| 2044 | 418,743,000 | | | | |
| 2045 | 422,059,000 | | | | |
| 2046 | 425,395,000 | | | | |
| 2047 | 428,756,000 | | | | |
| 2048 | 432,143,000 | | | | |
| 2049 | 435,560,000 | | | | |
| 2050 | 439,010,000 | | | | |
| 2051 | 441,273,248 | | | | |
| 2052 | 444,490,228 | | | | |
| 2053 | 447,707,208 | | | | |
| 2054 | 450,924,188 | | | | |
| 2055 | 454,141,168 | | | | |
| 2056 | 457,358,148 | | | | |
| 2057 | 460,575,128 | | | | |
| 2058 | 463,792,108 | | | | |
| 2059 | 467,009,088 | | | | |
| 2060 | 470,226,068 | | | | |
+ +Population data 1980-2008, 2010-2050 [33, 34, 35, 36, 37, 38, 39, 40] + +Population data 2009 and 2051-2060 found by linear regression + +Cell phone data 1985-2007 and landline data 1980-2006 [42, 43, 44] + +Cell phones and landlines per 100 population found by competition model \ No newline at end of file diff --git a/MCM/2009/B/5898/5898.md b/MCM/2009/B/5898/5898.md new file mode 100644 index 0000000000000000000000000000000000000000..3f966a7c76fc4d6d02c222f45eea2eaa0ac598c6 --- /dev/null +++ b/MCM/2009/B/5898/5898.md @@ -0,0 +1,428 @@ +# Modeling Telephony Energy Consumption + +Control Number: 5898 + +February 9, 2009 + +# Abstract + +The energy consequences of rapidly changing telecommunications technology are a significant concern. While interpersonal communication is ever more important in the modern world, the need to conserve energy has also entered the social consciousness as prices and threats of global climate change continue to rise. Only twenty years after being introduced, cellular phones have become a ubiquitous part of the modern world. Simultaneously, the infrastructure for traditional telephones is well in place and the energy costs of such phones may very well be less. As a superior technology, cellular phones have gradually begun to replace the landline but consumer habits and perceptions have slowed this decline from being an outright abandonment. To evaluate the energy consequences of continued growth in cellphone use and a decline in landline use we present a model which describes three processes: landline consumption, cellular phone consumption, and landline abandonment as economic diffusion processes. In addition, our model describes the changing energy demands imposed by the use of the two technologies and considers the use of companion electronics and consumer habits. 
Finally, we use these models to determine the energy consequences of the future uses of the two technologies, an optimal mode of delivering phone service, and the costs of wasteful consumer habits. + +# Introduction + +In these first few years of the new millennium, energy policy has taken a center ring in political discourse, and considerable time and research has been devoted to developing sound energy policy. Intimately connected to this new drive is an increasing awareness of the impact of our lifestyle habits on energy consumption. The telephone has become a fundamental part of our social fabric, and in the past couple decades we have seen a shift from fixed landline telephones, generally one per household, to individual ownership of cell phones. It is natural to consider what impact this has on American energy consumption, and in this paper we attempt to determine just that. The factors that go in to accurately modeling telephony energy consumption are complex. An accurate picture of the energy used by landline and cell phones needs to take into account energy consumption by peripheral devices, such as answering machines for landline phones and chargers for cell phones. Moreover, landline phones are not a uniform product. Cordless phones consume considerably more energy than their corded counterparts. Likewise, the total energy cost of cell phone usage is complicated by such factors as recharging, replacement, and battery recycling. Our model takes all of these factors into account, and additionally attempts to use the limited real-world data available to chart the changes in each of these factors over time. + +Perhaps the most complex factor to model is, most generally, adoption of technological innovations in a population. This is relevant not only to landline adoption and cell phone adoption, but additionally deadoption of landline phones in the face of cell phone usage + +can be considered an independent innovation and modeled accordingly. 
Research into the phenomenon indicates that it can be modeled globally by the differential equation

$$
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),
$$

where $P$ is the proportion of the considered population which has adopted the new innovation at time $t$, $r$ is the adoption rate, and $K$ is the saturation point for the innovation within a population. A justification of this is given in the literature; an apt summary of the reasoning can be found in this statement:

> Irrespective of the particular account of the diffusion process, the stylized diffusion path of most innovations results from the fact that initially, during an embryonic phase, only a few members of the social system adopt the innovation. Over time, though, an increasing flow of new adopters is observed as the diffusion process unfolds. This is the phase of rapid market growth. Finally, during a maturing phase, the trajectory of the diffusion curve gradually slows down, and eventually reaches an upper asymptote or saturation level.

Using this description of the diffusion process, we arrive at an accurate fit to the available data and are able to predict future demand for cell phones and landlines. Determining the cost for these respective technologies, we arrive at the total energy burden. Briefly, we explore how this question relates to the energy consumption of other household electronics, and how much waste is generated therein. Additionally, we explore the caveat that technological development has been, and continues to be, wildly unpredictable, and the consequences of this reality.

A separate question is to consider how best to distribute landline and cell phones throughout a population committed to neither so as to minimize energy consumption, ideally without violating social preference.
This problem is explored through an optimization with respect to energy usage, in which we discover that a country (here the "Pseudo-USA") supporting a cell-phone-only communication infrastructure minimizes its total energy consumption while also satisfying social demand for novel technologies. Finally, we estimate the total energy consumption of such a nation over the next fifty years.

# Model Overview

We examine two approaches to modeling technology diffusion through a population. The first attempts to gauge technology adoption at the household level and aggregate these results to model global trends. However, this approach is unsuccessful, and the reasons for this are explained. The second approach models technology adoption at the global level. This model:

1. Accurately models past and present telephony energy consumption,
2. Makes future predictions of cell phone saturation and landline deadoption consistent with previous technological replacement paradigms, and
3. Encompasses a broad range of pertinent factors in telephony energy consumption.

# Model Derivation

**Adoption of Innovations.** Our model describes the United States' usage rates for landlines and cell phones as three diffusive innovation curves. Consider the adoption of an innovation $Y$. At small times after the development of this innovation, adoption of $Y$ throughout a population is minimal. As the innovation spreads, demand increases until a saturation point is reached. Thus, the spread of $Y$ throughout a population is proportional to its current prevalence, but is checked from exponential growth by an upper bound on its saturation in the population. At its simplest, we can model this as

$$
\frac{dY}{dt} = Y(1 - Y).
$$

Of course, adoption is not uniform between different technologies, and saturation levels likewise vary.
By introducing constants $r$ for the adoption rate and $K$ for the saturation level, we can refine our model to

$$
\frac {d Y}{d t} = r Y (1 - Y / K),
$$

whose solution is the logistic function. Therefore, for each of the processes we assume a model of the form

$$
\frac {A}{1 + B e ^ {- C t}}.
$$

The sigmoidal form assumed by adoption processes is well known and has been observed in the specific case of cell phone adoption and wireless-only lifestyle adoption. Quoting from the literature:

> Several behavioural theories have traditionally been set forth to explain the S-shaped nature of diffusion processes. For example, Griliches (1957) proposed an 'epidemic', demand-induced explanation for the emergence of an S-shaped diffusion curve; Mansfield (1961) sought to explain the observed patterns of diffusion in terms of the expected profitability of the innovation, and the dissemination of information about its technical and economic characteristics; Rogers (1983) employed a communications-based model for explaining diffusion patterns; Artle and Averous (1973), analysing the telephone system, offered a 'network consumption externality' explanation wherein the value of the network for a subscriber increases with the number of adopters of the system.

![](images/2dd0105467228d2300d2201e053fa157a7d62c99ed6857a75edb5f51b3f4f609.jpg)

Figure 1: Sigmoidal growth of cell phone adoption (average cell phone penetration) in various countries.

Proceeding globally, we initially model the consumption of telephones from their inception by the equation

$$
p _ {l} (t) = A \left(\frac {1}{1 + B e ^ {- C (t - D)}} + \frac {1}{1 + E e ^ {F (t - G)}} - 1\right), \tag {1}
$$

where the parameters $D$ and $G$ are chosen so that time is shifted relative to the onset of cell phone adoption. This is essentially the sum of two sigmoid curves. The first models the adoption of the landline phone as a new innovation.
We model the deadoption of landlines as an independent innovation, a 'wireless-only' lifestyle, which has a subtractive effect on total landline usage.

Likewise, the consumption of cell phones is given by

$$
p _ {c} (t) = \frac {J}{1 + e ^ {- K (t - L)}}, \tag {2}
$$

where again $L$ is a time shift chosen to make the model coincide with the onset of cell phone adoption.

Initial attempts were made to model this at the microscopic level, but this proved intractable. From census data, the number of households $H$ with $m$ members over the course of history is readily available. Equally accessible are the penetration rates and average costs of cellular and landline communications. With this abundance of data, one may be tempted to propose an econometric forecast of telephony usage driven by the marginal cost-benefit analysis a household performs. However, determining a functional form for behaviors muddled by habit and irrationality is troublesome, and even when reduced to a first-order approximation, such a model still requires the calibration of numerous parameters [KOYCK citation]. After several attempts, this approach was abandoned. We believe the above model captures the data equally well without making undue assumptions.

Energy Cost of Landlines Together these two functions model three processes: landline adoption, wireless adoption, and wireless-only adoption. Additionally, they describe the long-term behavior of these processes as they reach a steady state. To approach the question of annual energy consumption by telephony products, we combine these functions with models for energy expenditure by landline phones and their peripherals, as well as cell phones and their peripherals. The formula for energy consumption by landline phones and peripherals is

$$
E _ {l} (t) = P p _ {l} h \left(\pi_ {a} e _ {a} + \pi_ {b} e _ {b} + \pi_ {c} e _ {c} + \pi_ {d} e _ {d}\right).
\tag {3}
$$

The following table delineates the variables and their explanations. Note that the time variable $t$ is normalized so that $t = 0$ denotes 1960.

| Variable | Description |
| --- | --- |
| $P(t)$ | Population of the U.S. in year $t$ |
| $p_l(t)$ | Landlines per person in the U.S. |
| $h(t)$ | Handsets per landline |
| $\pi_a(t)$ | Percentage of landline owners with corded phones |
| $e_a(t)$ | Yearly energy consumption (YEC), in kWh, by corded phones |
| $\pi_b(t)$ | Percentage of landline owners with cordless phones |
| $e_b(t)$ | YEC by cordless phones |
| $\pi_c(t)$ | Percentage of landline owners with combination cordless phone/answering machines |
| $e_c(t)$ | YEC by combination cordless phone/answering machines |
| $\pi_d(t)$ | Percentage of landline owners with separate answering machines |
| $e_d(t)$ | YEC by separate answering machines |

Table 1: Variables in the landline energy model.

Due to the limitations imposed on the model by a lack of relevant data, several assumptions were made. First, all yearly energy consumption functions are assumed to be constant in time. Because corded phones draw their energy solely from phone lines, there is little room for variation in their power draw, so this at least seems reasonable. Answering machines, cordless phones, and combinations of the two do not have this restriction, and it seems likely that they are becoming more energy efficient with time; however, no data were available to support this hypothesis, so the YEC values have been fixed based on available sources. The adoption of cordless vs. corded phones and of answering machines no doubt follows its own sigmoidal curves, but again no data were available to gauge this accurately. So, the variables $h$, $\pi_{a}$, $\pi_{b}$, $\pi_{c}$, $\pi_{d}$ are all modeled as first-order linear approximations. Regardless, results produced by the model agree well with available data for energy consumption.

Energy Cost of Cell Phones The energy cost for cell phones is likewise complex. It can be modeled as

$$
E _ {C} (t) = P p _ {c} \left(E _ {c 1} + E _ {c 2}\right), \tag {4}
$$

where

$$
E _ {c 1} (t) = f _ {C} \left(C _ {\text{charge}} t _ {\text{charge}} + C _ {\text{standby}} t _ {\text{standby}}\right)
$$

and

$$
E _ {c 2} (t) = R _ {\text{cell}} R (t).
$$

The following table lists and describes each relevant variable:
| Variable | Description |
| --- | --- |
| $P(t)$ | Population of the USA in year $t$ |
| $p_c(t)$ | Number of cell phones per person |
| $E_{c1}$ | YEC by cell phones and chargers |
| $E_{c2}$ | YEC by cell phone recyclers |
| $f_C$ | Frequency of cell phone charging |
| $C_{\text{charge}}$ | Charger wattage while charging |
| $t_{\text{charge}}$ | Daily time the charger spends charging |
| $C_{\text{standby}}$ | Charger wattage during standby |
| $t_{\text{standby}}$ | Daily time the charger spends in standby |
| $R_{\text{cell}}$ | Energy needed to recycle one cell phone battery |
| $R(t)$ | Percentage of cell phones recycled in year $t$ |

Table 2: Variables in the cell phone energy model.

The most immediate contributions to cell phone energy consumption are charging the phone at home and leaving the charger plugged into the outlet with no phone attached. It proved difficult to find data on cell phone charging frequency. In their 1999 study, Rosen et al. argued that people charge their phones 50 times each year at their residence, noting that many people charge their phones in their cars. Even accounting for the effect of people charging in their cars, this figure seems very low; newer phones in particular, with their multitude of features, require more frequent charging. Since charging the cell phone has developed into a habit for most people, we decided to assume that people charge their phones every night and keep their chargers attached to an outlet at all times.

Another figure we drew from that study is the average charging time for cell phones. Rosen et al. observe that the average time required to charge a cell phone is 2 hours. We note that the 2-hour charging time seems low in comparison to other data we found, which suggested 3 to 4 hours to charge to $80\%$ and an additional 8 hours to charge to $100\%$. However, since we are assuming that people charge their phones every night, their batteries will probably not be empty when charging begins overnight. It is also important to note that we assume the possibility of people charging their phones overnight does not affect the 2-hour charging time. The fact that $50\%$ of all cell phone batteries are lithium-ion batteries, which do not allow for overcharging, justifies this assumption (Fishbein and Rosen). Once a lithium-ion battery is charged, the power being drawn differs negligibly from the power drawn by the charger when no phone is connected to it (Rosen). Therefore, we feel justified in adopting Rosen's statistic.
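Putting the charging parameters together (4 W for 2 hours of charging plus 0.6 W of standby counted over the full 24-hour day, 365 days a year, which is the model's own convention), the yearly charger energy per phone follows from simple arithmetic:

```python
# Yearly charger energy per cell phone from the E_c1 formula, using the
# parameter values given in the tables: f_C = 365 charges/year, 4 W for
# 2 h/day while charging, 0.6 W for 24 h/day on standby.
F_C = 365
C_CHARGE, T_CHARGE = 4.0, 2.0      # watts, hours per day
C_STANDBY, T_STANDBY = 0.6, 24.0   # watts, hours per day

daily_wh = C_CHARGE * T_CHARGE + C_STANDBY * T_STANDBY  # Wh per day
e_c1_kwh = F_C * daily_wh / 1000.0                      # kWh per year

print(round(e_c1_kwh, 3))  # 8.176, the tabulated E_c1 value

# Standby's share of charger energy, in line with the ~62% figure
# reported in the results (which also folds in recycling energy).
print(round(C_STANDBY * T_STANDBY * F_C / 1000.0 / e_c1_kwh, 2))  # 0.64
```

This is the per-phone figure that, multiplied by $P p_c$, drives the national cell phone energy estimates below.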
When modeling the energy cost of recycling used cell phone batteries, we only considered the batteries recycled by the Rechargeable Battery Recycling Corporation. This assumption is justified by its significant market share and the fact that we could confirm that the Rechargeable Battery Recycling Corporation recycles its batteries in the United States.

Energy Optimization Given the above functions for the energy costs of cellular and landline telephone usage, it is possible to optimize the delivery of these communication methods so that energy consumption is minimized. Consider a pseudo-USA of approximately the size of the United States. It is likely that such a population would have a distribution of households by household size similar to that of the current United States, where the average household size is small and there are very few families with more than seven members. Indeed, many other modern countries have very similar average household sizes, centered near 2-3 members. Let $H_{m}$ be the number of households with $m$ members, and let $l_{m}$ be the fraction of households with $m$ members that have landline service. If we assume that the communication needs of every family are satisfied either by having a landline or by each member possessing a cell phone, the numbers of required landlines and cell phones can be calculated simply as follows:

$$
T _ {l} = \sum_ {m = 1} ^ {7} l _ {m} H _ {m}, \tag {5}
$$

$$
T _ {c} = \sum_ {m = 1} ^ {7} m \left(1 - l _ {m}\right) H _ {m}, \tag {6}
$$

where $T_{l}$ is the total number of landlines and $T_{c}$ is the total number of cell phones.

We believe it is justified to assume that, in the absence of a landline, the members of a household will not share a single cell phone, as this corresponds well to the authors' everyday experience.
Then, given the cost functions $E_{l}(t)$ and $E_{c}(t)$, we can calculate the total telephony energy demand of the proposed plan for the pseudo-USA as

$$
E (t) = E _ {l} (t) + E _ {c} (t).
$$

Clearly, using only landlines will minimize the number of telephone units required for a population; however, landline phones and their companion technologies are much less energy efficient than cellular phones. Conversely, using only cell phones will maximize the number of telephone units required, and though the energy cost per unit is reduced, the overall increase in units may have deleterious consequences. Therefore, we proceed by an optimization over the variables $l_{m}$, which then yields the best communications strategy from an energy perspective. Additionally, it is possible to modify the above summations to account for roles played by cell phones that are not achievable by a landline. For example, suppose a single landline cannot serve a large family. If $n$ is the number of people a single landline can serve in a household, we may assume that a family of $m > n$ members with one landline will need to purchase $m - n$ cell phones. Then,

$$
T _ {c} = \sum_ {m = 1} ^ {7} m (1 - l _ {m}) H _ {m} + \sum_ {m = n + 1} ^ {7} l _ {m} H _ {m} (m - n),
$$

where the second term counts the extra cell phones needed by families too large to be served by their single landline. Implicit in this formula is the assumption that no family obtains a second landline to meet its communication needs. This is reasonable, as the average number of landlines per household in the current United States is only 1.118. Likewise, we could further complicate the cost function of a proposed communication plan by asserting that not every family member requires a cell phone if a landline is absent. However, we found that such a modification did not enrich the conclusions of our optimization.
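Equations (5)-(6), including the extra term for households too large for one landline, can be sketched directly. The household counts below are hypothetical placeholders, not census figures:

```python
# Counting required landlines and cell phones per Eqs. (5)-(6), plus the
# extra cell phones for households that one landline cannot serve.
def phone_counts(H, l, n=6):
    """H[m-1]: number of households of size m (m = 1..7);
    l[m-1]: fraction of size-m households with a landline;
    n: members one landline can serve. Returns (T_l, T_c)."""
    T_l = sum(l[m - 1] * H[m - 1] for m in range(1, 8))
    T_c = sum(m * (1 - l[m - 1]) * H[m - 1] for m in range(1, 8))
    # Oversized landline households buy m - n extra cell phones each.
    T_c += sum(l[m - 1] * H[m - 1] * (m - n) for m in range(n + 1, 8))
    return T_l, T_c

H = [30, 35, 16, 14, 6, 2, 1]      # hypothetical household counts (millions)
print(phone_counts(H, [1.0] * 7))  # (104.0, 1.0): landlines everywhere
print(phone_counts(H, [0.0] * 7))  # (0.0, 253.0): a cell phone per person
```

The optimization below searches over the fractions `l` between these two extremes.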
# Results

Energy Consumption Using the above information, we can create an energy consumption function $E(t)$. Specifically,

$$
E (t) = E _ {c} (t) + E _ {l} (t). \tag {7}
$$

To do so, we first had to derive exact values for the parameters of equations (1) and (2). Using the optimization algorithm described in the methods section below, we arrived at the parameter values listed in the following table:

| Parameter | Value |
| --- | --- |
| $A$ | 1.1263 |
| $B$ | 1.0924 |
| $C$ | 0.0423 |
| $D$ | 27 |
| $E$ | 0.0109 |
| $F$ | 0.1587 |
| $G$ | 30 |

Table 3: Fitted parameters for Eq. (1).

Moreover, the parameters of $E_{l}$ and $E_{c}$ can be specified as follows:
| Variable | Value |
| --- | --- |
| $P(t)$ | Population growth as predicted by the Census Bureau |
| $p_l(t)$ | As defined in Eq. (1) |
| $h(t)$ | $1.89 \times 10^{-3} t + 1.076$ for $t \le 40$; $-1.20 \times 10^{-3} t + 1.152$ for $t > 40$ |
| $\pi_a(t)$ | $1 - \pi_b(t) - \pi_c(t)$ |
| $e_a(t)$ | 20 kWh |
| $\pi_b(t)$ | $\max(0,\, 1.45 \times 10^{-2} t - 1.45 \times 10^{-1})$ for $t \le 40$; $0.44$ for $t > 40$ |
| $e_b(t)$ | 28 kWh |
| $\pi_c(t)$ | $\max(0,\, 1.07 \times 10^{-2} t - 1.07 \times 10^{-1})$ for $t \le 40$; $0.32$ for $t > 40$ |
| $e_c(t)$ | 36 kWh |
| $\pi_d(t)$ | $\max(0,\, 2.31 \times 10^{-2} t - 2.31 \times 10^{-1})$ for $t \le 40$; $0.69$ for $t > 40$ |
| $e_d(t)$ | 36 kWh |

Table 4: Parameter functions for $E_l$.
| Variable | Value |
| --- | --- |
| $p_c(t)$ | As defined in Eq. (2) |
| $E_{c1}$ | $0.365(4 \cdot 2 + 0.6 \cdot 24)$ kWh |
| $E_{c2}$ | $-0.0283\, e^{-(t-1993)/17.1573} + 0.00037$ kWh |
| $f_C$ | 365 |
| $C_{\text{charge}}$ | 4 W |
| $t_{\text{charge}}$ | 2 hr |
| $C_{\text{standby}}$ | 0.6 W |
| $t_{\text{standby}}$ | 24 hr |
| $R_{\text{cell}}$ | 0.0037 kWh |
| $R(t)$ | $-7.639\, e^{-(t-1993)/17.1573} + 0.0999$ |

Table 5: Parameter values for $E_C$.

From our model, it is expected that by 2050 cell phones will have completely replaced landlines in the USA. Thus, we can estimate our steady-state energy consumption as $E(90) = 2.99$ TWh, equivalent to 1.7 million barrels of oil, using a conservative population estimate.

Energy Optimization Results From our optimization of the distribution of telephone types in the pseudo-USA, we find that a cell-phone-only state is almost always preferable in terms of energy efficiency. Even assuming a landline can service an unlimited number of people in a household, our optimization finds that it is only energy efficient for families of size 7 or larger to own a single landline and peripherals in place of a cell phone for each family member. In the case that a landline can only effectively service six people in a household, a reasonable assumption, our optimization finds that it is always preferable for a household to purchase a cell phone for each member.

Additionally, we find that the cost of leaving cell phone chargers on standby whenever they are not active amounts to approximately $62\%$ of the total YEC, or 862,000 barrels of oil a year.

Energy Waste by Other Household Electronics We also discuss the impact of leaving devices plugged in when they are not in use. From Rosen et al., we adopt the following approach. First, we find the average wattage used in standby mode by each of the devices under consideration and the time each spends in standby mode. Then, we use saturation and penetration values to find the total energy expenditure in the current United States. The devices we consider are the computer, TV, set-top boxes (digital and analog), wireless set-top boxes, and video game consoles.

We took the data for the three types of set-top boxes and for the video game consoles from Rosen's study.
Furthermore, the average American spends 4.66 hours watching television and 4.4 hours using a computer every day. The average standby power drawn by computers and television sets turns out to be 4 W and 5.1 W, respectively. The following table summarizes our data set:

| Device | Standby Time | Power Drawn (W) |
| --- | --- | --- |
| Set-top box, analog | 78% | 10.5 |
| Set-top box, digital | 78% | 22.3 |
| Wireless receiver | 78% | 10.2 |
| Video game console | 98% | 1.0 |
| TV | 80% | 5.1 |
| Computer | 81% | 4 |

Table 6: Fraction of the day spent in standby and standby power for household devices.

We can then use this information, along with saturation rates and household penetration rates, to arrive at the following conclusion:
| Device | Standby Energy Consumption (TWh) |
| --- | --- |
| Cable, analog | 3.2 |
| Cable, digital | 0.6 |
| Wireless receiver | 1.4 |
| Video game console | 0.5 |
| TV | 10.3 |
| Computer | 3.3 |
| Total | 19.3 |

Table 7: Yearly standby energy consumption by device class.

Using the total in Table 7, we conclude that wasteful energy expenditure due to appliance standby in the USA consumes approximately 11.4 million barrels of oil yearly.

Future Predictions By our model, assuming moderate economic and population growth, the following figures were determined for the pseudo-USA:
| Year | Energy Costs (millions of barrels of oil) |
| --- | --- |
| 2010 | 1.14 |
| 2020 | 1.24 |
| 2030 | 1.66 |
| 2040 | 1.77 |
| 2050 | 1.89 |

Table 8: Predicted yearly telephony energy costs for the pseudo-USA.

However, we believe that such an analysis is of highly limited use. Predicting the future of so many variables over a 50-year period is extremely difficult, especially in the realm of technology, where it is commonplace for innovations to change social paradigms. For example, consider an attempt in the 1950s to model the growth of computer usage. Any such attempt would be unlikely to foresee the advent of personal computers, the Internet, or cell phones, which today are rapidly replacing many of the functions of personal computers. Likewise, the energy cost of a cell phone may very plausibly vary greatly due to changes in technology. It may be that social awareness about energy efficiency will drive cell phones to become even more energy efficient; it is equally likely that cell phones will gain additional features or be replaced by miniaturized computers that consume much more energy. Therefore, the above prediction can only be regarded as relevant in a state of affairs where "all other things" are equal. In the sphere of technological growth, "all other things" is very broad, and the assumed stagnation is unlikely.

# Conclusions

# Recommendations

From an energy perspective, we find that it is without a doubt more efficient to abandon landlines in favor of cell phones. This suggestion is reinforced by the model's prediction of an elimination of landlines in the near future through consumer adoption of a wireless-only lifestyle. Finally, we find that chargers on standby (i.e., not charging a device) are a significant source of energy waste. We therefore advocate that efforts be made to forgo convenience and unplug devices that would otherwise sit in standby.
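The barrel-of-oil equivalences quoted in the results can be reproduced with a one-line conversion; the kWh-per-barrel factor below is our assumption (a standard barrel-of-oil-equivalent), chosen because it is consistent with the paper's figures:

```python
# Sanity check of the oil-equivalent figures: convert TWh of electricity
# to millions of barrels of oil. KWH_PER_BARREL is an assumed conversion
# factor, not a value stated in the paper.
KWH_PER_BARREL = 1700.0

def twh_to_million_barrels(twh):
    return twh * 1e9 / KWH_PER_BARREL / 1e6

print(round(twh_to_million_barrels(2.99), 2))  # ~1.76, near the 1.7 quoted
print(round(twh_to_million_barrels(19.3), 1))  # 11.4, the standby figure
```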
# Model Strengths and Weaknesses

Strengths:

- Reproduces sigmoidal innovation-adoption behavior without making undue assumptions about the underlying processes.
- Incorporates a broad span of indirect sources of energy consumption: battery recycling, commuters with cell phones, and landline companion technologies.

Weaknesses:

- The model captures only global adoption behavior. This exclusion of underlying behavior is a detriment in capturing deviations from the standard behavior, exemplified by the underestimation in the 1990s, during which economic expansion may have driven telephone adoption.
- Due to a lack of data, the model relies on interpolation of data related to cell phone and landline energy costs.
- For simplicity, the model excludes other possible communications technologies. As noted earlier, paradigm shifts in technology are commonplace yet hard to predict.
- The model fails to capture any benefit of landlines not provided by cell phones. It may be that landlines are associated with a certain degree of security, which would mitigate the current prediction that landlines will be completely abandoned.

Future Work:

- Though we found it problematic and frustrating, we believe a model that acts at the microscopic level and takes into consideration consumer perceptions and habits, in addition to economic data, would perform best.
- We also believe that modeling cell phones and landlines as more directly competing products, with reference to economic data, would provide better data fits and predictions.
- The above analysis was limited to the household level. Landline phones persist in many businesses, and we believe this will be a significant factor in energy consumption.

# References

"American Time Use Survey Summary." 9 Feb 2009 <http://www.bls.gov/news.release/atus.nr0.htm>.
"Average size of households (most recent) by country." Statistics. 08 02 2009. NationMaster. Acce
Bagchi, Kallol. "The impact of price decreases on telephone and cell phone diffusion." Informati
"Battery FAQ -- Frequently Asked Questions." 9 Feb 2009 <http://www.cellpower.com/FAQs.cfm>.
"Behavioral Risk Factor Surveillance System." BRFSS Annual Survey Data. 07/28/2008. Center for Di
Botelho, Anabela. "The diffusion of cellular phones in Portugal." Telecommunications Policy 28(20
"Computer and Internet Use In the United States: 2003." 9 Feb 2009 <http://www.census.gov/population/www/socdemo/computer.html>.
"Construction and Housing." US Census Bureau. 9 Feb 2009.
Cox, Michael. Myths of Rich and Poor: Why We're Better Off Than We Think. 1. New York City: Basic
Ehlen, John, and Patrick Ehlen. "CELLULAR-ONLY SUBSTITUTION IN THE UNITED: IMPLICATIONS FOR TELEPH
Eisner, James. "Table 16.2: Household Telephone Subscribership in the United States." Trends in
Electricity Consumption by End Use in U.S. Households, 2001. Energy Information Administration. 9
"Extended Measures of Well-Being." Household Economic Studies. United States Census Bureau. 9 Fe
Fishbein, Bette. "Waste in the Wireless World: The Challenge of Cell Phones." 9 Feb 2009.
Holden, K. Economic Forecasting: An Introduction. 1. New York: Cambridge University Press, 1990.
*How much electricity do computers use?* 9 Feb 2009 <http://michaelbluejay.com/electricity/computers.html, http://www.accee.org/pubs/a981.htm>.
Ishihara, Kaoru, and Nobuo Kihira. "Environmental Burdens of Large Lithium-Ion Batteries Developed in a Japanese National Project." 9 Feb 2009.
U.S. Bureau of the Census, Current Population Reports, Series P25-1130, "Population Projections o
"U.S. Households by Size, 1790-2006." Household and Family Statistics. Infoplease.
The MATLAB code used for the parameter fits and the pseudo-USA energy optimization follows. Conditions and expressions that were garbled or truncated in the original listings have been reconstructed to match the model equations, with comments marking each such repair.

opt.m

```matlab
% opt.m -- randomized local search fitting the five parameters of Eq. (1)
% to the landline data (x, y); the residual expression was truncated in
% the original listing and has been reconstructed from Eq. (1).
maxt = 50000;      % iterations allowed without improvement
maxl = 1000000;    % total iteration limit
sum = 0; sum0 = -1;
tick = 0; lim = 0; change = 0;
for k = 1:47
    sum = sum + (parms0(1)*(1/(1 + parms0(2)*exp(-parms0(3)*(x(k) - 27))) ...
        + 1/(1 + parms0(5)*exp(parms0(4)*(x(k) - 30))) - 1) - y(k))^2;
end
sum
while (sum > .0001) || (abs(sum - sum0) > 0.00001)
    if (tick >= maxt) || (lim >= maxl)
        disp('Limit Reached')
        change %#ok
        break
    end
    parm = max(.5, 6*rand(1,1))*sqrt(sum)/30;
    var = 2*parm*rand(1,1) - parm;   % "pmr" in the original was a typo
    n = floor(.5 + 4*rand(1,1));
    parmb0 = parms0;
    sum0 = sum;
    sum = 0;
    if n == 0, parms0(2) = parms0(2) + var/10;  end
    if n == 1, parms0(4) = parms0(4) + var/100; end
    if n == 2, parms0(1) = parms0(1) + var;     end
    if n == 3, parms0(3) = parms0(3) + var/100; end
    if n == 4, parms0(5) = parms0(5) + var/10;  end
    for j = 1:48
        sum = sum + (parms0(1)*(1/(1 + parms0(2)*exp(-parms0(3)*(x(j) - 27))) ...
            + 1/(1 + parms0(5)*exp(parms0(4)*(x(j) - 30))) - 1) - y(j))^2;
    end
    if sum >= sum0
        parms0 = parmb0;   % revert ("parms0 = parms0" in the original was a bug)
        sum = sum0;
    else
        tick = 0;
        change = change + 1;
        sum %#ok
    end
    tick = tick + 1;
    lim = lim + 1;
end
```

opt1.m

```matlab
% opt1.m -- same random search, fitting the landline function tele() alone
max = 10000; maxl = 1000000;
tick = 0; lim = 0; change = 0;
sum = 0; sum0 = -1;
for j = 1:41
    sum = sum + (tele(x(j), parms) - y(j))^2;
end
while (sum > 10^(-5)) || (sum - sum0 < 10^(-9))
    if (tick >= max) || (lim >= maxl)
        disp('Limit Reached')
        change %#ok
        break
    end
    sum0 = sum;
    parmb = parms;
    p = sqrt(sum)/30;
    var = p*(2*rand(1,1) - 1);
    n = floor(1 + 3*rand(1,1));
    if n == 1, parms(n) = parms(n) + var;     end
    if n == 2, parms(n) = parms(n) + var/100; end %#ok<*AGROW>
    if n == 3, parms(n) = parms(n) + var;     end
    sum = 0;
    for k = 1:41
        sum = sum + (tele(x(k), parms) - y(k))^2;
    end
    if sum >= sum0
        parms = parmb;
        sum = sum0;
    else
        tick = 0;
        change = change + 1;
    end
    tick = tick + 1;
    lim = lim + 1;
    sum %#ok
end
```

opt2.m

```matlab
% opt2.m -- fits the cell phone adoption function celle() to (x1, y1)
max = 10000; maxl = 1000000;
lim = 0; tick = 0; change = 0;
sum = 0; sum0 = -1;
for j = 1:19
    sum = sum + (celle(x1(j), parms1) - y1(j))^2;
end
while (sum > 10^(-5)) || (sum - sum0 < 10^(-9))
    if (tick >= max) || (lim >= maxl)
        disp('Limit Reached')
        change %#ok
        break
    end
    sum0 = sum;
    parmb1 = parms1;   % backup ("parms1 = parms1" in the original was a bug)
    p = sqrt(sum)/10;
    var = p*(2*rand(1,1) - 1);
    n = floor(1 + 3*rand(1,1));
    if n == 1, parms1(n) = parms1(n) + var;      end
    if n == 2, parms1(n) = parms1(n) + var/1000; end %#ok<*AGROW>
    if n == 3, parms1(n) = parms1(n) + var;      end
    sum = 0;
    for k = 1:19
        sum = sum + (celle(x1(k), parms1) - y1(k))^2;
    end
    if sum >= sum0
        parms1 = parmb1;
        sum = sum0;
    else
        tick = 0;
        change = change + 1;
    end
    tick = tick + 1;
    lim = lim + 1;
    sum %#ok
end
```

opt3.m

```matlab
% opt3.m -- fits the landline-deadoption correction detere() on top of tele()
max = 10000; maxl = 1000000;
lim = 0; tick = 0; change = 0;
sum = 0; sum0 = -1;
for j = 1:47
    sum = sum + (tele(x(j), parms) + detere(x(j), parms2) - y(j))^2;
end
while (sum > 10^(-5)) || (sum - sum0 < 10^(-9))
    if (tick >= max) || (lim >= maxl)
        disp('Limit Reached')
        change %#ok
        break
    end
    sum0 = sum;
    parmb2 = parms2;   % backup ("parms2 = parms2" in the original was a bug)
    p = sqrt(sum)/10;
    var = p*(2*rand(1,1) - 1);
    n = floor(1 + 3*rand(1,1));
    if n == 1, parms2(n) = parms2(n) + var;      end
    if n == 2, parms2(n) = parms2(n) + var/1000; end %#ok<*AGROW>
    if n == 3, parms2(n) = parms2(n) + var;      end
    sum = 0;
    for k = 1:47
        % the original indexed detere with x(j) here; x(k) is intended
        sum = sum + (tele(x(k), parms) + detere(x(k), parms2) - y(k))^2;
    end
    if sum >= sum0
        parms2 = parmb2;
        sum = sum0;
    else
        tick = 0;
        change = change + 1;
        sum %#ok
    end
    tick = tick + 1;
    lim = lim + 1;
end
```

celle.m

```matlab
function C = celle(x, parms)
% cell phone adoption function: zero before t = 29, logistic afterwards
if x < 29
    C = 0;
else
    C = parms(1)./(1 + exp(-parms(2).*x + parms(3)));
end
```

parms0.m

```matlab
[1.126346841186298, 1.092443438718643, 0.042304296551599, 0.158692053851258, 0.010864436027023]
```

parms1.m

```matlab
[0.929052472290896, 0.295494950200304, -42.973887435250440]
```

Code for the optimization of energy in the pseudo-USA:

```matlab
function C = Cost(par, Hm, Cl, n)
% total yearly energy cost of a plan par (landline fractions by household
% size) when a landline can only serve n members
T = sum(par.*Hm);                                   % landlines, Eq. (5)
Cn = sum((1:7).*(1 - par).*Hm);                     % cell phones, Eq. (6)
Cn = Cn + sum(par(n+1:end).*Hm(n+1:end).*(1:7-n));  % oversized households
C = Cl*T + cellenergynew(Cn, 2009);
```

```matlab
function [ce] = cellenergynew(n, t)
% energy consumed by n cell phones in use at time t; the parenthesization
% was garbled in the original and has been reconstructed to match the
% exponential form of R(t)
ce = n*(8.176) + n*(.021)*((-0.07639)*exp(-(t - 1993)/17.1572) + 0.1139);
```

```matlab
% driver: random coordinate search over the landline fractions par
C = 10^20; par = rand(1,7); best = par;
changes = 0; Cl = 49.9880; Cc = 11.096; r = 7;
tic
for k = 1:100000
    n = 1 + floor(7*rand);
    par(n) = rand;
    cost = Cost(par, Hm, Cl, r);
    if cost <= C(end)
        best = par;
        C = [C cost];
        changes = changes + 1;
    else
        par = best;
    end
end
toc
plot(C(2:end))
```

\ No newline at end of file
diff --git a/MCM/2009/C/2009-ICM-ComA/2009-ICM-ComA.md b/MCM/2009/C/2009-ICM-ComA/2009-ICM-ComA.md
new file mode 100644
index 0000000000000000000000000000000000000000..820800bd90488a57f7da0e296eb4089dfc8c66b4
--- /dev/null
+++ b/MCM/2009/C/2009-ICM-ComA/2009-ICM-ComA.md
@@ -0,0 +1,62 @@

# Authors' Commentary: The Outstanding Coral Reef Papers

Melissa Garren

Center for Marine Biodiversity and Conservation

Scripps Institution of Oceanography

University of California-San Diego

La Jolla, CA

Joseph Myers

Dept.
of Mathematical Sciences + +U.S. Military Academy + +West Point, NY + +# Introduction + +According to the Food and Agriculture Organization of the United Nations, aquaculture is the fastest growing sector of animal-based food production for human consumption. As the global population increases, pressure on coastal ecosystems and the need to produce food also grow. More than half of the world's population lives within $200\mathrm{km}$ (120 mi) of a coast, and many natural fisheries are already fished at or over capacity. Within this context, the influence of aquaculture on coastal ecosystems is a topic of social, environmental and scientific concern and the subject for this year's problem in the Interdisciplinary Contest in Modeling (ICM) $^{\text{®}}$ . + +Coral reefs are delicate and valuable ecosystems that only thrive in shallow, tropical, nutrient-poor waters. They cover less than $1\%$ of the ocean's floor but harbor $25\%$ of marine biodiversity. Many people depend on these ecosystems for food, trade, tourism, shoreline protection, and new sources of medicinal compounds. The majority of coral reefs on this planet grow along inhabited tropical coastlines of developing countries. Thus, as an ever-growing number of aquaculture facilities are installed in coastal waters, the interactions between coral reef ecosystems and fish farms are of particular interest. + +There are many forms of aquaculture practices, but the more environmentally compatible versions tend to be more costly to set up and operate than their + +less compatible counterparts. Developing methods that are both cost-effective and have a low impact on the surrounding ecosystem is an important issue and a complex and timely challenge. A common method is simply to raise one species of carnivorous fish in pens set directly in coastal waters. 
Unfortunately, this method causes several environmental problems:

- There is no real barrier between the captive and wild populations, so any disease that occurs in the densely packed pens will flow directly into contact with wild populations.
- No filtration of effluent exists—all excess feed, fish feces, and microbial populations mix directly with natural waters.
- Living organisms can only use $10 - 20\%$ of the energy they consume, so the other $80 - 90\%$ goes to waste—raising an organism higher up the food chain (a carnivore) means that several rounds of $80 - 90\%$ loss occurred simply to make the food that the target species will eat.

These practices are currently happening on and adjacent to many coral reefs. A growing body of scientific literature is demonstrating that these fish farms have a significant negative impact on the corals, and thus major improvements are needed to attain a viable industry and a sustainable coral reef ecosystem.

# Formulation and Intent of the Problem

The goal of this year's ICM problem was for student teams to tackle the ecological and technological challenges of improving such practices within the tractable confines of one specific case study of milkfish (*Chanos chanos*) aquaculture directly next to coral reefs in Bolinao, Philippines. There are many possible approaches to improving the current situation, but we asked teams specifically to come up with a polyculture scenario that would improve water quality sufficiently for corals to recolonize the areas close to the fish pens where they currently cannot survive. By adding more than one species to the industry, energy inputs can be reduced by growing food for the milkfish locally, and water quality can be improved by filter feeders and algae that absorb excess nutrients, without requiring major gear or technology shifts.
This particular method of more environmentally responsible aquaculture also emphasizes the ecological links between different species and trophic levels. There are a number of potentially negative impacts associated with introducing new species into an ecosystem, so teams were also asked to evaluate the potential risks associated with their polyculture solution. + +Teams were first asked to model the original, healthy coral reef ecosystem before the introduction of fish farms. For the purpose of modeling, the complex ecosystem was simplified to one member from each major trophic and phylogenetic guild. The purpose was to identify how the natural system's organisms interact to control water quality in the area. + +The second task was to model the current system with the monoculture of milkfish present. Since the natural milkfish food supply was removed by placing the animals in pens, feed must be purchased and added to the system. The idea was to see the effect of exogenous feeding on water quality. They compared the results of their model to actual observed water quality data from Bolinao. Next, teams were asked to model a remediation scenario. They chose the species they wanted to include in their polyculture system and modeled the effects on water quality, harvest, and economic value. They were asked to discuss the harvesting of each species and what parameters they would use to determine the value of the harvest. The last modeling challenge was to maximize the value of the total harvest while maintaining sufficient water quality levels for corals to grow. + +The end result of modeling was to write recommendations to the Pacific Marine Fisheries Council regarding the management of the Bolinao milkfish aquaculture industry. 
This is where the teams evaluated the ecological pros and cons of the species chosen for their particular polyculture system, the economic trade-offs of improving water quality, and how long the remediation of Bolinao coral reefs can be expected to take. + +A major goal of this contest problem was for teams to relate the modeling choices they made to realistic ecological and biological processes. Teams were asked to use realistic parameters for their models based on actual ecological and physiological data and to justify any assumptions made. Fundamental understandings of primary production, trophic interactions, and energy transfer were essential for building and critiquing their own models. + +This year's ICM problem is based on research being done by the World Bank and Global Environment Facility's Coral Disease Working Group. This international group of scientists has been working to understand the ecological consequences of this fish farm industry on coral health. The first phase of the project was to identify some of the mechanisms by which fish pens are negatively impacting corals. This is the final year of phase one, and much progress has been made. As we enter phase two of the project, we move forward with a goal of testing and implementing alternative methods of farming in this area. Polyculture is one of the alternatives currently being discussed. + +# References + +Hinrichsen, D. 1998. Coastal Waters of the World: Trends, Threats, and Strategies. Washington, DC: Island Press. +United Nations Food and Agricultural Organization (FAO). 2006. State of World Fisheries and Aquaculture 2006. http://www.fao.org/docrep/009/a0699e/A0699E00.htm. + +# About the Authors + +![](images/3653abb96bfb0ce09a4c04b26186ee426a0a30c746162e6ad6f776789e612461.jpg) + +Melissa Garren is currently a Ph.D. candidate in marine biology at Scripps Institution of Oceanography in La Jolla, CA. She earned her B.S. in molecular biology from Yale University and her M.S. 
in marine biology from Scripps Institution of Oceanography. Her research focuses on the ecological response of microbes to organic coastal pollution, particularly in coral reef environments, with the aim of understanding the effects on coral health and disease. + +Joe Myers has served for two decades in the Dept. of Mathematical Sciences at the United States Military Academy. He holds degrees in Applied Mathematics and other disciplines and is a licensed Professional Engineer. He currently serves as a Professor, having directed freshman calculus, sophomore multivariable calculus, the electives program, and the research program. He has been involved in several major initiatives to improve teaching and learning, including building interdisciplinary activities and programs under the NSF-sponsored Project Intermath; integrating technology and student laptop computers into the classroom; and weaving modeling, history, and writing threads into the mathematics curriculum. He enjoys modeling and problem solving, has posed and guided the research of dozens of math majors, and has been involved in several research projects with the Army Research Laboratory. \ No newline at end of file diff --git a/MCM/2009/C/2009-ICM-ComJ/2009-ICM-ComJ.md b/MCM/2009/C/2009-ICM-ComJ/2009-ICM-ComJ.md new file mode 100644 index 0000000000000000000000000000000000000000..504e5f29c468561b31ac13757bae9f018c384c0e --- /dev/null +++ b/MCM/2009/C/2009-ICM-ComJ/2009-ICM-ComJ.md @@ -0,0 +1,114 @@ +# Judges' Commentary: The Outstanding Coral Reef Papers + +Sheila Miller + +Dept. of Mathematical Sciences + +U.S. Military Academy + +West Point, NY + +Melissa Garren + +Center for Marine Biodiversity and Conservation + +Scripps Institution of Oceanography + +University of California-San Diego + +La Jolla, CA + +Rodney Sturdivant + +Dept. of Mathematical Sciences + +U.S. 
Military Academy

West Point, NY

Rodney.Sturdivant@usma.edu

# Introduction

The Interdisciplinary Contest in Modeling (ICM)® is an opportunity for teams of students to tackle challenging real-world problems that require a wide breadth of understanding in multiple academic subjects. This year's problem required a particularly deep understanding of ecology to model a solution effectively. With aquaculture facilities rapidly being installed in or adjacent to many sensitive coastal ecosystems, research into sustainable culturing methods is an active area of investigation. Seven judges gathered in late March to select the most successful entries of this challenging competition from an impressive set of submissions.

# The Problem

The primary goal of this year's ICM was to develop an aquaculture scenario that incorporated species from multiple trophic levels to reduce the level of effluent leaving the fish pens, for a specific case study in the Philippines. These fish farms are adjacent to coral reefs, so the target was to improve water quality such that corals could thrive in the area while an economically viable aquaculture industry was also maintained. The main tasks expected of the teams were as follows:

1. Model the original Bolinao coral reef ecosystem before the introduction of fish farms.
2. Model the current Bolinao milkfish monoculture.
3. Model the remediation of Bolinao via polyculture.
4. Maximize the value of the total harvest.
5. Call to action.

Overall, the judges were impressed by both the strength of many of the individual submissions and the variety of approaches that teams used to address the questions posed by the ICM problem.

# Judges' Criteria

To ensure that the individual judges assessed submissions on the same criteria, we developed a rubric. The framework used to evaluate submissions is described below.
- Executive Summary: It was important that teams succinctly and clearly explained the highlights of their submissions. These executive summaries needed to include the modeling approach(es) used both for the current monoculture and for remediation using polyculture. Further, the summary needed to answer the most pressing questions posed in the problem statement, namely recommendations for remediation and the impact on water quality and on optimizing the harvest. Truly Outstanding papers were those that communicated their approach and recommendations in well-connected and concise prose.
- Domain Knowledge and Science: The problem this year was particularly challenging for teams in terms of the science.

- To address the requirements effectively, teams needed first to establish an ecological frame of reference. Many teams were able to do this reasonably well; teams that excelled clearly did a great deal of research. Often, what distinguished the top teams was the ability not just to describe the ecosystem in a single section of the paper, but to integrate this domain knowledge throughout the modeling process.

- A second important facet of the problem was the ability to understand the issues that impact water quality. Many teams created reasonable models of the species and their interactions, but very few effectively modeled the water quality.

- Modeling and Assumptions: The most popular models used were differential equations, usually linear for the simple cases and then expanded to include nonlinear terms. Simulation was also a popular approach to the problem. Often the models appeared appropriate but neglected any discussion of important assumptions. Additionally, many papers lacked a reasonable discussion of model development, instead presenting a series of equations and parameter values without support.
Finally, the very best papers not only formulated the models well, but were able to use the models to produce meaningful results to address the problem and to make recommendations. +- Solution (Optimization): Perhaps the most distinct difference between the best papers and others was the ability to utilize their models to develop an actual solution to the problems. Many teams failed to address the most important portions of the problem in any substantive way—what should be done to remediate Bolinao and how to balance the water quality while maximizing harvesting. As a result, the judges put additional emphasis on the actual solution presented, in addition to the modeling approach. +- Analysis/Reflection: Successful papers utilized the models developed in early sections of the paper to draw conclusions about the important issues in addressing problems with the Bolinao ecosystem. For example, the important parameters were identified in terms of their impact on the water quality and the harvest available. In the best papers, trade-offs were discussed and, in truly exceptional cases, some sensitivity analysis conducted to identify potential issues with the solutions presented. +- Communication: The challenges of the modeling in this problem may have contributed to the difficulty many teams had in clearly explaining their solutions. Papers that were clearly exposited distinguished themselves significantly, emphasizing that it is not only good science that is important, but also the presentation of the ideas. + +# Discussion of the Outstanding Papers + +The two Outstanding papers each had features that distinguished them from the other submissions. Working under the time constraint, both teams were impressive in their ability to research the ecological issues, propose reasonable models, and to present their work in a clear and readable manner. 
This year, in particular, the judges felt that the Outstanding papers each demonstrated particular strengths in one of the important dimensions discussed in the previous section. No submission was able to dominate every area, but these two teams were clearly superior in different ways.

# China University of Mining and Technology

The China University of Mining and Technology submission was notable for the impressive array of modeling techniques utilized in attacking the problems. There were other papers with a similar level of modeling, but this group not only described the modeling process clearly but connected the models coherently to the problem at hand. As with many of the teams, the principal models used were differential equations (Volterra models). The team also used the Analytic Hierarchy Process (AHP), as well as nonlinear optimization, to improve their models and to address the later requirements of the problem. They propose strengthening the "middle strata of the foodweb" by introducing herbivorous species to the polyculture. They also propose a strategy for harvesting various species while still satisfying the constraint of maintaining good water quality in Bolinao. While extremely strong on the modeling, the paper could have been further improved with more depth on the ecological issues and with stronger overall writing.

# U.S. Military Academy

The paper from the U.S. Military Academy included perhaps the clearest understanding and presentation of the ecological problem and issues among all submissions. The paper was extremely well written and researched. Unlike many teams, the group chose discrete models (difference equations) as the primary tool for their analysis and then employed simulations to help with the optimization tasks. The team did an exceptional job of showing how their model outputs support the move from monoculture to polyculture: they added blue mussels (mollusks) to the system to show the positive effects of such a change.
They also proposed an optimal harvesting strategy involving multiple species. This paper could have been strengthened by adding detail about the models and the modeling process.

# Why Some Other Teams Weren't Outstanding

In addition to the two Outstanding papers, the judges noted several other papers of comparable merit in terms of modeling effort that were nevertheless excluded from awards because of problems with documentation. The issue was not the fact that material from Websites or books was included; within reason, properly cited quotations are appropriate. Rather, some teams used material taken directly from such sources (sometimes as much as a page or more of text) in place of their own ideas, failed to mark a quoted passage as a quotation, or both.

# Conclusion

The judges extend their congratulations to all who participated in the contest. It is a pleasure to see the variety of approaches taken by the different teams; some of these were novel and interesting. The number of excellent papers made the judging both enjoyable and difficult. The problem this year was extremely challenging, and the teams' ability both to research and to model in a short period of time was impressive.

Two facets of this year's ICM are worth noting:

- The importance of understanding the underlying science in formulating mathematical models. In the practice of modeling, assumptions should be carefully thought out and checked and, whenever possible, experts should be consulted.
- How critical communication skills are to the analyst. A great mathematical model is not likely to be used if it is not clearly and concisely explained.

# Recommendations for Future Participants

- Not even ingenious solutions are a substitute for clear exposition.
- Ensure that the assumptions you make are clear to the reader, and address them in your conclusions and recommendations.
- Address all aspects of the problem that are asked.
- Between two equally clear explanations, the shorter one is better.
+- Properly citing sources is critical. Judges notice plagiarized material and disqualify papers that contain it; cite as you go, not at the end. +- The recommendations and sensitivity analysis are often as important as the model itself. Frequently, it is better to have a well-analyzed model that accounts for slightly less than a comprehensive but untested one. +- Team members should work to integrate their final submissions. Your paper should read as though it has only one author. + +# About the Authors + +Sheila Miller is an Assistant Professor and Davies Fellow at the U.S. Military Academy in West Point, NY. She holds a Ph.D. in Mathematics from the University of Colorado, Boulder and does research in set theory, sea turtle population modeling, value of information, and change detection in social networks. + +![](images/493dd1fe390a7cd151b718b5ff64eefc5c28658ff7b84117db84a70d11223ae5.jpg) + +![](images/94eba33a6660b1cc9b6dabcf4cbf7f300eff96a8dedfe447249c3a74b6403208.jpg) + +Melissa Garren is currently a Ph.D. candidate in marine biology at Scripps Institution of Oceanography in La Jolla, CA. She earned her B.S. in molecular biology from Yale University and her M.S. in marine biology from Scripps Institution of Oceanography. Her research focuses on the ecological response of microbes to organic coastal pollution, particularly in coral reef environments, with the aim of understanding the effects on coral health and disease. + +Rod Sturdivant is an Associate Professor at the U.S. Military Academy in West Point, NY. He earned a Ph.D. in biostatistics at the University of Massachusetts-Amherst and is currently program director for the probability and statistics course at West Point. He is also founder and director of the Center for Data Analysis and Statistics within the Dept. of Mathematical Sciences. His research interests are largely in applied statistics with an emphasis on hierarchical logistic regression models. 
![](images/22cd78b6dcb86ba67d6c4f4a7edc598b848c5e8b86a9ac25c43b617a85188537.jpg)
\ No newline at end of file
diff --git a/MCM/2009/C/4400/4400.md b/MCM/2009/C/4400/4400.md
new file mode 100644
index 0000000000000000000000000000000000000000..d63fc00ae02b8258c60b11c043d75c28ee1ee384
--- /dev/null
+++ b/MCM/2009/C/4400/4400.md
@@ -0,0 +1,715 @@

# 4400

Problem Chosen

C

# Abstract

After a deep analysis of the problem, we conclude that it is a re-balancing human-influenced ecosystems problem based on the food chain.

In Task I, based on the Volterra model, we establish a Bait-Predator Model for three biological populations and specify the steady-state numbers of the three populations. Then, based on the Analytic Hierarchy Process and a competition model, we obtain the ratios of the different species in Population II, predict that the steady-state water quality is not high, and make the water quality satisfactory by adjusting the numbers of the six species individually.

In Task II, when milkfish farming suppresses the other animal species, we set up a Logistic Model and predict that the steady-state water quality is very poor, the same as that in the fish pen, and insufficient for the continued healthy growth of coral species. When the other species are not totally suppressed, we use the improved Bait-Predator Model for three biological populations to simulate the water quality of Bolinao, matching it to the currently observed water quality; we obtain the predicted numbers of the populations and discuss some changes to the Bait-Predator Model aimed at bringing the population numbers into closer agreement with observations.
In Task III, we establish a Polyculture Model that reflects a polyculture system made up of an interdependent set of species: we introduce mussels, grow seaweed on the sides of the pens, and obtain the steady-state numbers of the populations and the outputs of our model.

In Tasks IV and V, we distinguish the values of the edible biomass of the different species and define the total value of the edible biomass as the sum of the values of each species harvested, minus the cost of milkfish feed. Under the constraint of acceptable water quality, we build a Nonlinear Equilibrium Optimization Model and obtain the optimal strategy and harvest.

In Task VI, we put forward a strategy to improve the water quality in Bolinao. Taking the ratio between feeding cost and net income as an index, the index value of our model is smaller than that of the Bolinao area, which demonstrates the advantage of the strategy. We also analyze the polyculture system in ecological terms.

# Re-Balancing Human-Influenced Ecosystems

# Introduction

Consider an area in the Philippines located in a narrow channel between Luzon Island and Santiago Island in Bolinao[1], Pangasinan, that used to be filled with coral reef and supported a wide range of species. The once plentiful biodiversity of the area has been dramatically reduced since the introduction of commercial milkfish (Chanos chanos) farming in the mid-1990s. It is now mostly muddy bottom, the once-living corals are long since buried, and few wild fish remain due to overfishing and loss of habitat. While it is important to provide enough food for the human inhabitants of the area, it is equally important to find innovative ways of doing so that allow the natural ecosystem to continue thriving; that is, to establish a desirable polyculture system that could replace the current milkfish monoculture.

To improve the situation in Bolinao, we need to establish a practicable polyculture system, and we build one through gradual refinement.
So our goal is clear:

- Model the original Bolinao coral reef ecosystem before the introduction of fish farms.
- Model the current Bolinao milkfish monoculture.
- Model the remediation of Bolinao via polyculture.
- Discuss the outputs and economic values of the species.
- Write an information paper to the director of the Pacific Marine Fisheries Council summarizing the relationship between biodiversity and water quality for coral growth.

# Our approach is:

- Deeply analyze the data in the problem and gradually establish the Coral Reef Foodweb Model.
- With the data available in the problem as evaluation criteria, assess the water quality based on the elements in the sediment.
- Establish models and interpret the actual situation with data, with the purpose of improving water quality.
- Discuss further on the basis of our work.

# Solutions

# Task 1

For the Coral Reef Foodweb Model, we assume all the species grow in the same fish pen, and we first divide the species into three populations: one algae species (Population I); one herbivorous fish, one mollusc species, one crustacean species, and one echinoderm species (Population II); and the sole predator species, i.e., milkfish (Population III). The interrelationship between them is presented in Figure 1-1:

![](images/03e80e2745c2dcf11912f7d3a3f78c83b50836af8dfec840af559448f6ef296e.jpg)
Figure 1-1 Interrelationship between the Three Populations

On this basis, we can establish the Volterra Bait-Predator Model[2] for three populations. Assume the numbers of the three populations are $x_{1}(t), x_{2}(t), x_{3}(t)$ respectively. If restrictions of natural resources on the algae are not taken into consideration, the algae follow the exponential growth law when growing independently; assume the relative growth rate is $r_{1}$, so $\dot{x}_{1}(t) = r_{1}x_{1}$. But the species of Population II feeding on the algae decrease the growth rate of the algae.
So the model of the algae is:

$$
\dot {x} _ {1} (t) = x _ {1} \left(r _ {1} - \lambda_ {1} x _ {2}\right) \tag {1-1}
$$

The proportionality coefficient $\lambda_{1}$ reflects the feeding capability of the species in Population II on the algae. Assume the death rate of the species in Population II is $r_{2}$ when they exist independently, so that $\dot{x}_2(t) = -r_2x_2$; based on the foodweb, we then conclude:

$$
\dot {x} _ {2} (t) = x _ {2} \left(- r _ {2} + \lambda_ {2} x _ {1}\right) \tag {1-2}
$$

The proportionality coefficient $\lambda_{2}$ reflects the support capability of the algae for Population II. Population II in turn provides food for the milkfish, so the existence of the milkfish reduces the growth rate of the species in Population II; the right side of Equation (1-2) should subtract this blocking effect, and we obtain the model of Population II:

$$
\dot {x} _ {2} (t) = x _ {2} \left(- r _ {2} + \lambda_ {2} x _ {1} - \mu x _ {3}\right) \tag {1-3}
$$

Likewise, the model of the milkfish is:

$$
\dot {x} _ {3} (t) = x _ {3} \left(- r _ {3} + \lambda_ {3} x _ {2}\right) \tag {1-4}
$$

At last, we obtain the interdependent and mutually-restricting mathematical model of the three populations:

$$
\left\{ \begin{array}{c} \dot {x} _ {1} (t) = x _ {1} \left(r _ {1} - \lambda_ {1} x _ {2}\right) \\ \dot {x} _ {2} (t) = x _ {2} \left(- r _ {2} + \lambda_ {2} x _ {1} - \mu x _ {3}\right) \\ \dot {x} _ {3} (t) = x _ {3} \left(- r _ {3} + \lambda_ {3} x _ {2}\right) \end{array} \right. \tag {1-5}
$$

Since the system of differential equations (1-5) has no analytic solution, we use MATLAB to obtain a numerical solution and infer the structure of the solution from it. Ecologists point out that periodic solutions are not observed in most balanced ecosystems; such systems tend to a certain balanced state, that is, each balanced ecosystem has an equilibrium point.
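The numerical integration of (1-5) was done in MATLAB by the team. A minimal Python sketch of the same kind of integration, with a hand-rolled fourth-order Runge-Kutta step, is shown below; the parameter values and initial numbers are purely illustrative assumptions, not the paper's fitted values.

```python
# Integrate the three-population Volterra system (1-5) with a classic
# fourth-order Runge-Kutta step. All parameter values and initial
# numbers below are illustrative assumptions, not the paper's values.

def volterra(state, r1=1.0, lam1=0.02, r2=0.5, lam2=0.01, mu=0.01,
             r3=0.6, lam3=0.012):
    x1, x2, x3 = state
    return (
        x1 * (r1 - lam1 * x2),             # algae (Population I)
        x2 * (-r2 + lam2 * x1 - mu * x3),  # grazers (Population II)
        x3 * (-r3 + lam3 * x2),            # milkfish (Population III)
    )

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def simulate(state=(60.0, 50.0, 20.0), dt=0.01, steps=2000):
    trajectory = [state]
    for _ in range(steps):
        state = rk4_step(volterra, state, dt)
        trajectory.append(state)
    return trajectory

traj = simulate()
print(traj[-1])  # state of the three populations at the final time step
```

Any standard ODE integrator would do equally well; RK4 is used here only to keep the sketch dependency-free.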
In addition, some ecologists think that the structure of long-existing, periodically-changing balanced ecosystems in nature is stable; that is to say, if the system diverges from its former periodic track because of a disturbance, an internal control mechanism will restore it. However, the periodically-changing state described by the Volterra model is not structurally stable: even subtle adjustments to the parameters will change the periodic solution. So we improve the model, still writing the numbers of the three populations as $x_{1}(t), x_{2}(t), x_{3}(t)$, but now letting the evolution of the numbers follow the logistic law. When the algae grow independently, the model is:

$$
\dot {x} _ {1} (t) = x _ {1} r _ {1} \left(1 - \frac {x _ {1}}{N _ {1}}\right) \tag {1-6}
$$

In Equation (1-6), $r_1$ is the intrinsic growth rate of the algae, and $N_1$ is the maximum number of algae allowed by the environmental resources. The algae provide food for the species of Population II, so the model of the algae becomes:

$$
\dot {x} _ {1} (t) = r _ {1} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1} \frac {x _ {2}}{N _ {2}}\right) \tag {1-7}
$$

In Equation (1-7), $N_{2}$ is the maximum capacity of the species in Population II, and $\sigma_{1}$ is the quantity of algae (relative to $N_{1}$) eaten by a unit quantity of Population II (relative to $N_{2}$).
Without the algae, the species in Population II will perish; set their death rate as $r_2$. Then, when they exist independently, we have:

$$
\dot {x} _ {2} (t) = - r _ {2} x _ {2} \tag {1-8}
$$

The algae provide food for Population II, so a growth-promoting term should be added to the right side of Equation (1-8); the growth of the species in Population II is also influenced by internal blocking action, so we have:

$$
\dot {x} _ {2} (t) = r _ {2} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} \frac {x _ {1}}{N _ {1}}\right) \tag {1-9}
$$

In Equation (1-9), $\sigma_{2}$ is similar to the $\sigma_{1}$ appearing in (1-7). Adding the predation of the milkfish, we get the model of the species in Population II:

$$
\dot {x} _ {2} (t) = r _ {2} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} \frac {x _ {1}}{N _ {1}} - \sigma_ {3} \frac {x _ {3}}{N _ {3}}\right) \tag {1-10}
$$

In Equation (1-10), $\sigma_3$ is similar to $\sigma_1$ and $\sigma_2$. Without the species in Population II, the milkfish will disappear; set their death rate as $r_3$. Then, when the milkfish exist independently, we have $\dot{x}_3(t) = -r_3x_3$.

The species in Population II provide food for the milkfish, and the growth of the milkfish is also restricted by internal blocking action.
Here the model is:

$$
\dot {x} _ {3} (t) = r _ {3} x _ {3} \left(- 1 - \frac {x _ {3}}{N _ {3}} + \sigma_ {4} \frac {x _ {2}}{N _ {2}}\right) \tag {1-11}
$$

Simultaneous Equations (1-7), (1-10), and (1-11) constitute the interdependent mathematical model of the three populations:

$$
\left\{ \begin{array}{c} \dot {x} _ {1} (t) = r _ {1} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1} \frac {x _ {2}}{N _ {2}}\right) \\ \dot {x} _ {2} (t) = r _ {2} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} \frac {x _ {1}}{N _ {1}} - \sigma_ {3} \frac {x _ {3}}{N _ {3}}\right) \\ \dot {x} _ {3} (t) = r _ {3} x _ {3} \left(- 1 - \frac {x _ {3}}{N _ {3}} + \sigma_ {4} \frac {x _ {2}}{N _ {2}}\right) \end{array} \right. \tag {1-12}
$$

So far, we have established the model of the Bolinao coral reef ecosystem before farming was put into practice. After consulting relevant materials, we obtain the values of some parameters in the model, and through nonlinear data fitting of the original data of the local three populations[2][3][4], we get the parameter values:

$$
\sigma_{1} = 0.6, \sigma_{2} = 0.5, \sigma_{3} = 0.5, \sigma_{4} = 2, r_{1} = 1, r_{2} = 0.5, r_{3} = 0.6, N_{1} = 150000,
$$

$N_{2} = 30000$, $N_{3} = 2200$. According to the volume of local fish pens and relevant materials, we get the original numbers of the three populations: $x_{1}(0) = 121500$, $x_{2}(0) = 27000$, $x_{3}(0) = 2000$. Then we use MATLAB to simulate the model; the changes are illustrated in Figure 1-2:

![](images/051c054366642fca4a4331eecc6bb51c59b4c0172635c9e1922854cf51bd9be2.jpg)
Figure 1-2 Numerical Solutions for $x_{1}(t), x_{2}(t), x_{3}(t)$

From Figure 1-2 we can see that, with the passage of time, $x_{1}(t), x_{2}(t)$, and $x_{3}(t)$ tend to steady values; we use simulation to obtain the numerical solution.
![](images/a660d789505cca2f04a2951f5b56bb88b04945d52de737899bc70b1e4bf33843.jpg)
Figure 1-3 Simulation Diagram

From the numerical solution, we can approximately read off the stable value (69027, 27015, 1760.3); that is, the steady-state numbers of the three populations are 69027, 27015, and 1760.3. Here the number 27015 for Population II is made up of the herbivorous fish, molluscs, crustaceans, and echinoderms.

Now we will determine the numbers of the individual species in Population II, which stay at the same trophic level, coexisting and mutually competing. Here we apply expert-system and group-decision theory to determine the weights of the species in Population II. So-called group decision making refers to the multi-attribute decision problem of selecting the optimal solution from many alternatives or of sorting the available alternatives. Assume the finite solution set is $Y = \{y_{1}, y_{2}, \dots, y_{n}\}$, where $n \geq 2$ and $y_{i}$ is the $i$th solution, $i = 1, 2, \dots, n$. The attribute set is $C = \{c_{1}, c_{2}, \dots, c_{q}\}$, where $q \geq 2$ and $c_{j}$ is the $j$th attribute, $j = 1, 2, \dots, q$. The decision-expert set is $E = \{e_{1}, e_{2}, \dots, e_{m}\}$, where $m \geq 2$ and $e_{k}$ is the $k$th decision-making expert, $k = 1, 2, \dots, m$. Finally, $S = \{s_{1}, s_{2}, \dots, s_{g}\}$ is a pre-defined evaluation scale with an odd number $g$ of elements.
Decision-making expert $e_{k}$ selects one element from $S$ as the evaluation value of solution $y_{i}$ under attribute $c_{j}$; denote it $p_{ij}^{k}$, $p_{ij}^{k} \in S$. Then $p^{k} = (p_{ij}^{k})_{n \times q}$ is the judgment matrix of expert $e_k$ on all the solutions under all the attributes. The attribute weight vector given by expert $e_k$ is $W^k = (w_1^k, w_2^k, \dots, w_q^k)^T$, where $w_j^k$ is the weight of attribute $c_j$ selected by expert $e_k$ from set $S$, $w_j^k \in S$. This theory can be put into practice through the Analytic Hierarchy Process.

The Analytic Hierarchy Process (AHP) was first put forward by the American operations researcher T.L. Saaty in the 1970s. AHP is a decision-analysis method combining qualitative and quantitative techniques. Using it, decision makers can decompose a complex problem into levels and factors, compare and compute the weights of different alternatives, and so provide a basis for choosing the optimum alternative.

Basic principle: according to the nature and purpose of the problem, AHP first decomposes it into levels, constructing a multi-level structural model that ranges from the lowest level (alternatives, measures, etc.) to the highest level (the overall objective).

Based on AHP, we can establish the stratification diagram shown in Figure 1-4:

![](images/2c8990c61cbb48d32381cc555e9710dab421d4c5607ddfcefbd2141f8335434d.jpg)
Figure 1-4 AHP Stratification Diagram

Finally, we make a consistency check of the result, finding that the consistency ratio of each expert's judgment matrix is below 0.1, so the consistency of each judgment matrix is acceptable. We then figure out the weight of each species in Population II, as shown in Table 1-1:

Table 1-1 Weight of Each Species Measured by AHP
| Alternatives | Weight |
| --- | --- |
| Herbivorous fish | 0.2087 |
| Crustaceans | 0.2334 |
| Molluscs | 0.3140 |
| Echinoderm | 0.2438 |
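The paper does not publish the experts' judgment matrices, so the exact weights of Table 1-1 cannot be recomputed here. The sketch below shows the standard AHP computation (geometric-mean approximation of the principal eigenvector, plus Saaty's consistency ratio) on a hypothetical 4×4 pairwise-comparison matrix.

```python
# Standard AHP weight extraction. The pairwise-comparison matrix A is
# hypothetical (the experts' actual judgments are not published); it is
# chosen so molluscs are favored, qualitatively matching Table 1-1.
from math import prod

def ahp_weights(matrix):
    """Geometric-mean approximation to the principal eigenvector."""
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

def consistency_ratio(matrix, weights):
    """Saaty's CR; values below 0.1 are conventionally acceptable."""
    n = len(matrix)
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam_max = sum(aw[i] / weights[i] for i in range(n)) / n  # principal-eigenvalue estimate
    ci = (lam_max - n) / (n - 1)                             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                      # Saaty's random index
    return ci / ri

A = [
    [1.0, 1.0, 0.5, 1.0],  # herbivorous fish
    [1.0, 1.0, 0.5, 1.0],  # crustaceans
    [2.0, 2.0, 1.0, 2.0],  # molluscs
    [1.0, 1.0, 0.5, 1.0],  # echinoderms
]
w = ahp_weights(A)
print([round(x, 4) for x in w])          # molluscs get the largest weight
print(round(consistency_ratio(A, w), 4))
```

With this (perfectly consistent) hypothetical matrix, molluscs receive the largest weight, as they do in Table 1-1.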
Here we adopt a population competition model to verify the weights of the species in Population II:

$$
\left\{ \begin{array}{l} \frac {d N _ {1}}{d t} = N _ {1} \left(\varepsilon_ {1} + \gamma_ {1} N _ {2}\right) \\ \frac {d N _ {2}}{d t} = N _ {2} \left(\varepsilon_ {2} + \gamma_ {2} N _ {1}\right) \end{array} \right. \tag {1-13}
$$

Here $\varepsilon_{i}$ is the birth rate and $\gamma_{i}$ is the coefficient of species interaction.

According to (1-13), we find that the ratios between the different species are almost consistent with those obtained by AHP, which supports the correctness of our method.

In this way, we find that herbivorous fish, crustaceans, molluscs, and echinoderms can coexist while competing. So the number of each species can be figured out from the steady-state data of the previous models, as shown in Table 1-2:

Table 1-2 The Number of Each Species in Steady State
| Organism | Number (per pen) |
| --- | --- |
| The algae | 69027 |
| Herbivorous fish | 5638 |
| Crustaceans | 6305 |
| Molluscs | 8483 |
| Echinoderm | 6589 |
| Milkfish | 1760.3 |

Now we use the model to check the water quality and determine whether it is suitable for the continued healthy growth of the coral.

First, we calculate the current concentration of chlorophyll in the fish pen. From the relevant literature, the regression equation between the algae and chlorophyll is

$$
N = 0.7568C + 1.2785,
$$

where the unit of $N$ is $10^{4}/\mathrm{mL}$ and the unit of $C$ is $\mathrm{ug/L}$. Since $N = 6.90273$, we get $C = 7.43155$.

This chlorophyll concentration is far beyond $0.25\,\mathrm{ug/L}$, the concentration suitable for the growth of coral.

Next, with the data given in the problem, we compute the mass of organic particles in the fish pen and then the mass of each element:

- From the data in the problem, the dry weight of echinoderm in the pen is $45464.1\,\mathrm{g}$ and the dry weight of milkfish excrement is $425992.6 \sim 867827.9\,\mathrm{mg}$; the total dry weight of excrement in the pen is then $948829.75 \sim 1390665.05\,\mathrm{mg}$.
- The volume of the pen is $10\,\mathrm{m} \times 10\,\mathrm{m} \times 8\,\mathrm{m} = 800\,\mathrm{m}^3$, so the concentration of organic particles is $1186.037 \sim 1738.331\,\mathrm{ug/L}$. From the element percentages given in the problem, we compute the concentrations of C ($10\%$), N ($0.4\%$), and P ($0.6\%$), as shown in Table 1-3:

Table 1-3 The Concentration of Elements in the Pen
| Element | Concentration (ug/L) |
| --- | --- |
| C (10%) | 118.604 ~ 173.833 |
| N (0.4%) | 4.744 ~ 6.953 |
| P (0.6%) | 7.116 ~ 10.430 |

Comparing with the water quality at Sites A, B, C, and D, we find that the concentration of organics lies between those of A and B, which is suitable for the growth of coral (here the element concentrations are calculated only from the excrement of the milkfish and the echinoderms), so the concentration of microbes meets the multiplication needs of the coral. But the chlorophyll concentration is seriously over the limit, so we must adjust the numbers of some species to bring it back to the standard.

Counter-reasoning from the chlorophyll concentration suitable for coral growth ($0.25\,\mathrm{ug/L}$), together with the regression equation

$$
N = 0.7568C + 1.2785,
$$

we work out the estimated steady-state number of the algae, $N = 1.4677$, and then counter-reason the numbers of the three populations: (14677, 5744, 350).

With this estimated steady state, we assume the initial introducing values (10000, 5500, 350). From the relevant literature, the maximum capacities of the fish pen are $(N_{1}, N_{2}, N_{3}) = (30000, 6000, 400)$; re-simulating forward, we finally obtain the corrected steady-state numbers: (13732, 5432, 320).

After correction, the actual steady-state number of the algae is $N = 1.3732$. Putting this back into the regression equation gives $C = 0.125$, that is, a chlorophyll concentration of $0.125\,\mathrm{ug/L}$, which means the water quality after adjustment completely meets the demand.
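The counter-reasoning above simply inverts the regression equation between algae density and chlorophyll; a minimal sketch in Python (the paper's own computations were done in MATLAB):

```python
# Regression between algae density N (unit 10^4/mL) and chlorophyll C (ug/L):
#   N = 0.7568*C + 1.2785
def algae_from_chlorophyll(c):
    return 0.7568 * c + 1.2785

def chlorophyll_from_algae(n):
    return (n - 1.2785) / 0.7568

# The target chlorophyll 0.25 ug/L gives the estimated steady-state algae density
n_est = algae_from_chlorophyll(0.25)          # 1.4677
# The corrected steady-state density 1.3732 gives the achieved chlorophyll
c_adjusted = chlorophyll_from_algae(1.3732)   # about 0.125 ug/L
```

Evaluating the forward direction at the target chlorophyll reproduces the estimated density $N = 1.4677$, and the corrected density $N = 1.3732$ maps back to roughly $0.125\,\mathrm{ug/L}$, as stated above.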
Moreover, the total numbers of milkfish and echinoderms are smaller than before the correction, so the organics index can certainly meet the growing demands of the coral, as shown in Figure 1-6:

![](images/d93599d921571104b5f1525f86b1ba1e884be7ff3a557ae5964984336490854d.jpg)
Figure 1-6 The Numbers of Species Meeting the Demands after Adjustment

This counter-reasoning process is the feedback mechanism of the model: from a known water quality we counter-reason the estimated steady-state numbers of all the species, simulate forward after estimating their initial introducing values, and obtain the corrected steady-state values. With this mechanism we can find the steady-state number of each species from the water quality alone, which greatly simplifies the solution of the following problems.

# Task 2

# Task 2.a Establishment of Logistic Model

In this task, with all the herbivorous fish, crustaceans, molluscs, and echinoderms excluded, we must find the resulting changes to the species and to the water quality. Analysis makes clear why the growth rate decreases once the milkfish reach a certain amount: factors such as natural resources and environmental conditions restrict the growth of the milkfish, and as the population grows this blocking effect becomes greater and greater. The blocking effect is expressed through its influence on the milkfish growth rate $r$, making $r$ decrease as the number of milkfish $x$ increases.
If $r$ is expressed as a function of $x$, $r(x)$, it should be a decreasing function, so we have:

$$
\frac {d x}{d t} = r (x) x, \quad x (0) = x _ {0} \tag {2-1}
$$

The simplest assumption is that $r(x)$ is a linear function of $x$:

$$
r (x) = r - s x, \quad (r > 0, s > 0) \tag {2-2}
$$

Here $r$ is the intrinsic growth rate, that is, the growth rate when the number of milkfish is very small. To fix the meaning of the coefficient $s$, we introduce the maximum population $x_{m}$ allowed by natural resources and environmental conditions, regarded as the milkfish capacity. When $x = x_{m}$, the population stops increasing, that is, $r(x_{m}) = 0$; substituting into (2-2) gives $s = \frac{r}{x_{m}}$, so:

$$
r (x) = r \left(1 - \frac {x}{x _ {m}}\right) \tag {2-3}
$$

Equation (2-3) has another interpretation: the growth rate $r(x)$ is directly proportional to the unsaturated fraction of the milkfish capacity, $(x_{m} - x)/x_{m}$, with proportionality coefficient the intrinsic growth rate $r$. Substituting (2-3) into (2-1), we get:

$$
\frac {d x}{d t} = r x \left(1 - \frac {x}{x _ {m}}\right), \quad x (0) = x _ {0} \tag {2-4}
$$

The factor $rx$ on the right side of (2-4) expresses the intrinsic growing tendency of the milkfish, while the factor $(1 - \frac{x}{x_m})$ expresses the blocking effect of resources and environment on that growth. Clearly, the bigger $x$ is, the bigger $rx$ is and the smaller $(1 - \frac{x}{x_m})$ is; the growth of the milkfish is the joint result of the two factors.

Equation (2-4) is easily solved by separation of variables:

$$
x (t) = \frac {x _ {m}}{1 + \left(\frac {x _ {m}}{x _ {0}} - 1\right) e ^ {- r t}} \tag {2-5}
$$

From (2-5) we see that $x$ increases quickly at first and then slowly, and as $t \to \infty$, $x \to x_{m}$.
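The closed form (2-5) can be checked directly; a small sketch in Python (the fitted values $r = 0.5$ and $x_m = 190050$ appear in the estimation below, while the initial value $x_0$ here is a hypothetical illustration):

```python
import math

def logistic(t, r, xm, x0):
    """Closed-form solution (2-5) of dx/dt = r*x*(1 - x/xm), x(0) = x0."""
    return xm / (1.0 + (xm / x0 - 1.0) * math.exp(-r * t))

r, xm, x0 = 0.5, 190050.0, 1000.0   # x0 is an illustrative guess
traj = [logistic(t, r, xm, x0) for t in range(0, 61, 10)]
# traj rises monotonically and saturates at xm, matching the behavior
# described after Equation (2-5)
```

The trajectory starts at $x_0$, increases monotonically, and is already within a fraction of an individual of $x_m$ by $t = 60$.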
Now we use linear least squares to estimate the parameters $r$ and $x_{m}$ of this model, rewriting Equation (2-4) as:

$$
\frac {d x / d t}{x} = r - s x, \quad s = \frac {r}{x _ {m}} \tag {2-6}
$$

We consult the relevant data [3] (here the amount of milkfish refers to the harvest over the whole Philippines) and fit these data with MATLAB, obtaining $r = 0.5$ and $x_{m} = 190050.00$. Substituting into Equation (2-5), we get the evolution shown in Figure 2-1. Further, we get the weight and number of the milkfish respectively as 172375346.966 kg and 34475069 ~ 24625050. (Reference: the land area of the Philippines is 300,000 km², the sea area 27.6 mi². The Philippines is surrounded by the sea and has many islands; the depth of the sea between the islands is mostly within 50 m.)

![](images/2a47d28f8fb657e2fa14cb1fd030550817ccfbdf80d9d56b2115953c3d9f5358.jpg)
Figure 2-1 Milkfish Changes

Based on the sea area, the sediment per $\mathrm{m}^2$ is $0.1165849 \sim 0.3325079\,\mathrm{g/m^2}$. Since the sediment layer is usually not very thick, we assume a thickness of $0.1\,\mathrm{m}$, giving $1.165849 \sim 3.325079\,\mathrm{g/m^3}$ of sediment. Then, from the information given in the problem, we get:

Table 2-1 Element Content
| Element | Content (ug/L) |
| --- | --- |
| C (10%) | 116.5849 ~ 332.5079 |
| N (0.4%) | 4.6634 ~ 13.3003 |
| P (0.6%) | 6.9951 ~ 19.9505 |

From Table 2-1 we can see that eutrophication is very serious and the coral cannot grow; the water quality is very poor, almost matching the environment inside the pens.

# Task 2.b Simulating Comparison of the Current Situation

In Task 2.a we discussed farming the milkfish alone, but in reality the pen holds more than the milkfish and the algae. So here we reintroduce the removed species as the middle stratum and, as the problem requires, adjust only the numbers of the middle-stratum species to simulate the water quality in the Bolinao area until it matches the currently observed one.

The concrete procedure, taking the simulation of the water quality at Site D as an example, uses the model of Task 1. That model has the following property: it is much easier to find the steady-state water quality from the initial introducing values of the algae, the milkfish, and the other species than to feed back the numbers of those species from a known water quality.

We therefore adopt a brute-force ("violence") random search:

- Set the initial values of the algae, the other species, and the milkfish as $N_{1} = 100000$, $N_{2} = 10000$, $N_{3} = 1300$; the other parameters are independent of the introducing amounts and are the same as in the Task 1 model.
- According to the introducing ratio between the milkfish and the algae and the pen-capacity requirement obtained in Task 2.a, we introduce 72000 algae and 1300 milkfish, and draw the introducing amount of the other species at random from $8000 \sim 10000$, with the aim of finding theoretical values matching the observed water quality.
- Simulate the Task 1 model, repeating 1,000 times, and finally output the steady-state water qualities consistent with the actually observed values.
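The brute-force random search above can be sketched as follows; `simulate_steady_state` is a hypothetical stand-in for the full Task 1 ODE simulation, used only to make the loop runnable:

```python
import random

def simulate_steady_state(n1, n2, n3):
    # Hypothetical placeholder for the Task 1 model: maps initial
    # introducing amounts to steady-state (algae, other species, milkfish).
    return 0.64 * n1, 1.0 * n2, 0.8 * n3

def violence_random_search(target, tol, trials=1000, seed=0):
    """Fix algae = 72000 and milkfish = 1300, draw the middle-stratum
    introducing amount uniformly from [8000, 10000], and keep every
    candidate whose steady state matches the observed target."""
    rng = random.Random(seed)
    hits = []
    for _ in range(trials):
        n2 = rng.uniform(8000, 10000)
        state = simulate_steady_state(72000, n2, 1300)
        if all(abs(s - t) <= e for s, t, e in zip(state, target, tol)):
            hits.append((72000, n2, 1300))
    return hits

# Observed steady state (estimated) and illustrative acceptance tolerances
hits = violence_random_search(target=(45700, 9325, 890), tol=(500, 500, 200))
```

The real criteria used by the paper are the chlorophyll, POC, and PON tolerances given next; the placeholder and tolerances here are assumptions chosen so the sketch runs end to end.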
Now we give the criteria for judging water-quality consistency (here $x_1$ is the steady-state value of the algae, $x_2$ that of the other species, and $x_3$ that of the milkfish):

```txt
Chlorophyll a: ca=(x1*0.0001-1.2785)/0.7568
Total content of organics: K=x2*0.2438*6.9*[0.2,11.5]+x1*[242,493]
Percentage of different elements in the excrement: POC 10%, PON 0.4%, P 0.6%
POC meets: abs(c(1)-c1(1))<=100 and abs(c(2)-c1(2))<=100
PON meets: abs(n(1)-n1(1))<=10 and abs(n(2)-n1(2))<=10
Chlorophyll a meets: abs(ca-4.5)<=0.15
```

We sort out the results meeting these requirements, that is, the numbers of the three species for which the simulated water quality is similar to the observed one:

Table 2-2 Simulating Results
| | Algae | Other species | Milkfish |
| --- | --- | --- | --- |
| Simulation: initial introducing amount | 70000 | [8008, 8995] | 1100 |
| Simulation: amount in the steady state | 46062 | 8989 | 1040 |
| Estimated observed data: amount in the steady state | 45700 | 9325 | 890 |

To bring the species numbers close to those predicted by the model, we compare the numbers of the existing species with those observed in the Bolinao area. Here we take into account that the added feedstuff can correct the Task 1 model: we add a constant $\lambda$ to express the influence of the feedstuff on the species numbers, and the new model is:

$$
\left\{ \begin{array}{c} \dot {x} _ {1} (t) = r _ {1} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1} \frac {x _ {2}}{N _ {2}}\right) \\ \dot {x} _ {2} (t) = r _ {2} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} \frac {x _ {1}}{N _ {1}} - \sigma_ {3} \frac {x _ {3}}{N _ {3}}\right) \\ \dot {x} _ {3} (t) = r _ {3} x _ {3} \left(- 1 - \frac {x _ {3}}{N _ {3}} + \sigma_ {4} \frac {x _ {2}}{N _ {2}}\right) + \lambda \end{array} \right. \tag {2-7}
$$

Then we introduce the initial values (70000, [8008, 8995], 1100) and find the steady-state numbers of all the species: (46062, 8989, 1051), as shown in Figure 2-2:

![](images/e320340dae0e7d5d19f96fef8ac5d74b8a44ec7778b5ba826774fd59d2dd0086.jpg)
Figure 2-2 Comparison between Observed Values and Simulated Values

![](images/f6fe6d613c5625230526390ad9d084a66046757e4413a4b5946ad44612550128.jpg)

# Task 3

# Task 3.a Develop a commercial polyculture to remediate Bolinao

Starting from the Bolinao coral-reef ecosystem model before farming in Task 1, we introduce filter feeders and correct the model accordingly, setting the numbers of the four species as $x_{1}(t), x_{2}(t), x_{3}(t), x_{4}(t)$; the evolution of the numbers again follows the logistic law.
When the algae species grows independently, the model is:

$$
\dot {x} _ {1} (t) = x _ {1} r _ {1} ^ {\prime} \left(1 - \frac {x _ {1}}{N _ {1}}\right) \tag {3-1}
$$

In Equation (3-1), $r_1'$ is the intrinsic growth rate of the algae species and $N_1$ the maximum number of the plants allowed by the environment and resources. The algae species provides food for the filter feeders and the herbivores, so the model for the algae becomes:

$$
\dot {x} _ {1} (t) = r _ {1} ^ {\prime} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1 2} ^ {\prime} \frac {x _ {2}}{N _ {2}} - \sigma_ {1 3} ^ {\prime} \frac {x _ {3}}{N _ {3}}\right) \tag {3-2}
$$

In Equation (3-2), $N_{2}$ is the maximum capacity of the filter feeders and $N_{3}$ that of the herbivores; $\sigma_{12}'$ means that a unit amount of filter feeders (relative to $N_2$) consumes $\sigma_{12}'$ units of algae (relative to $N_1$), and $\sigma_{13}'$ means that a unit amount of herbivores (relative to $N_3$) consumes $\sigma_{13}'$ units of algae (relative to $N_1$).

Without the algae species the filter feeders will perish; let their death rate be $r_2'$, so that in isolation:

$$
\dot {x} _ {2} (t) = - r _ {2} ^ {\prime} x _ {2} \tag {3-3}
$$

Since the algae species provides food for the filter feeders, the right side of Equation (3-3) gains a growth-promoting term from the algae; the growth of the filter feeders is also limited by internal blocking and by predation from the milkfish, so the model of the filter feeders is:

$$
\dot {x} _ {2} (t) = r _ {2} ^ {\prime} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} ^ {\prime} \frac {x _ {1}}{N _ {1}} - \sigma_ {7} ^ {\prime} \frac {x _ {4}}{N _ {4}}\right) \tag {3-4}
$$

Likewise, we infer the models of the herbivores and the milkfish:

The herbivores: $\dot{x}_3(t) = r_3'x_3\left(-1 - \frac{x_3}{N_3} + \sigma_3'\frac{x_1}{N_1} - \sigma_8'\frac{x_4}{N_4}\right)$ (3-5)

The milkfish: $\dot{x}_4(t) = r_4'x_4\left(-1 - \frac{x_4}{N_4} + \sigma_4'\frac{x_2}{N_2} + \sigma_6'\frac{x_3}{N_3} + \sigma_5'k\right)$ (3-6)

Here $k$ is the amount of feedstuff cast.

Combining Equations (3-2), (3-4), (3-5), and (3-6), we get the model of the polyculture:

$$
\left\{ \begin{array}{c} \dot {x} _ {1} (t) = r _ {1} ^ {\prime} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1 2} ^ {\prime} \frac {x _ {2}}{N _ {2}} - \sigma_ {1 3} ^ {\prime} \frac {x _ {3}}{N _ {3}}\right) \\ \dot {x} _ {2} (t) = r _ {2} ^ {\prime} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} ^ {\prime} \frac {x _ {1}}{N _ {1}} - \sigma_ {7} ^ {\prime} \frac {x _ {4}}{N _ {4}}\right) \\ \dot {x} _ {3} (t) = r _ {3} ^ {\prime} x _ {3} \left(- 1 - \frac {x _ {3}}{N _ {3}} + \sigma_ {3} ^ {\prime} \frac {x _ {1}}{N _ {1}} - \sigma_ {8} ^ {\prime} \frac {x _ {4}}{N _ {4}}\right) \\ \dot {x} _ {4} (t) = r _ {4} ^ {\prime} x _ {4} \left(- 1 - \frac {x _ {4}}{N _ {4}} + \sigma_ {4} ^ {\prime} \frac {x _ {2}}{N _ {2}} + \sigma_ {6} ^ {\prime} \frac {x _ {3}}{N _ {3}} + \sigma_ {5} ^ {\prime} k\right) \end{array} \right. \tag {3-7}
$$

In this system, $x_1, x_2, x_3, x_4$ are the numbers of the algae, the filter feeders, the herbivorous fish, and the milkfish; $r_1', r_2', r_3', r_4'$ their respective growth rates; and $k$ the feedstuff-casting constant. The interaction coefficients are:

$\sigma_{12}^{\prime}$: the algae eaten by a unit amount of filter feeders,

$\sigma_{13}^{\prime}$: the algae eaten by a unit amount of herbivorous fish,

$\sigma_2'$: the filter feeders fed by a unit amount of algae,

$\sigma_3^{\prime}$: the herbivorous fish fed by a unit amount of algae,

$\sigma_4'$: the milkfish fed by a unit amount of filter feeders,

$\sigma_{6}^{\prime}$: the milkfish fed by a unit amount of herbivorous fish,

$\sigma_{7}^{\prime}$: the filter feeders preyed on by a unit amount of milkfish,

$\sigma_{8}^{\prime}$: the herbivorous fish preyed on by a unit amount of milkfish.
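System (3-7) was integrated in MATLAB (Appendix, function `fun3`). The sketch below re-implements the integration in Python with a hand-rolled classical RK4 step, using the parameter values from `fun3`; the initial stocking values here are hypothetical round numbers, so the exact state reached depends on them:

```python
import math

# Parameter values taken from the Appendix function "fun3"
r = (1.0, 0.4, 0.6, 0.6)
s12, s13, s2, s3 = 0.4, 0.6, 1.0, 1.2
s4, s5, s6, s7, s8 = 0.4, 0.8, 1.5, 0.1, 0.05
N = (15000.0, 7106 * 0.6, 7106 * 0.4, 1100.0)
k = 6.58  # feedstuff-casting constant

def f(x):
    """Right-hand side of the polyculture system (3-7)."""
    x1, x2, x3, x4 = x
    return (
        r[0] * x1 * (1 - x1 / N[0] - s12 * x2 / N[1] - s13 * x3 / N[2]),
        r[1] * x2 * (-1 - x2 / N[1] + s2 * x1 / N[0] - s7 * x4 / N[3]),
        r[2] * x3 * (-1 - x3 / N[2] + s3 * x1 / N[0] - s8 * x4 / N[3]),
        r[3] * x4 * (-1 - x4 / N[3] + s4 * x2 / N[1] + s6 * x3 / N[2] + s5 * k),
    )

def rk4(x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k3)))
    return tuple(xi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x = (10000.0, 4000.0, 2500.0, 1000.0)  # hypothetical initial stocking
for _ in range(5000):                  # integrate to t = 50 with dt = 0.01
    x = rk4(x, 0.01)
```

With these assumed initial values the trajectory stays positive and bounded; the steady state (14314, 6092, 6129, 6979) reported next was obtained with the authors' own capacities and stocking choices.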
So Equation (3-7) can be solved with MATLAB to obtain the numbers of the algae, the filter feeders, the herbivorous fish, and the milkfish: (14314, 6092, 6129, 6979). The approach to the steady state is shown in Figure 3-1:

![](images/7d0b53d7996473a798d364cad9088e10d99e3197bf3239efbfe1109b38bb563f.jpg)
Figure 3-1 The Changes to the Numbers of the Algae, the Filter Feeders, the Herbivorous Fish and the Milkfish

# Task 3.b Report on the outputs of the model.

Since all the farming of Equation (3-7) is carried out in the pen, we can readily find:

- The model improves the water quality, since only when the water quality reaches a certain standard can it provide an ideal growing environment for a species, and only in a viable environment is it meaningful to talk about the number of each species.
- We establish a newly-born coral-reef habitat without human help, that is, without feedstuff casting and with the least sediment of leftover nutriment and particles (foodstuff and excrement).
- From Task 3.a, the steady-state numbers of the algae, the filter feeders, the herbivorous fish, and the milkfish are (14314, 6092, 6129, 6979); using the calculation method of Task 1 with these as initial values, the chlorophyll concentration comes out as $0.2022\,\mathrm{ug/L}$. From the element percentages given in the problem, we calculate the content of the different elements, as shown in Table 3-1:

Table 3-1 The Content of Elements in the Fish Pen
| Element | Content (ug/L) |
| --- | --- |
| C (10%) | 34.6604 ~ 71.7465 |
| N (0.4%) | 1.3864 ~ 2.8699 |
| P (0.6%) | 2.0796 ~ 4.3048 |

The water quality can be read off from Table 3-1.

- Let the total income be $K = \sum x(i)t(i)$, $i = 1, 2, 3, 4$, where $x(i)$ is the number of the algae, the molluscs, the herbivorous fish, or the milkfish, and $t(i)$ its market value.
- From market investigation and relevant online data, we obtain the average weight and price of each species and finally compute the income: $K = \sum x(i)t(i) = \$1.1357 \times 10^{5}$ per pen.
- To calculate the cost of improving the water quality, assume that we introduce 1,000 mussels into the pen; we investigate such factors as the weight and market price of the mussels and feed them into the Task 1 model to compute all the indexes.

Table 3-2 The Steady State Number of Each Species Before and After Adjustment
| | The algae | The molluscs (mussels) | The herbivorous fish | Milkfish |
| --- | --- | --- | --- | --- |
| Before adjustment (×10⁴) | 1.4314 | 0.6092 | 0.6129 | 0.6979 |
| After adjustment (×10⁴) | 1.3726 | 0.6187 | 0.6149 | 0.7011 |

Table 3-3 The Content of Each Element Before and After Adjustment
| | Chlorophyll a (ug/L) | C (ug/L) | N (ug/L) | P (ug/L) |
| --- | --- | --- | --- | --- |
| Before adjustment | 0.2022 | 34.6604 ~ 71.7465 | 1.3864 ~ 2.8699 | 2.0796 ~ 4.3048 |
| After adjustment | 0.1245 | 33.2377 ~ 68.8661 | 1.3295 ~ 2.7546 | 1.9943 ~ 4.1320 |

According to Table 3-3, it is easy to see that the water quality has improved: the introduced mussels feed on the algae on the one hand and decompose the organic particles on the other, which the data validate.

The cost of improving the water quality of one fish pen: the 1,000 introduced mussels cost about \$361.2, and the total income after adjustment is $K = \sum x(i)t(i) = \$1.1407 \times 10^{5}$, so the cost accounts for $0.317\%$ of the income.

# Task 4

According to the model in Task 3.a, we can compute the number of each species after the steady state is reached: from Task 3.a, the numbers of the algae, the filter feeders, the herbivorous fish, and the milkfish are (14314, 6092, 6129, 6979). The algae are the most numerous, and the other species have similar numbers. In this steady state we discuss:

- First, from the relationship between market supply and demand and price, the amounts of milkfish and of seaweed harvested cannot be treated as equally important: the price of milkfish is obviously higher than that of seaweed, and although the amount of seaweed is large, it is light, so we cannot pursue maximum weight alone.
- Second, if the harvest is measured by the price of each species harvested, we have to differentiate the values of the species. Since there are various costs in feeding the milkfish, these costs must be taken into consideration when calculating the values: we define the value of the edible biomass as the sum of the values of each species harvested, minus the cost of the milkfish feed.
- Finally, we point out that when people expect the output of edible biomass to reach the maximum income, the cost of the milkfish feedstuff should be subtracted from its total value.
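The harvest-value definition above can be made concrete; a tiny sketch in Python, where the unit prices and feed cost are hypothetical placeholders (the real figures came from the authors' market investigation):

```python
def edible_biomass_value(counts, unit_prices, feed_cost):
    """Value of edible biomass = sum over species of (number harvested x
    unit market value), minus the cost of the milkfish feedstuff."""
    return sum(n * p for n, p in zip(counts, unit_prices)) - feed_cost

# Steady-state numbers from Task 3.a: algae, mussels, herbivorous fish, milkfish
counts = (14314, 6092, 6129, 6979)
prices = (0.1, 2.0, 4.0, 6.0)   # hypothetical unit prices, $ per individual
income = edible_biomass_value(counts, prices, feed_cost=360.0)
```

Because the feed cost enters with a minus sign, any strategy comparison based on this definition automatically penalizes heavy feeding, which is exactly the point of the third bullet above.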
# Task 5

When confirming the commercial polyculture scheme, we consider not only the economic benefits of farming but also a win-win between economy and environment, under the premise of keeping the ecological environment and the water quality in good condition. Hence we establish the following optimization model, pursuing the maximum commercial benefit under the premise of not worsening the water quality. Combined with the previous polyculture system model, we establish the following nonlinear balance-optimization model to maximize the total value of the harvest. It is an optimization model, but a complex nonlinear single-objective one, since nonlinear differential equations are embedded in its constraint conditions. The specific model is as follows.

Objective function: $\max f = ax_1 + bx_2 + cx_3 + dx_4 - \mu$

Here $a, b, c, d$ are the unit market prices of the species and $\mu$ the feedstuff cost.

Constraint conditions: the following three are water-quality constraints, that is, the result must satisfy the water quality required for coral growth:

> the content of chlorophyll below $0.28\,\mathrm{ug/L}$
> the content of POC below $196\,\mathrm{ug/L}$
> the content of PON below $39\,\mathrm{ug/L}$

So the final model is: $\max f = ax_1 + bx_2 + cx_3 + dx_4 - \mu$

$$
\left\{ \begin{array}{l} \frac {0.0001 x _ {1} - 1.2785}{0.7568} \leq 0.28 \\ 1.68222 x _ {2} [0.2, 11.5] + 0.1 x _ {4} [242, 493] \leq 196 \\ 1.68222 x _ {2} [0.2, 11.5] + 0.004 x _ {4} [242, 493] \leq 39 \\ \left\{ \begin{array}{c} \dot {x} _ {1} (t) = r _ {1} x _ {1} \left(1 - \frac {x _ {1}}{N _ {1}} - \sigma_ {1 2} \frac {x _ {2}}{N _ {2}} - \sigma_ {1 3} \frac {x _ {3}}{N _ {3}}\right) \\ \dot {x} _ {2} (t) = r _ {2} x _ {2} \left(- 1 - \frac {x _ {2}}{N _ {2}} + \sigma_ {2} \frac {x _ {1}}{N _ {1}} - \sigma_ {7} \frac {x _ {4}}{N _ {4}}\right) \\ \dot {x} _ {3} (t) = r _ {3} x _ {3} \left(- 1 - \frac {x _ {3}}{N _ {3}} + \sigma_ {3} \frac {x _ {1}}{N _ {1}} - \sigma_ {8} \frac {x _ {4}}{N _ {4}}\right) \\ \dot {x} _ {4} (t) = r _ {4} x _ {4} \left(- 1 - \frac {x _ {4}}{N _ {4}} + \sigma_ {4} \frac {x _ {2}}{N _ {2}} + \sigma_ {6} \frac {x _ {3}}{N _ {3}} + \sigma_ {5} k\right) \end{array} \right. \end{array} \right. \tag {5-1}
$$

This is the nonlinear balance-optimization model. The equality constraints formed by the nonlinear differential equations in the latter part show that the values of the variables must keep the balance between ecosystem stability and environmental stability.

Such a complex optimization model cannot be solved directly with any software, so we first run a cyclic simulation search (in fact still a brute-force search) over the equality constraints to find enough solutions meeting the water-quality conditions, obtaining the intervals of the steady-state species numbers that meet the demands of the water quality, as shown in Table 5-1:

Table 5-1 The Steady State Number of Each Species
| | Algae | Molluscs | Herbivorous fish | Milkfish |
| --- | --- | --- | --- | --- |
| MAX (×10⁴) | 1.3922 | 0.6249 | 0.6233 | 0.7061 |
| MIN (×10⁴) | 1.3286 | 0.6152 | 0.6174 | 0.7018 |

Therefore, we can replace the nonlinear-differential-equation constraints with these intervals for the steady-state numbers of the four species. Equation (5-1) then simplifies to:

$$
\max f = a x _ {1} + b x _ {2} + c x _ {3} + d x _ {4}
$$

s.t.

$$
\left\{ \begin{array}{l} \frac {0.0001 x _ {1} - 1.2785}{0.7568} \leq 0.28 \\ 1.68222 x _ {2} [0.2, 11.5] + 0.1 x _ {4} [242, 493] \leq 196 \\ 1.68222 x _ {2} [0.2, 11.5] + 0.004 x _ {4} [242, 493] \leq 39 \\ 1.3286 \leq x _ {1} \leq 1.3922 \\ 0.6152 \leq x _ {2} \leq 0.6249 \\ 0.6174 \leq x _ {3} \leq 0.6233 \\ 0.7018 \leq x _ {4} \leq 0.7061 \end{array} \right. \tag {5-2}
$$

This equivalent model can be solved with LINGO; the results are as follows:

Table 5-2 The Steady State Number of Each Species Solved by LINGO
| | Algae | Molluscs | Herbivorous fish | Milkfish |
| --- | --- | --- | --- | --- |
| (×10⁴) | 1.3922 | 0.6249 | 0.6233 | 0.7061 |

Corresponding to this solution, the maximum harvest is $\max f = \$115189.9$.

Analysis of the model results: from the optimal solution, the resulting water quality is as shown in Table 5-3:

Table 5-3 Content of Each Element after Optimization
| Chlorophyll a (ug/L) | C (ug/L) | N (ug/L) | P (ug/L) |
| --- | --- | --- | --- |
| 0.1504 | 17.1086 ~ 36.0196 | 0.6843 ~ 1.4408 | 1.0265 ~ 2.1612 |
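Since every price coefficient in the simplified model (5-2) is positive and the water-quality constraints are slack at these values, the binding constraints are the box bounds from Table 5-1, and the optimum sits at the upper bounds. A minimal check in Python (the unit prices standing in for $a, b, c, d$ are hypothetical):

```python
def maximize_linear_over_box(prices, bounds):
    """For max f = sum(prices[i]*x[i]) with lo <= x[i] <= hi, a positive
    price pushes x[i] to its upper bound, a non-positive one to its lower."""
    x = [hi if p > 0 else lo for p, (lo, hi) in zip(prices, bounds)]
    return x, sum(p * xi for p, xi in zip(prices, x))

# Box bounds (x10^4) from Table 5-1; the prices are hypothetical
bounds = [(1.3286, 1.3922), (0.6152, 0.6249), (0.6174, 0.6233), (0.7018, 0.7061)]
prices = (0.5, 2.0, 4.0, 6.0)
x_opt, f_opt = maximize_linear_over_box(prices, bounds)
# x_opt shows the same pattern as the LINGO solution: every species at MAX
```

This explains why Table 5-2 coincides with the MAX row of Table 5-1, whatever the exact prices are.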
Compared to the water quality required for coral growth, the water quality obtained here is clearly satisfactory, and we reap relatively high economic benefits at the same time.

# Task 6

Dear Director,

Less than $1\%$ of the ocean floor is covered by coral, yet $25\%$ of the ocean's biodiversity is supported in these areas. Thus conservationists are concerned when coral disappears, since the biodiversity of the region disappears shortly thereafter.

Consider an area in the Philippines located in a narrow channel between Luzon Island and Santiago Island in Bolinao, Pangasinan, that used to be filled with coral reef and supported a wide range of species. The once plentiful biodiversity of the area has been dramatically reduced since the introduction of commercial milkfish (Chanos chanos) farming in the mid-1990s. It is now mostly muddy bottom, the once-living corals are long since buried, and few wild fish remain, due to overfishing and loss of habitat.

Through modeling and discussion of the single-species milkfish farming mode in Bolinao, we identified from the model results the defects of the existing farming mode and its negative influence on the water quality. Directed at these defects, we then put forward a new farming mode, established a rational milkfish farming model on it, and obtained satisfying results. Finally, taking both the economic benefits and the water quality into consideration, we established a farming mode with an optimization target and drew reasonable conclusions.

From these models and conclusions, we obtained the following findings on the relationship between biodiversity and the water quality needed for coral growth.

# 1. Strategy to Restore the Ecosystem in Bolinao

# 1.1 Existing single farming mode

The milkfish farming programs of the mid-1990s led the locals in Bolinao to adopt a single-species farming mode, raising only milkfish in the fish pens. This led to poor water quality, with the algae multiplying greatly and sludge covering the sea floor and burying the coral, eventually reducing the biodiversity.

The milkfish in the model is a predator that hardly restricts the multiplication of the algae, leading to the accumulation of chlorophyll; the numbers of herbivorous fish, and especially of filter feeders, are small in this area. As a result, the excrement of the milkfish cannot be filtered normally and accumulates, eventually worsening the water quality. This is objectively reflected by the simulated results for Site D.

Table 6-1 Microbial Abundances and Particle Characteristics of Site Water
| Site | Dissolved Organic Carbon (DOC) (uM) | Total Nitrogen (Dissolved, uM) | Chl a (ug/L) | Particulate Organic Carbon (POC) (ug/L) | Total Nitrogen (Particulate, ug/L) |
| --- | --- | --- | --- | --- | --- |
| A | 69.7±1.3 | 7.4±0.4 | 0.25±0.03 | 106±4 | 9±15 |
| B | 80.4±2.9 | 8.0±0.2 | 0.28±0.03 | 196±57 | 39±15 |
| C | 89.6±1.7 | 14.2±0.7 | 0.38±0.03 | 662±68 | 54±17 |
| D | 141±2.9 | 30.5±1.3 | 4.5±0.2 | 832±338 | 86±45 |
| Fish Pens | 162±18.5 | 39.8±2.7 | 10.3±0.2 | 641±60 | 86±18 |

# 1.2 Polyculture system with restoring function

The above analysis of the defects of the single farming mode shows that the key problem lies in the shortage of herbivorous species. So, when establishing the new farming mode, we take into account the importance of the middle stratum of the food web and adopt a polyculture system that introduces them. Specifically, we raise both milkfish and mussels in the available waters. The mussels not only have economic value and provide food for the milkfish; they also suppress the multiplication of the algae and filter-feed on the microbes, so the water quality is improved.

By modeling the polyculture system and solving the model, we obtain the initial introducing numbers, the improved water quality, and the time required for restoration, as shown in Table 6-2 and Table 6-3:

Table 6-2 The Stocking and Output in the Polyculture System
| | Algae | Molluscs (mussels) | Herbivorous fish | Milkfish |
| --- | --- | --- | --- | --- |
| Initial stocking (×10⁴) | 4.6007 | 0.6002 | 0.4002 | 0.1040 |
| Steady-state number (×10⁴) | 1.4314 | 0.6092 | 0.6129 | 0.6979 |
+ +Table 6-3 Content of Elements in the Pen + +
| Element | Content (ug/L) |
| --- | --- |
| C (10%) | 34.6604 ~ 71.7465 |
| N (0.4%) | 1.3864 ~ 2.8699 |
| P (0.6%) | 2.0796 ~ 4.3048 |

Time needed for restoration: 3.2 years.

# 2. Optimal harvesting/feeding strategy

# 2.1 Optimal harvesting

First, from the relationship between market supply and demand and price, the amounts of milkfish and of seaweed harvested cannot be treated as equally important: the price of milkfish is obviously higher than that of seaweed, and although the amount of seaweed is large, it is light, so we cannot pursue maximum weight alone.

Second, if the harvest is measured by the price of each species harvested, we have to differentiate the values of the species. Since there are various costs in feeding the milkfish, these costs must be taken into consideration when calculating the values: we define the value of the edible biomass as the sum of the values of each species harvested, minus the cost of the milkfish feed.

Finally, we point out that when people expect the output of edible biomass to reach the maximum income, the cost of the milkfish feedstuff should be subtracted from its total value.

That is to say, we finally confirm that the optimal harvest is the net income of the milkfish farming.

# 2.2 Optimal harvesting feeding strategy

According to this definition of optimal harvesting, taking the net income of farming as the goal, we establish an optimal farming-strategy model under the premise of satisfying the water-quality constraint conditions. We obtain the greatest income and the numbers of all the species when the benefit is greatest, as shown in Table 6-4:

Table 6-4 The Steady State Number of Each Species Solved by LINGO
| | Algae | Molluscs | Herbivorous fish | Milkfish |
| --- | --- | --- | --- | --- |
| (×10⁴) | 1.3922 | 0.6249 | 0.6233 | 0.7061 |

Maximum benefit: \$115189.9.

To verify that the results of our model are correct, we define: fishing/harvest index = feeding cost / net income.

The result our model gives is: fishing/harvest index $= 0.06\%$.

The actual result is: fishing/harvest index $= 2.8\%$.

From the analysis of the model, at the optimal solution the feeding cost per unit of net income is clearly less than the actual one, so our feeding strategy can produce a better harvest.

# 3. Comments on the polyculture system from the perspective of ecology

According to the analyses in 1.1 and 1.2, if our polyculture system is put into practice, adding the herbivorous species as the middle stratum contributes to the decomposition of solid particles, suppresses the over-multiplication of the algae, improves the water quality, enables the coral to grow normally, and thus restores the current ecosystem and its biodiversity.

However, our model does not take into account the dissolved organic carbon released by the algae, whose accumulation is likely to hinder the improvement of the water quality; in view of this, someone may doubt the restoring ability of our polyculture system. But microbes such as bacteria in the waters can process this organic carbon, and rational measures can be taken to control the microbial content, thus ensuring the improvement of the water quality. So, in terms of ecology, our polyculture system has the potential to improve the water quality and promote the development of the ecosystem.

The above are our findings in the process of studying the relationship between biodiversity and the water quality suitable for coral growth.

# References

[1] Research Group (ed.). Around the World and Regional Fisheries Overview. Countries in the World and Regional Fisheries Overview 2004(01).
[2] SHAN Yu-lin, TANG Jia-de. Numerical solution of a bait-predator model derived from three biological populations based on MATLAB. Exploitation and Application of Software. 2007, Vol.
26, No. 12, 94~96. +[3] By Neila S. Sumagaysay.Milkfish (Chanos chanos) production and water quality in brackishwater ponds at different feeding levels and frequencies.N. S. Sumagaysay, J. Appl. Ichthyol. 14 (1998). 81-85. +[4] Chih-Yu Chen, Hong-Nong Chou. Ichthyotoxicity studies of milkfish Chanos chanos fingerlings exposed to a harmful dinoflagellate Alexandrium minutum. Journal of Experimental Marine Biology and Ecology. 262 (2001) 211-219 +[5] LV H G, ZHANG X H. Original algae and chlorophyll a quantitative relationship, Water Supply and Drainage, 2005. +[6] JingQY, Xie, JX, Ye, J. Mathematical model (Third edition). Higher Education Press. 2003(02):12-14. +[7] Danilo C. Israel. The Milkfish Broodstock-Hatchery Research and Development Program and + +Industry: A Policy Study, Philippine Institute for Development Studies. January 2000. +[8] Garren, Smriga, Azam (2008). Gradients of coastal fish farm effluents and their effect on coral reef microbes. *Environmental Microbiology* 10: 2299–2312. +[9] Yongjian Xu et al. (2008). Improvement of water quality by the Macroalga, Gracilaria, near Aquaculture effluent outlets. Journal of World Aquaculture Society 39: 549. +[10] Nair Yokoya & Eurico Oliveira (1992). Temperature responses of economically important red algae and their potential for mariculture in Brazilian waters. Journal of Applied Phycology 4: 339-345. +[11] Marianne Holmer, Nuria Marba, Jorge Terrados, Carlos M. Duarte, Mike D. Fortes (2002). Impacts of milkfish (Chanos chanos) aquaculture on carbon and nutrient fluxes in the Bolinao area, Philippines. *Marine Pollution Bulletin* 44: 685-696. +[12] Rebecca J. Fox, David R. Bellwood (2008). Direct versus indirect methods of quantifying herbivore grazing impact on a coral reef. *Marine Biology* 154: 325–334. +[13] Cruz-Rivera and Paul, Edwin Cruz-Rivera, Valerie J. Paul (2006). Feeding by coral reef mesograzers: algae or cyanobacteria? Coral Reefs 25: 617-627. +[14] A. J. S. Hawkins, R. F. M. Smith, S. H. 
Tan, Z. B. Yasin (1998). Suspension-feeding behaviour in tropical bivalve molluscs: Perna viridis, Crassostrea belcheri, Crassostrea iradelei, Saccostrea cuculata and Pinctada margarifera. *Marine Ecology Progress Series* 166: 173–185.
[15] German E. Merino, Raul H. Piedrahita, Douglas E. Conklin (2007). Ammonia and urea excretion rates of California halibut (Paralichthys californicus, Ayres) under farm-like conditions. *Aquaculture* 271: 227–243.
[16] B. F. McPherson (1968). Feeding and oxygen uptake of the tropical sea urchin Eucidaris tribuloides. *Biological Bulletin* 135: 308–321.
[17] German E. Merino, Raul H. Piedrahita, Douglas E. Conklin. Ammonia and urea excretion rates of California halibut (Paralichthys californicus, Ayres) under farm-like conditions. *Aquaculture* 271 (2007): 227–243.
[18] Edwin Cruz-Rivera, Valerie J. Paul. Feeding by coral reef mesograzers: algae or cyanobacteria? *Coral Reefs* (2006) 25: 617–627.

# Appendix

# No.1 The function named "fun1"

```matlab
function f = fun1(t, x)
% Three-species predator-prey system: x(1) prey, x(2) middle predator, x(3) top predator
r1 = 1; r2 = 0.5; r3 = 0.6;
lambda1 = 0.1; lambda2 = 0.02; lambda3 = 0.06; mu = 0.1;
f = [x(1)*(r1 - lambda1*x(2));
     x(2)*(-r2 + lambda2*x(1) - mu*x(3));
     x(3)*(-r3 + lambda3*x(2))];
```

The driver below integrates the system and plots the result (run it from a separate script, not inside "fun1.m"):

```matlab
[t, x] = ode45('fun1', [0, 20], [100, 40, 6]);
subplot(1, 2, 1)
plot(t, x(:,1), '-', t, x(:,2), '--', t, x(:,3), ':')
legend('x1(t)', 'x2(t)', 'x3(t)')
grid
subplot(1, 2, 2)
plot3(x(:,1), x(:,2), x(:,3))
grid
```

# No.2 The function named "fun2"

```matlab
function f = fun2(t, x)
sigma1 = 0.6; sigma2 = 5; sigma3 = 0.5; sigma4 = 2;
r1 = 1; r2 = 0.5; r3 = 0.6;
N1 = 100000; N2 = 10000; N3 = 1300;
f = [r1*x(1)*(1 - x(1)/N1 - sigma1*x(2)/N2);
     r2*x(2)*(1 - x(2)/N2 + sigma2*x(1)/N1 - sigma3*x(3)/N3);
     r3*x(3)*(1 - x(3)/N3 + sigma4*x(2)/N2)];
```

# No.3 The function named "fun3"

```matlab
function f = fun3(t, x)
r1 = 1; r2 = 0.4; r3 = 0.6; r4 = 0.6;
omic12 = 0.4; omic13 = 0.6; omic2 = 1; omic3 = 1.2; omic4 = 0.4;
omic6 = 1.5; omic5 = 0.8; omic7 = 0.1; omic8 = 0.05;
n1 = 15000; n2 = 7106*0.6; n3 = 7106*0.4; n4 = 1100; k = 6.58;
f = [r1*x(1)*(1 - x(1)/n1 - omic12*(x(2)/n2) - omic13*(x(3)/n3));
     r2*x(2)*(-1 - x(2)/n2 + omic2*(x(1)/n1) - omic7*(x(4)/n4));
     r3*x(3)*(-1 - x(3)/n3 + omic3*(x(1)/n1) - omic8*(x(4)/n4));
     r4*x(4)*(1 - x(4)/n4 + omic4*(x(2)/n2) + omic6*(x(3)/n3) + omic5*k)];
```

# No.4 The M-file named "funnl1.m"

```matlab
for iter = 1:1000
    x0 = [70000, rand*1000+8000, 1100];
    [t, x] = ode45('fun2', [0, 15], x0);
    ca = (x(53,1)*0.0001 - 1.2785)/0.7568;
    k = x(53,2)*0.2438*6.9*[0.2, 11.5] + x(53,1)*[242, 493];
    c = 0.1*k;
    n = 0.004*k;
    p = 0.006*k;
    c1 = [338, 832];
    n1 = [45, 86];
    if abs(ca - 4.5) <= 0.15
        % additional checks (disabled): (abs(c(1)-c1(1))<=100) && (abs(c(2)-c1(2))<=100) &&
        % (abs(n(1)-n1(1))<=10) && (abs(n(2)-n1(2))<=10)
        x0
        [x(53,1), x(53,2), x(53,3)]
    end
end
```

# No.5 The M-file named "funnel2.m"

```matlab
for iter = 1:1000
    x0 = [rand*20000+45000, rand*2000+4002.4, rand*1500+2501.6, rand*2000+9540];
    [t, x] = ode45('fun3', [0, 15], x0);
    ca = (x(53,1)*0.0001 - 1.2785)/0.7568;
    k = x(53,2)*0.2438*6.9*[0.2, 11.5] + x(53,4)*[242, 493];
    c = 0.1*k;
    n = 0.004*k;
    p = 0.006*k;
    c1 = [338, 832];
    n1 = [45, 86];
    if ca <= 0.25
        % additional checks (disabled): (abs(c(1)-c1(1))<=100) && (abs(c(2)-c1(2))<=100) &&
        % (abs(n(1)-n1(1))<=10) && (abs(n(2)-n1(2))<=10)
        [x(53,1), x(53,2), x(53,3), x(53,4)]
    end
end
```

# No.6 The function named "fun24"

```matlab
function f = fun24(t, x)
sigma1 = 0.6; sigma2 = 5; sigma3 = 0.5; sigma4 = 2;
r1 = 1; r2 = 0.5; r3 = 0.6;
% a1=evalin(fun1,'b1'); a2=evalin(fun1,'b2'); a3=evalin(fun1,'b3');
for b1 = 1000:100:10000
  for b2 = 3500:100:7500
    for b3 = 150:10:350
      N1 = b1+200; N2 = b2+50; N3 = b3+10;   % N1=30000; N2=6000; N3=400;
      f = [r1*x(1)*(1 - x(1)/N1 - sigma1*x(2)/N2);
           r2*x(2)*(-1 - x(2)/N2 + sigma2*x(1)/N1 - sigma3*x(3)/N3);
           r3*x(3)*(-1 - x(3)/N3 + sigma4*x(2)/N2)];
      % assignin(fun23,'a1',b1); assignin(fun23,'a2',b2); assignin(fun23,'a3',b3);
      [t, x] = ode45('fun23', [0, 15], [b1, b2, b3]);
      if (x(53,2)*4 > 1008333) && (x(53,2)*15 < 2054167)
        [b1, b2, b3]
        x(53,1)
        x(53,2)
        x(53,3)
      end
    end
  end
end
```

# No.7 Simulation data
| x1 | x2 | x3 | x4 | x1 | x2 | x3 | x4 |
|---|---|---|---|---|---|---|---|
| 1.349 | 0.0208 | 0.022 | 0.7052 | 1.3668 | 0.0176 | 0.021 | 0.7042 |
| 1.3527 | 0.0206 | 0.0215 | 0.7048 | 1.3593 | 0.0201 | 0.0208 | 0.7042 |
| 1.3709 | 0.0185 | 0.0199 | 0.7036 | 1.3704 | 0.0189 | 0.02 | 0.7036 |
| 1.3448 | 0.0204 | 0.023 | 0.7058 | 1.3639 | 0.0199 | 0.0202 | 0.7038 |
| 1.38 | 0.0156 | 0.0197 | 0.7032 | 1.3754 | 0.0171 | 0.0199 | 0.7034 |
| 1.347 | 0.0208 | 0.0225 | 0.7055 | 1.3607 | 0.0203 | 0.0204 | 0.7041 |
| 1.338 | 0.0241 | 0.0221 | 0.7056 | 1.3621 | 0.0205 | 0.0201 | 0.704 |
| 1.3557 | 0.0212 | 0.0208 | 0.7045 | 1.3576 | 0.0205 | 0.0209 | 0.7045 |
| 1.3333 | 0.0232 | 0.0232 | 0.706 | 1.3717 | 0.0179 | 0.0202 | 0.7037 |
| 1.3718 | 0.0191 | 0.0193 | 0.7031 | 1.3332 | 0.0235 | 0.023 | 0.7059 |
| 1.3339 | 0.0236 | 0.0228 | 0.7058 | 1.3651 | 0.0193 | 0.0203 | 0.7039 |
| 1.3485 | 0.0226 | 0.0213 | 0.705 | 1.3599 | 0.0188 | 0.0214 | 0.7046 |
| 1.3689 | 0.0181 | 0.0204 | 0.7037 | 1.3737 | 0.018 | 0.0197 | 0.7034 |
| 1.3771 | 0.0176 | 0.0191 | 0.7031 | 1.3726 | 0.0193 | 0.0191 | 0.7032 |
| 1.3598 | 0.0217 | 0.0201 | 0.7041 | 1.3672 | 0.0183 | 0.0206 | 0.7041 |
| 1.3487 | 0.0216 | 0.0216 | 0.7048 | 1.3549 | 0.0203 | 0.0215 | 0.7048 |
| 1.3736 | 0.0173 | 0.0198 | 0.7033 | 1.3598 | 0.0209 | 0.0204 | 0.7042 |
| 1.37 | 0.0182 | 0.0203 | 0.7038 | 1.3542 | 0.0198 | 0.0217 | 0.7049 |
| 1.3457 | 0.0225 | 0.0218 | 0.7052 | 1.3565 | 0.0204 | 0.0213 | 0.7047 |
| 1.368 | 0.0192 | 0.02 | 0.7037 | 1.3437 | 0.0212 | 0.0227 | 0.7057 |
| 1.3668 | 0.0185 | 0.0205 | 0.704 | 1.3787 | 0.0177 | 0.0187 | 0.7029 |
| 1.3419 | 0.0231 | 0.022 | 0.7054 | 1.3803 | 0.0173 | 0.0189 | 0.7027 |
| 1.3667 | 0.0182 | 0.0207 | 0.7039 | 1.3541 | 0.0213 | 0.0212 | 0.7047 |
| 1.3592 | 0.0215 | 0.0202 | 0.7041 | 1.3775 | 0.019 | 0.0184 | 0.7027 |
| 1.3634 | 0.018 | 0.0213 | 0.7042 | 1.3556 | 0.0211 | 0.0209 | 0.7045 |
| 1.377 | 0.0185 | 0.0186 | 0.7027 | 1.3544 | 0.0218 | 0.0208 | 0.7045 |
| 1.3764 | 0.0182 | 0.019 | 0.703 | 1.3464 | 0.0222 | 0.0219 | 0.7052 |
| 1.3699 | 0.0186 | 0.02 | 0.7037 | 1.358 | 0.0194 | 0.0216 | 0.7048 |
| 1.3646 | 0.0198 | 0.0198 | 0.7036 | 1.3608 | 0.0217 | 0.02 | 0.704 |
| 1.373 | 0.018 | 0.0198 | 0.7033 | 1.3751 | 0.0181 | 0.0194 | 0.703 |
| 1.3633 | 0.0201 | 0.0201 | 0.7039 | 1.3738 | 0.0182 | 0.0197 | 0.7033 |
| 1.3501 | 0.0232 | 0.0209 | 0.7047 | 1.369 | 0.0184 | 0.0202 | 0.7036 |
| 1.3795 | 0.0173 | 0.019 | 0.7029 | 1.3687 | 0.0187 | 0.0201 | 0.7038 |
| 1.3706 | 0.0174 | 0.0204 | 0.7038 | 1.3671 | 0.0191 | 0.0205 | 0.7039 |
| 1.3816 | 0.0172 | 0.0187 | 0.7027 | 1.3586 | 0.0204 | 0.0206 | 0.7043 |
| 1.3575 | 0.0188 | 0.0219 | 0.7049 | 1.3775 | 0.0166 | 0.0198 | 0.7033 |
| 1.381 | 0.0171 | 0.019 | 0.7029 | 1.3491 | 0.0227 | 0.0215 | 0.705 |
| 1.3513 | 0.0224 | 0.0211 | 0.7048 | 1.368 | 0.0191 | 0.0199 | 0.7035 |
| 1.3738 | 0.0175 | 0.02 | 0.7034 | 1.3561 | 0.0206 | 0.0208 | 0.7043 |
| 1.3505 | 0.0205 | 0.0224 | 0.7053 | 1.3452 | 0.021 | 0.0228 | 0.7057 |
| 1.3483 | 0.0202 | 0.0227 | 0.7055 | 1.3652 | 0.0188 | 0.0207 | 0.7042 |
| 1.3625 | 0.0187 | 0.021 | 0.7041 | 1.3771 | 0.0179 | 0.0192 | 0.703 |
| 1.3601 | 0.0213 | 0.0203 | 0.7042 | 1.3516 | 0.0209 | 0.0218 | 0.7051 |
| 1.3734 | 0.0174 | 0.0201 | 0.7035 | 1.36 | 0.0204 | 0.0207 | 0.7043 |
| 1.3886 | 0.0168 | 0.0176 | 0.702 | 1.3718 | 0.0187 | 0.0197 | 0.7035 |
\ No newline at end of file
diff --git a/MCM/2009/C/5689/5689.md b/MCM/2009/C/5689/5689.md
new file mode 100644
index 0000000000000000000000000000000000000000..66763fb73e08e8d3bc525b285a2e296d6a8bffe6
--- /dev/null
+++ b/MCM/2009/C/5689/5689.md
@@ -0,0 +1,451 @@

For office use only

T1
T2
T3
T4

Team Control Number

# 5689

Problem Chosen

C

For office use only
F1
F2
F3
F4

# 2009 Mathematical Contest in Modeling (MCM) Summary Sheet

As you read this paper, the Earth's human population is rapidly growing. That growth is coupled with demand for better health and nutrition, which can only be met by improved food production practices. Children in the developing world do not receive enough animal protein early in life to mature fully. Demand for animal protein is the root of the problem the people of Bolinao, Philippines have experienced over the last 15 years. Past solutions focused on harvesting large quantities of one type of fish using large cages. Unfortunately, this approach failed to meet the demand for protein, ruined local water quality, and destroyed coral reef.

Yet this problem is solvable. Future technological innovations such as self-powered fish cages, algae-based biodiesel fuel, and radio frequency identification tracking offer great potential for waste reduction and improved results from open-water fish harvesting in years to come. However, the people of Bolinao cannot wait for the future. Change must begin now. We must assist the transition to more economically viable and environmentally friendly fishing techniques. Ultimately, the people of Bolinao are the greatest stakeholders in their future quality of life. Mathematical models show the various stages of this deterioration by demonstrating how the ecosystem in Bolinao functioned before demand for fish grew dramatically in the early 1990s. We demonstrate the dangers to water quality created by the current practice of producing only milkfish in the region.
Finally, we attempt to show how introducing other species back into the commercial fish pens will allow equilibrium to recur, reducing levels of waste in the water and allowing coral reef (a catalyst for growth) to return. Combining the balanced ecosystem with market pricing formulas demonstrates how alternative fish harvesting practices will lead to higher overall profits for the local population, providing them with the protein they need while giving them more money to improve their quality of life in other ways. By implementing the practices that help produce the balance our models suggest is necessary, the people of Bolinao can effectively reduce the levels of particulate waste present in their water within a few years while allowing coral reef to grow again.

Fish is the most efficient source of animal protein for humans because it requires less food to obtain the same amount of protein as chicken, beef, or pork. It is no surprise that malnutrition reduction has focused on harvesting fish. While one key limit of our models was the ability to compile a large enough data set to gain accurate pricing estimates and precise ratios of the different species necessary to recreate the kind of balanced ecosystem that previously existed, our results still demonstrate to the Bolinao people both the environmental and economic value of transitioning from producing only milkfish to a more diverse aquaculture. Finally, we suggest policy changes designed so that the people of Bolinao do not have to choose between getting enough food to eat now and having a healthy environment in the future.
# TABLE OF CONTENTS

Abstract

Table of Contents

Table of Symbols

Table of Formulas

Table of Figures

Problem Restatement

Problem Approach

Problem Assumptions

Task 1: Modeling Water Quality before Mass Farming of Milkfish Disruption

Task 2: Modeling Water Quality of Current Milkfish Monoculture

Task 3: Modeling Water Quality of an Adjusted Polyculture

Task 4: Valuing Polyculture for Human Consumption

Task 5: Return to Balance: Maximizing Bioproduce while Maintaining Water Quality

Task 6: Recommending Changes to Reestablish a Balance to Bolinao

Conclusion

Bibliography

Appendix A: Polyculture Population Model of Acceptable Ratios for Water Quality

Appendix B: Formulas for Levels of Contaminants Affecting Water Quality

Appendix C: Letter to Pacific Marine Fisheries Council

# Table of Symbols

Species subscripts follow this key: S refers to starfish, T to giant tiger prawn, M to milkfish, R to rabbitfish, and L to blue mussels.

$P_{m}$ Current population of milkfish

$P_{my}$ Current population of juvenile (young) milkfish

$P_{mo}$ Current population of breeding (old) milkfish

$P_{x}$ Current population of species X

$P_{x - 1}$ Population of species X from the previous month

$B_{x}$ Birth rate of species X

$S_{x}$ Survivability rate of species X (where survivability $=$ growth rate minus death rate)

$G_{x}$ Growth rate of species X

$D_{x}$ Death rate of species X

$E_{x}$ Rate at which species X is eaten by a predator (applied with a factor of $P_{y}$, the population of the predator)

$C_d$ Level of dissolved carbon

$N_{d}$ Level of dissolved nitrogen

$Chl$ Level of chlorophyll

$C_p$ Level of particulate carbon

$N_{p}$ Level of particulate nitrogen
$W_{x}$ Level of bacteria created by an individual of species X

$M_{x}$ Market price for species X

# Table of Formulas

Current population of juvenile (young) milkfish: $P_{my} = B_mP_{mo - 1} + 0.066P_{my - 1}$

Current population of breeding (old) milkfish: $P_{mo} = S_mP_{mo - 1} + 0.0166P_{my - 1}$

Current population of rabbitfish: $P_{r} = S_{r}P_{r - 1} - E_{r}P_{m - 1}$

Current population of giant tiger prawn: $P_{t} = S_{t}P_{t - 1} - E_{t}P_{m - 1}$

Current population of starfish: $P_{s} = S_{s}P_{s - 1}$

Current population of algae: $P_{a} = S_{a}P_{a - 1} + E_{a}P_{r - 1}$

Current population of mussels: $P_{l} = S_{l}P_{l - 1} + E_{l}P_{s - 1}$

Level of dissolved carbon: $C_{d} = Y_{m}P_{m} + Y_{t}P_{t} - Y_{l}P_{l}$

Level of dissolved nitrogen: $N_{d} = Y_{m}P_{m} + Y_{r}P_{r} - Y_{l}P_{l} - Y_{a}P_{a}$

Level of chlorophyll: $Chl = Y_{a}P_{a}$

Level of particulate nitrogen: $N_{p} = Y_{m}P_{m} + Y_{t}P_{t} - Y_{a}P_{a}$

Level of particulate carbon: $C_p = Y_m P_m + Y_r P_r + Y_s P_s - Y_l P_l$

General formula for population growth of species X: $P_{x} = P_{x - 1} + (P_{x - 1}G_{x}) - (\sum E_{y}P_{y}) - P_{x - 1}D_{x}$

General formula for particulate waste level: $\sum_{\text{all } x} P_xW_x$

Value of the ecosystem for a given ratio of species:

$$D_{p} = M_{m}P_{m} + M_{r}P_{r} + M_{s}P_{s} + M_{l}P_{l} + M_{t}P_{t} + M_{a}P_{a} - \text{input costs}$$

Formula assumed as a conversion factor from particulate and dissolved waste to bacteria level:

# Table of Figures

Figure 1.1 Problem Solving Process Diagram

Figure 1.2 Change in Rates Due to Population

Figure 2.1 Food Web of Polyculture Ecosystem

Figure 2.2 Water Quality When Only Milkfish Are Present

Figure 3.1 Water Quality with Mussels

Figure 4.1 Optimal Polyculture Ratio (without Algae)

*All graphs created from data obtained from internet sources cited in the Bibliography and based on assumptions of the authors.
*All diagrams are the original work of the authors.

# Problem Restatement

The past thirty years have seen a global population explosion accompanied by a growing demand for a higher quality of living. Increased global trade and technological innovation have helped raise billions out of poverty. The goal of alleviating malnutrition in a rising population created a push in the 1970s to expand the use of aquaculture to help meet the growing demand for animal protein in the human diet.

The most efficient source of animal protein for humans is fish, of which $65\%$ of the raw weight is edible, compared to $50\%$ of chicken weight. Both of these animals require less grain input to obtain the same level of animal protein than beef or pork: chicken and fish require $2\mathrm{kg}$ of grain for $1\mathrm{kg}$ of meat, whereas beef requires $7\mathrm{kg}$ and pork $4\mathrm{kg}$. It is no surprise that this effort to reduce malnutrition has focused on greater harvesting of fish. However, human demand for fish has outpaced the natural growth rate of wild fish, which has led to overfishing, reducing the naturally occurring stock of fish in ocean waters. To meet this growing demand, as well as to create price stability, humans have turned to aquaculture as a solution. By creating large pens and cages of fish in contained spaces and forcing spawning with fish feed engineered to yield bigger fish, farmers have been able to reduce the cost of fish and meet the growing demand for animal protein. However, this solution to world hunger has created other ecosystem concerns, as byproducts of the fish farms such as uncontrolled waste have created an imbalance in local ecosystems, leading to the destruction of other marine wildlife, including coral.

In the Bolinao region of the Philippines, the overproduction of milkfish has helped meet local demand for animal protein but has destroyed local ecosystems and coral by reducing water quality.
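The grain-efficiency figures above lend themselves to a quick check. The short Python sketch below (illustrative only; the paper itself works in MATLAB) combines the stated grain-per-kilogram numbers with the edible-weight fractions given for fish and chicken to compare the grain cost per kilogram of *edible* meat; the text gives no edible fractions for beef or pork, so they are left out of that comparison:

```python
# Grain efficiency figures quoted in the text (kg of grain per kg of meat).
grain_per_kg_meat = {"fish": 2, "chicken": 2, "pork": 4, "beef": 7}

# Edible fraction of raw weight, given in the text for fish and chicken only.
edible_fraction = {"fish": 0.65, "chicken": 0.50}

# Grain needed per kg of edible meat, where the edible fraction is known.
for animal, frac in edible_fraction.items():
    grain = grain_per_kg_meat[animal] / frac
    print(f"{animal}: {grain:.2f} kg grain per kg edible meat")
```

By this measure fish needs about 3.1 kg of grain per edible kilogram versus 4 kg for chicken, which supports the text's claim that fish is the most efficient source of animal protein.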
With a growing global population, natural resources become increasingly valuable, food and water most of all. Mass fish farming solves the problem of malnutrition and can increase the economic exports of an area, but it has the detrimental effect of increasing water pollution and, through this disruption, limiting the amount of marine life over the long term.

Task 1 involves modeling the ecosystem before the mass farming of milkfish was introduced, in order to understand the natural interaction among the varying species in the ecosystem and how they sustained growth while maintaining water quality.

Task 2 demonstrates the danger of an aquaculture based entirely on milkfish and algae. Ultimately, milkfish fecal waste and the overgrowth of algae create an imbalance that destroys coral, which prevents the ecosystem from producing as much fish as it otherwise could. Task 2 then asks for a model of the current populations of the ecosystem, based on the known water quality, to understand the conditions from which the ecosystem is starting.

Task 3 examines what populations of various marine species would be required to achieve an acceptable level of water quality and to restart the growth of coral reefs. Coral growth increases biomass production, and improved water quality allows more species to grow. Both conditions ultimately achieve the largest value of sellable marine produce.

Task 4 requires criteria to determine how different parts of the ecosystem ought to be valued.

Task 5 looks at the multiple population solutions from the model in Task 3 that can achieve the acceptable constant level of water quality. It then requires examining which of these solutions yields the highest economic value of the whole ecosystem, based on the criteria determined in Task 4.
Task 6 requires explaining to the Pacific Marine Fisheries Council what changes to the biodiversity of the marine ecosystem of the Bolinao region are required to produce the maximum economic value for the local population and to achieve the desired level of water quality that will allow coral growth, which further increases the produce of the aquaculture.

The domination of aquaculture in the Bolinao region of the Philippines by milkfish has created an imbalance that threatens the long-term sustainability of the local marine life and human population. Long-term sustainability can be achieved by improving water quality and allowing coral reef to grow again. By transitioning from a milkfish monoculture to a balanced polyculture, the biodiversity can meet the increased demand for animal protein and increase overall economic value while maintaining a stable system over both the long and short term.

# Problem Approach

Task 1: To model water quality before milkfish dominated the local ecosystem, we created formulas that model the interaction between the various species present in the ecosystem. This model focused on obtaining a steady-state equilibrium of water quality by first establishing a formula for measuring the change in water quality as the sum of the products of the waste of each individual species. Some species, like the blue mussel, consume the waste of other species; they contribute negative waste and thus help improve water quality. It was then necessary to develop an effective function describing the population of each individual species at any given time, since the population at that time determines the waste produced by that species and thus the water quality.
The general formula for each species' population calculates the overall change to the population by adding the number of new individuals based on the determined growth rate and subtracting both the number of that species eaten by each of the other species and the number that died naturally. This equation allowed us to determine the population of each species, which was a required input for determining the water quality at that time. It is then possible to find a steady state by running the whole model for several iterations until the water quality level of the current iteration remains the same as that of the previous one. Adjusting the number of each species in the system, while keeping the ratios of the species to one another constant, should allow the model to predict what population level of each species existed before the disruption of overfishing, which led to the commercial milkfish monoculture addressed in Task 2.

Task 2: We then set the populations of all species except milkfish and algae to zero and ran the model to determine the effects on water quality. Based on the known current water quality, we attempted to determine what the current populations of a variety of species might be.

Task 3: By setting the water quality to an acceptable desired constant, we ran simulations adjusting the populations of other species in different combinations that would reestablish an equilibrium polyculture. This polyculture would consume the waste products of the milkfish and keep the growth of algae under control. By examining the differences between the results of model 2b and model 3, we would expect to be able to determine different combinations of how many individuals of various species need to be introduced to the sites in the Bolinao region in order to reestablish acceptable water quality and create coral growth.
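The iteration scheme described in Tasks 1-3 can be sketched in a few lines of code. In the Python sketch below (the paper's own code is MATLAB), all rates and starting populations are placeholder values chosen purely for illustration, and predation is simplified to at most one predator per species; only the structure mirrors the model: the monthly update $P_x = P_{x-1} + P_{x-1}G_x - \sum E_yP_y - P_{x-1}D_x$, followed by a steady-state check on the summed waste level.

```python
# Sketch of the iteration described above: advance each population one month
# with the paper's general formula, recompute the waste-based water-quality
# index, and stop when it no longer changes. All rates are placeholders.
growth = {"milkfish": 0.04, "rabbitfish": 0.03, "mussel": 0.05}
death = {"milkfish": 0.02, "rabbitfish": 0.02, "mussel": 0.03}
# eaten[x] = (predator of x, per-predator consumption rate), or None
eaten = {"milkfish": None, "rabbitfish": ("milkfish", 1e-6), "mussel": None}
waste = {"milkfish": 0.8, "rabbitfish": 0.3, "mussel": -0.5}  # mussels clean

pop = {"milkfish": 50_000.0, "rabbitfish": 8_000.0, "mussel": 20_000.0}

def water_quality(populations):
    # Overall waste level: sum over species of P_x * W_x
    return sum(populations[x] * waste[x] for x in populations)

prev = water_quality(pop)
steady = False
for month in range(1200):
    new_pop = {}
    for x, p in pop.items():
        predation = 0.0
        if eaten[x] is not None:
            predator, rate = eaten[x]
            predation = rate * pop[predator] * p
        # P_x = P_{x-1} + P_{x-1}*G_x - sum(E_y*P_y) - P_{x-1}*D_x, floored at 0
        new_pop[x] = max(0.0, p + p * growth[x] - predation - p * death[x])
    pop = new_pop
    wq = water_quality(pop)
    if abs(wq - prev) < 1e-6:  # steady state: waste level stopped changing
        steady = True
        break
    prev = wq
```

With constant rates, the unpreyed-upon populations grow without bound and the steady-state test never triggers, which is exactly the convergence failure the paper discusses under Task 1 before a carrying capacity is imposed.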
Task 4: By examining existing data on the dollar value of various marine species, we can determine values for each type of species in the system. Certain fish are worth more than others, and algae and fish waste can be used for alternative products.

Task 5: Based on the values of these individual products from Task 4, it is possible to determine which combinations from model 3 are likely to create the most economic value for owners.

Task 6: We address some of the policy changes with which the Pacific Marine Fisheries Council can assist the government of the Philippines, in order to help ensure the long-term viability of what should become a self-sustaining ecosystem. These policies center on harvesting all species at rates that keep the milkfish population under control and thus maintain the polyculture. The policy changes comprise a diverse set of ideas centered on local policing and enforcement, education, and the selection of aquaculture farming cage sites. We also mention some technological innovations that could reduce current problems affecting water quality and coral reef growth.

# Assumptions

Through the course of this project our group made several assumptions in order to simplify the modeling process. The accuracy of any model decreases as more assumptions are made. However, without certain assumptions it would be impossible to provide useful conclusions, because of the large number of unknown values for known variables. The assumptions we made are listed below and addressed in more detail under the tasks for which they were made. Many of these assumptions were driven by a lack of published or consistent data.

These assumptions include the following. The growth rates of species are constant. The variability in the number of eggs laid by a species is normally distributed. Humans are the only predator of the milkfish population. The channel is not a closed system.
Excess population can immigrate to other reef locations. The algae are a mix of cyanobacteria and red varieties, in order to provide more realistic results. Milkfish stop being omnivores after they mature and then eat only other animals. It takes 5 years for milkfish to become sexually mature (FishBase). An adult milkfish is capable of eating an adult rabbitfish. The fish pens currently hold approximately 58,500,000 fish. Milkfish weigh about 500 to 600 grams (Milkfish).

None of the other 5 species in the ecosystem model eat the sea stars. Rabbitfish waste has the same composition. The prices listed in Task 4 are estimates taken from solitary sources. Giant tiger prawn spawn nightly at a rate of 7.6 to $9\%$, but only $50\%$ of these spawn hatch (Bray). The giant tiger prawn has a mortality rate of 10 to $40\%$ and an average weight of 106 grams (Food and Agriculture Organization of the United Nations). Rabbitfish double in population every 1.4 to 4.4 years. Prawns excrete 0.028 milligrams of ammonia per gram of body weight per hour (Burbord). Mollusks urinate up to $45\%$ of their body weight per day. Each year $55\%$ of blue mussels die. Female mussels release 1 million eggs semi-annually, of which $30\%$ hatch. Japanese starfish release 10 million to 25 million eggs per year. Starfish have an average lifespan of 3 years. A starfish eats 36 grams of mussel each month.

# Task 1: Modeling Water Quality before the Mass Farming of Milkfish Disruption

Every civilization wants to be able to provide for the basic needs of its children and elderly. The people of the Philippines are no different and have fished the local waters for centuries to put food on their tables. For a long time the amount of fish in the area was more than adequate to meet the needs of the population. However, as people sought better nutrition by eating more fish protein, they fished more and more until the local population of wild fish was no longer large enough to sustain itself.
To resolve this problem, people developed techniques such as dynamite explosions and the distribution of sodium cyanide to catch the ever more elusive remaining fish. These techniques killed off not only milkfish but also other species that kept the ecosystem in balance. The resulting uncontrollable growth of algae, in combination with the destruction caused by the explosives, destroyed parts of the coral reef by depriving it of the nutrients and sunlight it needed to grow. The people built the milkfish population back up by introducing milkfish in large numbers and keeping them in large cages where they could be fed until they were large enough to harvest.

Using better-quality fish feed allowed the milkfish population to grow more quickly, but it also increased the levels of pollution in the local waters as a result of the fish waste. Previously, other species such as the blue mussel (a mollusk that feeds on the waste of milkfish) kept the water pollution level in check. Herbivorous fish like the rabbitfish and echinoderms like the starfish helped contain algae growth; the starfish also ate the blue mussels. As seen in Figure 1.2, the food web of this ecosystem allowed different species to coexist in certain ratios to one another, which kept the water clean and allowed the coral reef to grow. Additional coral reef further increased the overall population growth. By allowing special feed to replace the natural diet of the milkfish, the people unknowingly depleted the quality of the local water supply in the Bolinao area while simultaneously destroying the coral reef. This coral reef had served as a catalyst for the growth of the overall system by providing shelter for certain species from their predators. By modeling the previous stability, it is possible to show what levels of the different populations were previously required to maintain a balanced ecosystem.

![](images/3f16420ebf3dcf9aceaeb1a3310ebfb87b8731a0178937d7105c28ef0332f91a.jpg)
These ratios can then serve as a helpful starting point for reestablishing a new balance within commercial milkfish farms.

To produce this model we researched the relationships between the various species and determined appropriate population growth rates. The general formula $P_{x} = P_{x - 1} + (P_{x - 1}G_{x}) - (\sum E_{y}P_{y}) - P_{x - 1}D_{x}$ calculates the current population of species X given the population of X from the previous month, adding growth at rate $G_{x}$ while subtracting the decline in population due to the death rate $D_{x}$ and the amount of X eaten by each of the other species in the system, $(\sum E_{y}P_{y})$.

Knowing the population of each species allowed us to multiply the population $P_{x}$ by the rate of bacterial waste production $W_{x}$. Taking the sum of these products over all species in the system, it is then possible to determine the overall bacteria level in the water. The same process can be carried out for each type of waste product ($C_{d}, N_{d}, Chl, C_{p}, N_{p}$) and compared to the acceptable levels of these contaminants present at site A in the problem statement to determine whether the water quality is acceptable.

Once the overall system of equations (our model) was run for enough iterations, it should have been possible to demonstrate that at a certain point the level of each contaminant in the water reaches a constant level. By maintaining the ratios of the species present at this level of water quality and reducing the overall number of each species, a similar equilibrium should have been obtainable that achieved the allowable level of water quality.

However, our model was never able to achieve this steady-state level of water quality, for several reasons. The main reason was that our model considered the growth rate $G_{x}$,
The main reason was that our model considered the growth rate $G_{x}$ , + +Death rate $D_{x}$ , and survivability rate $S_{x}$ (Growth rate minus the death rate), to remain constant which does not occur in nature due to the conservation of mass. An example of the more natural trend of this relationship is depicted in Figure 1.2. As the fish population increases the rate at which they are eaten increases. As the fish population increases the rate at which they survive decreases. + +In any closed system the overall mass of the system must stay the same. Thus, the addition of any new member of a population of one species to the overall system precludes the growing of something else either immediately or in the future. An example is that when the fish population is larger the death rate should be greater at some point because fish are more easily caught by their predators. Our model did not include any + +![](images/cccb47294e998c88989d6a03c32654a7d02269a98a870fa2427319f16e856a17.jpg) + +upper limit on the population of any of the species within the ecosystem and so over time the population of all organisms continued to grow at similar rates and water quality never reached an equilibrium value. In reality there has to be a natural limit to the system if for no other reason than eventually if the fish waste grows uncontrollably it occupies all of the space killing any new fish that would grow by choking of nutrient access to them. + +One possibility would have been to introduce an assumed limit to the ecosystem by confining the space to the Bolinao region. We were able to find source which estimated the water area of Bolinao to cover 1170 hectares. Based on the limit in the problem that the farmers currently use about 50,000 milkfish to a pen and operate about 10 pens per hectare, it is possible to assume a natural limit on the milkfish population of 585,000,000 milkfish (500,000 milkfish per hectare multiplied by 1170 hectares). 
This assumed limit is reasonable because the farmers want to grow as many fish as possible without sacrificing any. Given that they produce only one type of fish and demand exceeds what they can supply, they should grow as many milkfish as the ecosystem will support. Assuming this upper limit, the growth rate can then be based on the difference between the current milkfish population $P_{m}$ and the limit of 585,000,000, producing a growth term of $G_{x}(585{,}000{,}000 - P_{m})$.

Despite the difficulty in achieving a steady-state equilibrium of water quality, we were still able to produce a model that demonstrates the general trend that should have been present in the ecosystem before the introduction of mass milkfish farming (see Figure 1.3).

# Task 2: Modeling Water Quality of the Current Milkfish Monoculture

Poor water quality and the destruction of coral reefs do not seem like pressing problems to people who are trying to meet their basic needs and keep their children healthy. It is difficult to show people how their actions now ultimately lead to greater problems for them and their children in the future. The current thinking is that growing a single type of fish, milkfish, and feeding them specially formulated fishmeal creates the larger quantities of fish necessary to meet growing demand, without requiring the sustenance of a variety of different creatures. Why not simply apply modern agricultural methods to aquaculture? Why shouldn't Filipinos continue to increase the yield of milkfish with specially designed fishmeal, the way a farmer in America's Midwest increases the yield of his soybean or corn harvest with specially formulated seed and fertilizer? Initial observations may lead to the conclusion that such an approach is both viable and desirable.

![](images/590e6adee782f1dd617d53e6f2feeaad60b0fbeff93c48c3c673925fea395bf5.jpg)
After all, why not simply remove the excess fish waste and sell it as fertilizer to local farmers? This might be possible. However, just as land farmers eventually realized that growing the same crops year after year led to decreased yields because of nutrient depletion in the soil, fish farmers face the threat of decreased overall yield because growing only milkfish in the same area degrades the water quality by causing algae and waste to grow uncontrollably. The excess algae reduce coral growth in the same way that a lack of crop rotation depletes the soil of nitrogen. Both practices appear to offer better results in the short term, but in both cases repetition destroys the long-term viability of the system.

Still, for people to change their practices, it is important to demonstrate the limiting effects of the current system. For our model this first requires showing that the current system of farming only milkfish actually causes water quality and the amount of harvestable fish to decline. To model the current system we took our model from Task 1 and set the populations of everything but milkfish and algae to zero. As demonstrated in Figure 2.1, it is possible to show the decline in water quality over time and the rise of the algae population, which in excess chokes off the viability of the milkfish because the algae demand more oxygen and leave less available to the fish.

![](images/c23a5b62c54f2768c76ef0be9996e7a79ddb2ed104893606d47df2ad717e8f66.jpg)

However, it is unrealistic to assume that the current system consists only of milkfish and algae.
We know that the current system has a water quality of 10 million bacteria per milliliter and 15 micrograms of chlorophyll per liter, both of which are much greater than the 0.5 million to 1 million bacteria per milliliter and 0.25 micrograms of chlorophyll per liter suggested as acceptable for adequate coral growth. Coral growth acts like a skyscraper: through vertical partitioning it allows more fish to grow in a given space. We therefore gradually adjusted the populations of the various species in our model to reproduce the current level of water pollution in Bolinao. This process makes it possible to examine how these populations might be altered to achieve the desired water quality levels, which would allow the coral and the overall fish population to grow over the long term; more growth than is possible when the environment is almost exclusively dominated by the farming of milkfish alone.

Again, our model was unable to produce a steady-state equilibrium of water quality when the ecosystem consists only of milkfish and algae, because the algae do not entirely dispose of the milkfish waste. Without another species such as blue mussels to reduce that waste, the milkfish population grows uncontrollably over time, even if the roughly twenty percent of young fish that mature each year to breeding age are removed by humans. And if humans harvest milkfish younger than five years, the age at which they can reproduce, the milkfish population will drop below its sustainable level until the remaining milkfish reach reproductive age.
This human harvesting can reduce the level of waste in the water somewhat, but it is insufficient to achieve a steady state because nothing reduces the waste except the algae, which grow uncontrollably as they consume the milkfish waste, raising the chlorophyll level above acceptable limits and choking off the sunlight and nutrients needed for the coral reef to grow (Environmental Protection Agency). Our model of bacterial waste levels when the ecosystem consists only of milkfish and algae is depicted in Figure 2.2. While harvesting can reduce waste levels, it only slows the rate at which the bacterial waste grows (a more gradual slope); it does not cause the waste to decline.

# Task 3: Modeling Water Quality of an Adjusted Polyculture

Before massive quantities of milkfish were farmed in pens, there was a balanced ecosystem of a variety of species coexisting in ratios that allowed the waste of certain animals to serve as food for others. However, the demand for milkfish disrupted this balance, which required still more milkfish and led to the creation of the massive milkfish farms.

The ecosystem is not as ideal as the one we modeled in Task 1, but the situation is not as bleak as the milkfish monoculture we modeled in Task 2. Other species still exist in the current system, although in limited quantities: mollusks like blue mussels, echinoderms like starfish, herbivores like rabbitfish, and crustaceans like the giant tiger prawn. However, as demonstrated in the second model for Task 2, the quantities of these other species are insufficient to reach the established water quality levels; levels that would maximize the value of biomass available for harvest by restoring the natural catalyst of coral growth.
The coral serves as protective shelter for all of these species, which they require to reach their optimal growth. Coral grows very slowly, on average only 80 millimeters per year (Roth). However, by determining the quantities of these species required to reach the desired water quality of 0.5 million to 1 million bacteria per milliliter and 0.25 micrograms of chlorophyll per liter, it is possible to increase the overall yield of fish available for harvest while recreating a sustainable polyculture. Through modeling this process we determined how to recreate the stable ecosystem that was naturally present before commercial milkfish farming. This process will also reduce the overall cost of feed for the milkfish, since they can eat certain quantities of the other species. By fixing acceptable water quality as the target output of this model, we were able to determine which combinations of species populations could be self-sustaining. Still, this practice requires guidelines for harvesting only a portion of any species, so as to avoid recreating the overfishing problem that originally gave rise to the commercial fish farming that caused the water quality and coral reef problems in the first place.

Reestablishing the balance that existed under the conditions of our Task 1 model is a difficult task. It requires introducing other species into the commercial fish pens to help keep the populations under control. However, our model was able to effectively demonstrate the pattern of what would happen to waste levels over time if such a combination were attempted.
This process was possible by taking data from internet sources to determine sustainability rates $S_{x}$ for each species and then adjusting the populations of each species relative to one another to achieve the desired water quality levels.

![](images/558bb5d1ada30c41999564850161a6589183ebf0ff9e2ee350ebf282b9c00b57.jpg)
Figure 3.1 Water Quality with Mussels

The results of this model rely heavily on increasing the blue mussel population in order to control the bacterial waste levels from the growing milkfish population. This downward trend in the level of bacteria present in the water is depicted in Figure 3.1: in only a few years the blue mussel population almost entirely eliminates the bacterial waste. In a similar manner, the rabbitfish reduce the chlorophyll level through their consumption of the algae, a process that provides more sunlight and nutrients for coral to grow again (FishBase). The milkfish keep the rabbitfish population under control, and the tiger prawn population provides the milkfish an alternative feed source so that the milkfish do not wipe out the rabbitfish. Similarly, the starfish consume the mussels to keep them from growing uncontrollably. The reproductive rate used as the growth-rate input for the starfish can vary widely; if an overpopulation of starfish occurs in the early months, before the blue mussels have grown sufficiently, the bacterial waste levels can grow exponentially because the blue mussel population cannot yet sustain itself. The process therefore requires a reduced presence of starfish early in the biodiversity effort and a greater number of blue mussels. After about six to eight months the mussels have grown enough that more starfish can gradually be introduced.
If the starfish reproduce too quickly early on, it may be necessary to add more blue mussels to the system periodically, because our model contains no effective control on the starfish population.

Our model ultimately required the introduction of certain quantities of starfish, rabbitfish, blue mussels, and giant tiger prawn in order to reestablish a sustainable polyculture that supports the milkfish while improving water quality and coral growth. The overall cost of introducing these changes was... It also requires the establishment of certain harvesting guidelines so that the system can maintain itself naturally. The goal is to keep those harvesting guidelines above the demand for milkfish so that overfishing becomes economically undesirable, since it would only create excess supply above the natural level of demand. To make these guidelines more enforceable, a combination of community-based policing standards and law enforcement officials paid for by fishing license fees would be desirable. For the specific equations used in our model to determine water quality levels at given times, please refer to Appendix B. These guidelines are addressed in more detail in our report to the Pacific Marine Fisheries Council as part of Task 6.

# Task 4: Valuing Polyculture for Human Consumption

Showing that the current monoculture, based exclusively on the harvest of milkfish, is undesirable in the long term is an insufficient argument for changing a local population's practices. The argument must also demonstrate how making those changes now benefits the population economically.

As part of Task 3 we modeled the input quantities of other species required to establish a self-sustaining polyculture that yields more harvestable biomass over the long term.
However, those inputs come at an up-front monetary cost, in addition to the longer-term cost of the restrained harvesting guidelines needed to keep the system at equilibrium. Harvesting different quantities of different species at different times could be achieved loosely through enforcement of simple harvesting guidelines.

However, demonstrating the costs of changing the system does not excite people if they cannot see the economic benefit these changes will bring them in both the short and long term. To demonstrate the benefits of these changed practices, it is important to clarify the time required for water quality to improve and coral to grow again. It is also necessary to demonstrate how this growth will lead to more money for the population than continuing to farm only milkfish. This requires setting a value on coral growth as well as on the harvestable fish in the system.

As we know from the initial cause of the problem, animal protein from fish is desirable not only to the local population but also to a global population of consumers who increasingly demand better health and nutrition for themselves and their children. Harvesting different combinations of the various species discussed in Task 3 could still produce the desired water quality through reduced fish waste, and the desired coral growth through decreased chlorophyll. However, certain species fetch a different value on the market than others.

We therefore sought to establish the value of the different species in the system, and to explain why these species as a whole could produce greater overall income for the population than growing milkfish alone.
On the simplest level, besides being unsustainable over the long term because of the depletion of the area's natural resources, growing only milkfish is undesirable because an excessive supply of milkfish makes each additional fish worth less. By harvesting a polyculture of species with economic value to both the local and global population, the people of the Bolinao region have the potential to make more money and raise their standard of living over both the short and the long term. Through diversification of risk, this policy also reduces the likelihood of a farmer losing his entire stock to disease.

In our research we established that coral reef growth creates a value of $52,000 per square kilometer and that each square kilometer of coral reef can produce 20 tons of fish biomass overall (Alcala). Looking at individual species, we estimated market values for each type of species in the polyculture. Many are worth more than milkfish; some are worth less. Giant tiger prawn was estimated at $6,400 per ton (Aquaculture). Blue mussel yielded much less, at $1,000 per ton. Starfish, considered a delicacy in some areas of Southeast Asia, was estimated at $2,200 per ton. Rabbitfish was estimated at about $4,600 per ton, although it is difficult to believe that an herbivorous fish would be more valuable than the $1,280 per ton offered for milkfish. Unfortunately, prices for most of these products were very difficult to obtain and varied greatly.

![](images/ad22f7b2ad5f707cfd24fb72b1945d7588eb9baed1d6bb9a72f793ba7af51312.jpg)
Figure 4.1 Optimal Polyculture Ratio (without algae)

Our model from Task 3 was only able to yield general combinations of the ratios required in a biologically diverse polyculture ecosystem. A pie chart of a combination that worked well to achieve acceptable water quality is depicted in Figure 4.1.
It is therefore difficult to produce the exact optimal market value of the new system and thus conclusively show the desirability of transitioning from the current system. However, the high price of giant tiger prawn relative to milkfish makes it an attractive alternative to grow in greater quantity. Growing additional blue mussel, while it may not be worth as much as milkfish alone, is desirable because the reduction in waste levels it creates allows more milkfish to be grown in the same area. Algae can be sold in smaller quantities to produce biodiesel, currently priced at $18 to $30 per gallon (Morton). By assisting the research in that area now, farmers would admittedly be hastening a future decline in the price obtainable for algae, but they would also be creating greater future profits by increasing future marketplace demand for the product.

Hopefully, global production of a wider variety of seafood would create pressure for a more transparent and standardized market for seafood commodities, similar to the markets that already exist for cattle and grain, allowing better research into the desirability of making particular adjustments to the mix of species.

# Task 5: Return to Balance: Maximizing Bioproduce While Maintaining Water Quality

One of the great difficulties in persuading the population of Bolinao and the commercial fish farmers to change their milkfish-based monoculture is tangibly showing that a polyculture of complementary species would not only improve water quality and coral growth but would also leave them with more money in their pockets. By setting values for the different species in the proposed polyculture, it is possible to show why one type of fish is more desirable to raise than another.
However, while different combinations of species populations in a polyculture may produce the same water quality, they are not all equal in economic value, because certain species fetch a better price at market.

Part of the difficulty in maximizing the overall value obtainable by the fish farmer lies in providing accurate values for each species and for how those values might change over time. In Task 4 we researched and established price estimates for each species in the polyculture. Taking different population combinations from the Task 3 model and multiplying the harvestable population of each species by its Task 4 price produces an estimate of the revenue the farmer could receive from that polyculture ratio. Subtracting the input costs of establishing that polyculture then makes it possible to compare the various combinations that meet the desired water quality levels and choose the option that maximizes the farmer's profit.

Provided that this profit exceeds what the farmer currently receives from raising the milkfish monoculture, it should be easy to convince him or her not only of the environmental benefit of switching to a sustainable polyculture but also of the economic benefit of the transition. Unfortunately, although we were able to develop such a model, the accuracy of its results is questionable because it is very difficult to determine the actual market prices a fish farmer is likely to receive for the various species in the desired polyculture.

Hopefully, one positive side effect of this increased production of a variety of seafood species would be the creation of a more standardized seafood commodities market for the region.
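The revenue-minus-cost comparison described above can be sketched as follows. The per-ton prices are the Task 4 estimates; the harvest tonnages, input costs, and the assumption that both mixes meet the water quality target are hypothetical illustrations.

```python
# Compare candidate polyculture mixes that meet the water-quality target:
# profit = sum(harvest_tons * price_per_ton) - input_cost.
# Prices per ton come from the paper's Task 4 estimates; the harvest
# tonnages and input costs below are hypothetical.

PRICE = {"milkfish": 1280, "prawn": 6400, "mussel": 1000,
         "rabbitfish": 4600, "starfish": 2200}

def profit(harvest_tons, input_cost):
    revenue = sum(PRICE[s] * tons for s, tons in harvest_tons.items())
    return revenue - input_cost

# Two hypothetical mixes assumed to yield the same water quality.
mix_a = {"milkfish": 100, "prawn": 20, "mussel": 40, "rabbitfish": 10, "starfish": 5}
mix_b = {"milkfish": 120, "prawn": 5, "mussel": 30, "rabbitfish": 15, "starfish": 5}

best = max([("mix_a", profit(mix_a, 60_000)), ("mix_b", profit(mix_b, 40_000))],
           key=lambda kv: kv[1])
print(best)  # → ('mix_a', 293000)
```

Under these made-up numbers the prawn-heavy mix wins despite its higher input cost, illustrating why equal-water-quality combinations are not equal in profit.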
The price transparency associated with such a standardized market would allow farmers to better plan and optimize the species ratios of their polycultures, while giving consumers a clearer understanding of their costs. Ultimately such transparency should lead to better prices and reduce the waste of unsold or unused fish.

With greater price transparency and consistency, it would then be possible to model more accurately which harvesting strategies and levels of milkfish feeding produce the highest water quality per unit of harvest value in the short term, so that the overall harvest yield is greater in the long term because of the improved water quality and coral reef growth associated with that polyculture combination.

Recognizing that a major inadequacy of our results stems from the inability to obtain accurate pricing data, our model indicates that the maximum harvest value for the given water quality requirement occurs when the farm produces a combination of (insert numbers of different species), which is better than a combination of (insert other numbers of different species) that yields a lower overall profit even though it results in the same level of water quality.

Ultimately, with greater understanding of price changes and pollution levels at different sites, our Task 3 models can be used to determine the optimal harvest strategy per unit of water quality. Because water quality is a product of both the bacteria level and the chlorophyll level, certain sites may produce more value when the combination of species is tailored to reduce one of those two contaminants more than the other. We recognize that the applicability of our model is limited by the information available on the value of different species and by the variability in pollution across sites. No model is perfect.
However, our model achieves the basic purpose of giving fish farmers in the Bolinao region of the Philippines an estimate of the profit increase they are likely to see by transitioning from a monoculture of milkfish to a polyculture. Such a transition provides more overall value while reducing the levels of fish waste and chlorophyll in the region. This reduction improves water quality and supports the restoration of coral reef growth, which in turn acts as a catalyst for further increases in the yield of a variety of species (Precht).

# Task 6: Recommending Changes to Reestablish a Balance in Bolinao

While our model was never able to achieve a steady-state equilibrium of water quality, there are still several policy recommendations that will help the people of Bolinao achieve the better water quality, coral reef growth, and economic opportunity they desire. All of our policy recommendations follow from an understanding of how the various inputs in the biologically diverse polyculture ecosystem we developed interact. Principles such as adding blue mussels to reduce waste levels are helpful for improving initial conditions. Even if the Bolinao people ultimately reject a transition to a polyculture, at the very least they should attempt to improve the water quality of the milkfish pens manually, using commercial scoops to remove fish waste, which can be recycled and sold to local land farmers. Simple principles such as community-based education and policing efforts can improve local quality of life for all Filipinos regardless of the approach they choose. We have highlighted many of these pragmatic practices in a letter addressed to the Pacific Marine Fisheries Council, which we believe is the best avenue for suggesting these ideas to the people of the region. For a copy of this letter see Appendix C.

# Conclusion

At first glance, Bolinao appears to have a problem of coral reef destruction and water quality deterioration caused by the overproduction of a single type of fish. Examined more closely, however, the real issues are much more personal. Filipinos are not farming large amounts of milkfish because they seek to destroy their living environment. Their monoculture milkfishing practices stem from a growing need for animal protein, of which fish is the most economical and accessible source for the people of their country to produce. Fish offers the highest yield of food for raw weight, at $65\%$, while requiring the lowest feed input per kilogram of animal protein compared to beef or pork. What Bolinao is really struggling with is the very human problem of meeting a growing need for better nutrition and quality of life for its children.

To break this cycle of short-term economic gain at the expense of gradual environmental destruction of both accessible water quality and coral growth (which has the added benefit of helping protect coastal integrity from tropical storms (United States Agency for International Development)), it must be demonstrated to local farmers that their current milkfish monoculture does more harm than good, and that an alternative polyculture offers not only better long-term environmental stability for the Bolinao region (through improved water quality and coral growth) but also a better economic situation for the local population; one that will allow them to feed their children and offer both current and future generations of Filipinos a rising standard of nutrition and quality of life.

Our solution involved a series of models to explain the past system, the current system, and what a transition in aquaculture practices could create if a future system is adopted.
It then focused on explaining the economic value of the current system and comparing it to the better potential economic value of a polyculture system based on harvesting a variety of species, as opposed to the current monoculture focused on harvesting only milkfish.

Our solution offers the double benefit of a more sustainable ecosystem, reducing bacteria and chlorophyll to more acceptable levels while allowing greater coral growth, and an economic benefit to the local population through the better prices received for the variety of species harvestable from a polyculture. At first glance, a milkfish monoculture seems to be the kind of specialization that offers the highest economic profit for fish farmers by reducing the unit cost of each fish. However, this view ignores the long-term sustainability cost of the waste byproduct of a milkfish monoculture. It also ignores the greater profit obtainable by introducing and harvesting other species that naturally reduce the economic cost of raising milkfish by reducing the effect of their waste and creating more space to grow additional fish. These other fish eat the byproducts of the milkfish, providing more harvestable biomass per unit of effort. Finally, a variety of population ratios in a polyculture produce the same level of water quality, but they do not all yield the same economic profit for the farmer, because certain fish offer a better profit than others and can be raised in larger quantities in some scenarios than in others. While our model was unable to demonstrate multiple scenarios that provide greater economic profit than the current system, such scenarios do exist and could be demonstrated by our model given a larger data set.

We developed simulations to determine how water quality varies with different quantities of the various species, and we accounted for the harvesting rates required to make these polycultures attainable. By applying the population quantities of polyculture combinations that achieved the appropriate water quality levels to a formula that produces a profit value for each combination, we could determine which polyculture would provide the most profit to the fish farmer at the desired level of water quality; the level that allows successful coral growth and long-term sustainability of the polyculture. Furthermore, this increased profit could then be used in global trade to create a wider variety of diet than would otherwise be available.

Ultimately, our model could be improved with a more complete data set, bringing the ratios describing the relationships between the species closer to what is observable in nature. A more complete data set would not fundamentally change any of the relationships between the variables in the models we developed. Additionally, accounting more directly for the human population in the model, and adjusting the harvesting rate of the milkfish accordingly, would provide more accurate results than extrapolating what the human population should harvest periodically to bring the ecosystem back into balance. Including the human population in the milkfish growth model is necessary because in our ecosystem humans are the only predator of the milkfish, making them a requirement for equilibrium to be achieved. Finally, our model of the economic benefits for the people of Bolinao would be more accurate if we had been able to obtain more complete and recent pricing data for the market values and the input costs of introducing the other species into the commercial fish farms alongside the milkfish.
Despite the shortcomings of our models, we were still able to show the economic and environmental benefit of transitioning the region from a milkfish monoculture to a biologically diverse polyculture. One of the biggest contributors to changing this system and reducing bacterial waste was the growth of the blue mussel. Through this process, our models should convince the people of the Bolinao region of the Philippines to transition from the current monoculture of raising and harvesting only milkfish to a polyculture in which they raise and harvest a wider variety of species, obtaining the maximum sustainable yield from the ecosystem. With this optimal combination, implemented through better farming practices and the introduction of other species of aquatic life, it is possible to achieve a better result for both the environmental and the economic quality of life of the Bolinao people over both the short and long term.

# Bibliography

Alcala, Ashley, Edgardo Gomez, Garry Russ, and Alan White. 30 June 2006. Accessed 7 February 2009.
Anchorage World. 2 January 2009. Accessed 7 February 2009.
Aquaculture Consultancy and Engineering. Accessed 7 February 2009.
Artificial Reef Pictures. Accessed 7 February 2009.
Bray, W.A. Ingenta Connect. 1 January 2008. Accessed 7 February 2009.
Bray, W.A. Texas A&M University. Accessed 7 February 2009.
Burford, Michele. CSIRO Marine Research. Accessed 7 February 2009.
Environmental Protection Agency. Accessed 7 February 2009.
FishBase. 10 January 2008. Accessed 7 February 2009.
FishBase. 15 January 2009. Accessed 7 February 2009.
Fisheries Improved for a Sustainable Harvest. USAID. Accessed 7 February 2009.
Food and Agriculture Organization. Fisheries and Aquaculture Department, United Nations. Accessed 7 February 2009.
Green Biz. Business Week. 17 November 2008.
Locations of Artificial Reefs and Wrecks. Accessed 7 February 2009.
Milkfish, Chanos chanos. Accessed 7 February 2009.
Morton, Steven. Modern Use of Cultivated Algae. 26 October 1998. Accessed 7 February 2009.
NIMPIS. 2002. Asterias amurensis species summary. National Introduced Marine Pest Information System (Eds: Hewitt C.L., Martin R.B., Sliwa C., McEnnulty F.R., Murphy N.E., Jones T., and Cooper S.). Accessed 7 February 2009.
NIMPIS. 2002. Asterias amurensis reproduction and life cycle. National Introduced Marine Pest Information System (Eds: Hewitt C.L., Martin R.B., Sliwa C., McEnnulty F.R., Murphy N.E., Jones T., and Cooper S.). Accessed 7 February 2009.
Precht, William. Coral Reef Restoration Handbook. Accessed 7 February 2009.
Roth, Ariel. Accessed 7 February 2009.
Southeast Asian Fisheries Development Center, Aquaculture Department. Accessed 7 February 2009.
SpringerLink. 3 November 2004. Accessed 7 February 2009.
The Common or Blue Mussel (Mytilus edulis). Accessed 7 February 2009.
United Kingdom Marine SACs Project. Accessed 7 February 2009.
USAID. Accessed 7 February 2009.
Van Dan, Le. 21 July 2008. Accessed 7 February 2009.
Vennevig, Nielss. SINTEF Fisheries and Aquaculture. Accessed 7 February 2009.
# Appendix A: Polyculture Population Model of Acceptable Ratios for Water Quality

![](images/19786b94f2f1af7e2cff6dd6095a48af39fed84298f199b4b802ca499e2a01a7.jpg)

# Appendix B: Equations for Calculation of Levels of Contaminants Affecting Water Quality

Level of dissolved carbon: $C_{d} = Y_{m}P_{m} + Y_{t}P_{t} - Y_{l}P_{l}$

Level of dissolved nitrogen: $N_{d} = Y_{m}P_{m} + Y_{r}P_{r} - Y_{l}P_{l} - Y_{a}P_{a}$

Level of chlorophyll: $Chl = Y_{a}P_{a}$

Level of particulate nitrogen: $N_{p} = Y_{m}P_{m} + Y_{t}P_{t} - Y_{a}P_{a}$

Level of particulate carbon: $C_p = Y_m P_m + Y_r P_r + Y_s P_s - Y_l P_l$

# Appendix C: Policy Recommendations to the Pacific Marine Fisheries Council

To the Pacific Marine Fisheries Council:

Increasing the biodiversity of the aquaculture in the Bolinao region of the Philippines will result in a more environmentally sustainable and economically desirable quality of life for the Filipino people. People in the region, intending to provide their families with a better quality of life and a more nutritious diet, harvested more and more fish. This overfishing depleted the wild milkfish: More effort was required to find fewer fish, and techniques such as using dynamite and sodium cyanide to kill or stun the fish destroyed other species in the ecosystem as well, allowing algae to grow uncontrollably and reducing the nutrients available for coral growth. The desire for cheap animal protein led to the solution of commercial fish farming and harvesting of milkfish. The imbalance of a monoculture dominated by one species resulted in stagnant fish waste and excess algae, which degraded the water quality and choked off the nutrients that coral needed to grow. Commercial fish harvesting seems to be a natural fact of life, given the increased demand for animal protein.
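The contaminant levels in Appendix B are plain linear combinations of species populations $P_i$ weighted by yield coefficients $Y_i$, so a candidate stocking plan can be screened in a few lines of code. The sketch below is only a minimal illustration of those five equations; the numeric coefficient and population values are made-up placeholders, not values from our model (m and a denote milkfish and algae, l the blue mussel, and t, r, s the remaining species subscripts).

```python
# Illustration of the Appendix B water-quality equations.
# Each term Y[i] * P[i] mirrors a subscripted term in the equations.
# All numeric values below are made-up placeholders, not model output.

def contaminant_levels(P, Y):
    """Evaluate the five Appendix B contaminant levels.

    P, Y: dicts keyed by the species subscripts used in the equations
    (m, t, r, a, l, s).
    """
    return {
        "dissolved_carbon":     Y["m"]*P["m"] + Y["t"]*P["t"] - Y["l"]*P["l"],
        "dissolved_nitrogen":   Y["m"]*P["m"] + Y["r"]*P["r"] - Y["l"]*P["l"] - Y["a"]*P["a"],
        "chlorophyll":          Y["a"]*P["a"],
        "particulate_nitrogen": Y["m"]*P["m"] + Y["t"]*P["t"] - Y["a"]*P["a"],
        "particulate_carbon":   Y["m"]*P["m"] + Y["r"]*P["r"] + Y["s"]*P["s"] - Y["l"]*P["l"],
    }

# Placeholder stocking plan and per-capita yield coefficients:
P = {"m": 1000, "t": 200, "r": 150, "a": 500, "l": 400, "s": 100}
Y = {"m": 0.5, "t": 0.3, "r": 0.2, "a": 0.1, "l": 0.4, "s": 0.25}
levels = contaminant_levels(P, Y)
```

A plan can then be screened by comparing each level against an acceptable water-quality threshold, for example adjusting the mussel population $P_l$ until the dissolved and particulate carbon levels fall within limits.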
However, a multi-faceted approach can still allow commercial fish farming to continue in a manner that is both environmentally sustainable and economically productive over both the short and the long term.

To achieve this combination of improved water quality, increased coral growth, and improved economic opportunity for the people of the Bolinao region, we propose the following changes. All of them depend on transitioning from the current monoculture of harvesting only milkfish to a more biologically diverse aquaculture that raises and harvests a variety of complementary species.

First, it is important to increase the spacing between fish farms, so that fish waste does not stagnate in confined areas and water can flow more freely. Second, a fishing license system should be implemented, with the fees used to pay the cost of enforcing laws that govern the harvesting of certain types of fish at rates that meet local demand for animal protein while preserving the sustainable reproduction of all species in the polyculture. Third, educational efforts at the local level are needed to help people understand the value of good practices in coastal habitat management. The local population is the primary stakeholder in the long-term sustainability of its environment, and it should better understand the danger that practices such as dynamite and cyanide fishing pose to the stability of its own area.
Fourth, just as agricultural farmers practice crop rotation to allow nitrogen and other nutrients to return to the soil, aquaculture farmers ought to be instructed in how to rotate the locations of their fish cages and pens away from coastal reefs, so that those reefs can grow and wild species of fish can repopulate those areas while the existing fish farms still meet the demand for fish. This rotation will also reduce the water-quality problems caused by stagnant waste in one area. Fifth, creating a more biologically diverse aquaculture will reduce some of this waste, because species such as blue mussels and algae consume the leftover excrement (Common or Blue Mussel). Additional waste can be removed and recycled as fertilizer on local land-based farms.

Sixth, allowing the coral reef to recover, through seeding and the elimination of excess algae in the area, will help attract tourism; the improved water quality will make it genuinely desirable for people to visit the reef. Seventh, where existing coral reef is not present, old steel ships, armored military vehicles, subway cars, and aircraft can be seeded with live coral to promote the growth of artificial coral reef (Reefs and Wrecks). While not as biologically productive as natural coral reef, such artificial habitat can supplement the growth of a biologically diverse aquaculture and reduce the overall need for commercial fish farms, as well as the fish waste associated with them (Artificial Reef).

Finally, certain technological innovations offer promise for alternative practices in commercial fish harvesting and alternative uses of its biomass products. Some companies are currently developing self-propelled fish cages, powered by alternative energy (solar and wave turbines), that could push farther out to sea, reducing the damage caused by the stagnation of waste that results from reduced water circulation near shore (Green Biz).
Such cages would still require periodic refills of fishmeal but would continue to offer the benefit of easily harvestable fish while allowing the fish to grow in a more natural environment. Another promising technological innovation, although too expensive to implement now, is the use of radio-frequency identification pills. These pills could be given to fish, allowing them to grow out in the wild; the fish could then be harvested periodically with large scoops, with those that have reached maturity kept and those that still need time to grow tossed back. Finally, the demand for alternatives to crude-oil-based fuels offers promise for innovations that will bring down the cost of algae as a biodiesel alternative fuel.

The combination of these changes in commercial fish-harvesting practices, together with the eventual implementation of the technological innovations described above, provides an effective foundation for resolving all three aspects of the problem facing the environment and the people of the Bolinao region. Implementing these best practices, along with reintroducing the appropriate ratios of certain species, promises to create a sustainable, biologically diverse aquaculture that will improve water quality, increase coral growth, and provide greater economic profit than the current milkfish-based monoculture. We believe that your organization is best equipped to address these concerns and to recommend these practices to the Filipino government and to the local organizations in the Bolinao region that can most effectively implement them. The right to good nutrition is something that all Filipinos deserve. You, the people of the Pacific Marine Fisheries Council, have the ability to help make that goal a reality over both the short and the long term. Now is the time to act.
\ No newline at end of file diff --git a/MCM/2014/2014MCM&ICM/2014MCM&ICM.md b/MCM/2014/2014MCM&ICM/2014MCM&ICM.md new file mode 100644 index 0000000000000000000000000000000000000000..573eb5050092b3d96baed67c09e9d4c2e3097592 --- /dev/null +++ b/MCM/2014/2014MCM&ICM/2014MCM&ICM.md @@ -0,0 +1,4199 @@ +# The UMAP Journal

Vol. 35, Nos. 2-3

Publisher

COMAP, Inc.

Executive Publisher

Solomon A. Garfunkel

ILAP Editor

Chris Arney

Dept. of Math'l Sciences

U.S. Military Academy

West Point, NY 10996

david.arney@usma.edu

On Jargon Editor

Yves Nievergelt

Dept. of Mathematics

Eastern Washington Univ.

Cheney, WA 99004

ynievergelt@ewu.edu

Reviews Editor

James M. Cargal

Mathematics Dept.

Troy University—Montgomery Campus

231 Montgomery St.

Montgomery, AL 36104

jmcargal@gmail.com

Chief Operating Officer

Laurie W. Aragón

Production Manager

George Ward

Copy Editor

Julia Collins

Distribution

John Tomicek

# Editor

Paul J. Campbell

Beloit College

700 College St.

Beloit, WI 53511-5595

campbell@beloit.edu

# Associate Editors

Don Adolphson

Aaron Archer

Chris Arney

Ron Barnes

Arthur Benjamin

Robert Bosch

James M. Cargal

Murray K. Clayton

Lisette De Pillis

James P. Fink

Solomon A. Garfunkel

William B. Gearhart

William C. Giauque

Richard Haberman

Jon Jacobsen

Walter Meyer

Yves Nievergelt

Michael O'Leary

Catherine A. Roberts

John S. Robertson

Philip D. Straffin

J.T. Sutcliffe

Brigham Young Univ.

Google Research

U.S. Military Academy

U. of Houston-Downtown

Harvey Mudd College

Oberlin College

Troy U.-Montgomery

U. of Wisc.—Madison

Harvey Mudd College

Gettysburg College

COMAP, Inc.

Calif. State U., Fullerton

Brigham Young Univ.

Southern Methodist U.

Harvey Mudd College

Adelphi University

Eastern Washington U.
+ +Towson University + +College of the Holy Cross + +Georgia Military College + +Beloit College + +St. Mark's School, Dallas + +# Subscription Rates for 2014 Calendar Year: Volume 35 + +# Institutional Web Membership (Web Only) + +Institutional Web Memberships do not provide print materials. Web memberships allow members to search our online catalog, download COMAP print materials, and reproduce them for classroom use. + +(Domestic) #3430 $490 (Outside U.S.) #3430 $490 + +# Institutional Membership (Print Only) + +Institutional Memberships receive print copies of The UMAP Journal quarterly, our annual CD collection UMAP Modules, Tools for Teaching, and our organizational newsletter Consortium. + +(Domestic) #3440 $328 (Outside U.S.) #3441 $369 + +# Institutional Plus Membership (Print Plus Web) + +Institutional Plus Memberships receive print copies of the quarterly issues of The UMAP Journal, our annual CD collection UMAP Modules, Tools for Teaching, our organizational newsletter Consortium, and online membership that allows members to search our online catalog, download COMAP print materials, and reproduce them for classroom use. + +(Domestic) #3470 $818 (Outside U.S.) #3471 $859 + +For individual membership options visit www.comap.com for more information. + +To order, send a check or money order to COMAP, or call toll-free 1-800-77-COMAP (1-800-772-6627). + +The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). 
The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). + +Periodical rate postage paid at Bedford, MA and at additional mailing offices. + +Send address changes to: info@comap.com + +COMAP, Inc., Suite 3B, 175 Middlesex Tpke., Bedford, MA, 01730 + +© Copyright 2014 by COMAP, Inc. All rights reserved. + +Mathematical Contest in Modeling (MCM), High School Mathematical Contest in Modeling (HiMCM), and Interdisciplinary Contest in Modeling (ICM) are registered trade marks of COMAP, Inc. + +# Vol. 35, Nos. 2-3 2014 + +# Table of Contents + +# Publisher's Editorial + +Stirring the Pot—The Common Core and All That Sol Garfunkel 93 + +# Editor's Note + +About This Issue 96 + +# MCM Modeling Forum + +Results of the 2014 Mathematical Contest in Modeling William P. Fox 97 +Keep Right to Keep "Right" +Yaofeng Zhong, Yunyi Zhang, and Xiao Zhao 111 +Judges' Commentary: The Keep Right Papers +Kelly Black 139 +Author's Commentary: The Keep Right Papers Michael Tortorella 149 +Judge's Commentary: The Ben Fusaro Award Jerrold R. Griggs 153 +Evaluation System for College Coaching Legends Feng Xiong, Wenchao Ding, and Jingling Li. 157 +Judges' Commentary: The Coach Papers Robert Burks 181 +Author's Commentary: The Coach Papers William P. Fox 189 +Judges' Commentary: The Frank Giordano Award for 2014 Marie Vanisko 195 +Our Story with the MCM +Libin Wen, Jingyuan Wu, and Cong Wang. 201 +First Experience with Modeling Matthew Marner, Princep Shah, Dobromir Yordanov, and Amanda Beecher 209 +Model Students Robert Emro. 215 + +# ICM Modeling Forum + +Results of the 2014 Interdisciplinary Contest in Modeling Chris Arney 219 + +Who Are the $20\%$ ? 
Chen Wang, Mi Gong, and Zhen Li 229 + +Judges' Commentary: Measuring Network Influence and Impact Chris Arney, Kathryn Coronges, and Tina Hartley 249 + +Developing an Interdisciplinary Mindset with Students Through the ICM Heidi Berger and Rick Spellerberg 271 + +# Publisher's Editorial + +# Stirring The Pot— + +# The Common Core and All That + +Solomon A. Garfunkel + +Executive Director + +COMAP, Inc. + +175 Middlesex Turnpike, Suite 3B + +Bedford, MA 01730-1459 + +s.garfunkel@comap.com + +# Introduction + +When I first started to think seriously about reform in mathematics education, some 45 years ago, I asked advice from my then Mathematics Dept. Chair at Cornell, Alex Rosenberg. I remember him telling me that he believed that every 20 years or so it is necessary to "stir the pot" so that we are forced to re-think how we teach and re-imagine how children learn. + +Well, 40 years ago, I began to work with applications and modeling; 20 years ago, I was immersed in creating materials that exemplified the NCTM standards; and today, we are all dealing with implementation of the Common Core. The pot sure as hell is getting stirred. + +But there is so much noise that it's hard to discern what's going on and what it all means. So here's my take. + +# The NCTM Standards and Reactions + +In 1989, I believe that NCTM basically got it right. The standards that they wrote emphasized the need to improve the mathematics education of all students. There was no laundry list of topics to be mastered by a certain age, but rather some key notions of what some have called "mathematical habits of mind." The National Science Foundation (NSF) quickly funded a number of projects designed to produce "standards-based" curricula at the elementary, middle, and high school levels. It is interesting to note that the progeny of those curricula hold roughly $50\%$ of the elementary market, $25\%$ of the middle school market and $5\%$ of the secondary market. + +So what went wrong? 
Enter the "math wars." Basically, a small—but extremely vocal—group of research mathematicians, right-wing politicians, the wealthy, and religious fundamentalists got scared, for a number of different reasons. + +- The mathematicians were afraid that in creating curricula designed to serve all students, we would short-change the best and the brightest, potentially losing future Ph.D. candidates. +- For the political right, this smacked of federal interference in local control of education, i.e., a federally-mandated national curriculum. +- For the wealthy, we were potentially changing the rules in a game that they and their children were already winning—so why take the risk? +- For the fundamentalists, the emphasis on applications and modeling left the mathematics classroom open to influencing the hearts and minds of their children in a way that solving quadratic equations didn't. + +And so words like "fuzzy math" were thrown around to make the new curricula look silly and non-rigorous, and members of this alliance turned out at every local and statewide school board meeting—and won. + +There were many consequences of this turmoil. NCTM rewrote their standards in 2000 hoping in vain to placate their critics. NSF redefined its mission as one primarily—if not solely—devoted to research, backing away from 50 years of curriculum reform and implementation. But without the NCTM standards to provide direction, we were left without guidance on how to proceed with reform of math education. + +# NCLB and High-Stakes Testing + +Enter No Child Left Behind (NCLB) and the mandated state-wide high-stakes tests from grade 3 onward in mathematics and language arts. Effectively, NCLB created 50 different sets of standards and the tests to assess them. As you can imagine, the tests are widely disparate in quality, and a satisfactory performance in one state may be woefully inadequate in another. 
But NCLB says nothing about what should be on the tests, how hard the tests should be, or what should be used as cutoff scores. Those are left up to the individual states to determine. And to be fair, it's a mess.

# CCSSM: Higher Expectations—and Risks

Enter the Common Core State Standards for Mathematics (CCSSM). The logic of the Common Core rationale is simple and admittedly compelling: We have a standards/testing Tower of Babel; it makes no sense. We need one common set of standards to teach to and to hold students (and teachers) accountable.

The devil, however, is in the details. The current version of the CCSSM is, I believe, destined to disadvantage the already-disadvantaged. For the most part, it does obeisance to those who worry most about where our next generation of STEM researchers and workers will come from. In the name of raising expectations, it runs the risk of being irrelevant to the vast majority of students taking math today. The promise of the NCTM standards was "math for all." CCSSM threatens to deliver math for the few.

Ostensibly, the Common Core is a creature of the states; but its standards were certainly created in concert with administration education policy and strongly influenced by large private foundations, as well as publishers and test-makers.

And these standards, too, have their set opponents:

- educators who worry that the topic lists grade by grade are simply designed as an escalator to calculus;
- teachers and teacher unions who see the coming assessment of teacher performance—based in part on student performance—as an attack on tenure and a weapon to be used against teachers;
- the same right-wing politicians who fear federal control of education; and
- those who fear the undue influence of the aforementioned large corporate interests (publishers, test-makers), as well as that of private and public foundations.
+ +All of this is confounded by the fact that, to the general public, the Common Core is synonymous with the high-stakes tests being designed to assess them. These tests will roll out this year and next. The results will likely be horrific, with predictions of failure rates in excess of $70\%$ . This fact, the general cost of the assessments, and the factors mentioned above put the broader issue of acceptance of the CCSSM at great risk. + +# We Are Not in a Race + +In some ways, this is a shame. There is promise here. CCSSM needs to be viewed as a living document, one that can be adapted as needed to serve the broad student population. If we can back away from our obsession with high-stakes testing, then we could use common standards to produce common assessments which we could then use to diagnose student needs in order to improve learning—not to punish students, teachers, or schools. + +Forty years ago, we feared the Russians, 20 years ago the Japanese, and today the Chinese. Mathematics education is not a horse race. It is not about ensuring that our best are better than their best. It is about ensuring that every student has the opportunity to learn as much mathematics—and as much of how they can use that mathematics—as possible. That is a pot worth stirring. + +# About the Author + +Solomon Garfunkel is the founder and Executive Director of COMAP and Executive Publisher of this Journal. + +He served on the mathematics faculties of Cornell University and the University of Connecticut at Storrs, but he has dedicated the last 35 years to research and development efforts in mathematics education. He was project director for the Undergraduate Mathematics and Its Applications (UMAP) and the High School Mathematics and Its Applications (HiMAP) Projects funded by NSF, and directed three telecourse projects, including Against All Odds: Inside Statistics and In Simplest Terms: College Algebra, for the Annenberg/CPB Project. 
He has been the Executive Director of COMAP, Inc. since its inception in 1980. + +Dr. Garfunkel was the project director and host for the video series *For All Practical Purposes: Introduction to Contemporary Mathematics*. He was the Co-Principal Investigator on the ARISE Project, and Co-Principal Investigator of the CourseMap, ResourceMap, and WorkMap projects. In 2003, Dr. Garfunkel was Chair of the National Academy of Sciences and Mathematical Sciences Education Board Committee on the Preparation of High School Teachers. + +# Editor's Note About This Issue + +This year we had almost 8,000 teams in the MCM and ICM contests combined; the 19 Outstanding papers ran to more than 500 manuscript pages. Editing and publishing all the Outstanding papers, which we once did, is simply not possible any more. + +Hence, as in the past few years, we present in the pages of this Journal only one Outstanding paper for each of the MCM and ICM problems. The selection of which papers to publish reflected editorial considerations and was done blind to the affiliations of the teams. + +All 19 Outstanding papers appear in their original form on the 2014 MCM-ICM CD-ROM, which also has the press releases for the two contests, the results, the problems, and some commentaries. Information about ordering is at http://www.comap.com/product/cdrom/index.html or at (800) 772-6627. + +# MCM Modeling Forum + +# Results of the 2014 + +# Mathematical Contest in Modeling + +William P. Fox, MCM Director + +Dept. of Defense Analysis + +Naval Postgraduate School + +1 University Circle + +Monterey, CA 93943-5000 + +wpfox@nps.edu + +# Introduction + +A total of 6,755 teams of undergraduates from hundreds of institutions and departments in 18 countries spent a weekend in February working on applied mathematics problems in the 30th Mathematical Contest in Modeling (MCM) $^{\text{®}}$ . + +The 2014 MCM began at 8:00 P.M. EST on Thursday, February 6, and ended at 8:00 P.M. EST on Monday, February 10. 
During that time, teams of up to three undergraduates researched, modeled, and submitted a solution to one of two open-ended modeling problems. Students registered, obtained contest materials, downloaded the problems and data, and entered completion data through COMAP's MCM Website. After a weekend of hard work, solution papers were sent to COMAP on Monday. Two of the top papers appear in this issue of The UMAP Journal, together with commentaries. + +In addition to this special issue of The UMAP Journal, COMAP offers a supplementary 2014 MCM-ICM CD-ROM containing the press releases for the two contests, the results, the problems, unabridged versions of the Outstanding papers, and judges' commentaries. Information about ordering is at + +http://www.comap.com/product/?idx=1418 + +or at (800) 772-6627. + +Results and winning papers from the first 29 contests were published in special issues of Mathematical Modeling (1985-1987) and The UMAP Journal (1985-2013). The 1994 volume of Tools for Teaching, commemorating the tenth anniversary of the contest, contains the 20 problems used in the first 10 years of the contest and an Outstanding paper for each year. That volume and the special MCM issues of the Journal for the last few years are available from COMAP. The 1994 volume is also available on COMAP's special Modeling Resource CD-ROM. Also available is The MCM at 21 CD-ROM, which contains the 20 problems from the second 10 years of the contest, an Outstanding paper from each year, and advice from advisors of Outstanding teams. These CD-ROMs can be ordered from COMAP at + +http://www.comap.com/product/cdrom/index.html. + +This year, the two MCM problems represented significant challenges: + +- Problem A, "The Keep-Right-Except-To-Pass-Rule," asked teams to build and analyze a mathematical model to analyze the performance of this rule in light and in heavy traffic. Is this rule effective in promoting greater throughput? 
If not, teams were to suggest and analyze alternatives that might promote greater throughput, safety, and/or other factors that they deemed important.
- Problem B, "College Coaching Legends," asked teams to build a mathematical model to identify the "best all-time college coach," male or female, in any sport, over the past century. Teams were to clearly articulate their metrics for assessment, and to discuss how their model can be applied across both genders and across all sports. Teams also had to prepare a 1-2-page article intended for Sports Illustrated explaining their reasoning and results, including a nontechnical explanation of their model that sports fans will understand.

COMAP also sponsors:

- The MCM/ICM Media Contest (see p. 109).
- The Interdisciplinary Contest in Modeling (ICM) $^{\text{®}}$ , which runs concurrently with the MCM and next year again will offer a modeling problem involving network science, together with the choice of a second problem involving human-environment interactors. Results of this year's ICM are on the COMAP Website at

http://www.comap.com/undergraduate/contest.

The contest report, an Outstanding paper, and commentaries appear in this issue.

- The High School Mathematical Contest in Modeling (HiMCM) $^{\text{®}}$ , which offers high school students a modeling opportunity similar to the MCM. Further details are at

http://www.comap.com/highschool/contest.

# 2014 MCM Statistics

- 6,755 teams participated (with 1,028 more in the ICM)
- 12 high school teams (0.2%)
- 391 U.S.
teams (6%)
- 6,364 foreign teams (93%), from Canada, China, Finland, Hong Kong, India, Indonesia, Japan, Macao, Mexico, New Zealand, Scotland, Singapore, South Africa, South Korea, Spain, Sweden, and the United Kingdom
- 13 Outstanding Winners (0.2%)
- 12 Finalist Winners (0.2%)
- 656 Meritorious Winners (9%)
- 2,168 Honorable Mentions (31%)
- 3,891 Successful Participants (57%)

# Problem A: The Keep-Right-Except-To-Pass Rule

In countries where driving automobiles on the right is the rule (that is, the U.S.A., China, and most other countries except for Great Britain, Australia, and some former British colonies), multi-lane freeways often employ a rule that requires drivers to drive in the right-most lane unless they are passing another vehicle, in which case they move one lane to the left, pass, and return to their former travel lane.

Build and analyze a mathematical model to analyze the performance of this rule in light and heavy traffic. You may wish to examine tradeoffs between traffic flow and safety, the role of under- or over-posted speed limits (that is, speed limits that are too low or too high), and/or other factors that may not be explicitly called out in this problem statement. Is this rule effective in promoting better traffic flow? If not, suggest and analyze alternatives (to include possibly no rule of this kind at all) that might promote greater traffic flow, safety, and/or other factors that you deem important.

In countries where driving automobiles on the left is the norm, argue whether or not your solution can be carried over with a simple change of orientation, or would additional requirements be needed.

Lastly, the rule as stated above relies upon human judgment for compliance.
If vehicle transportation on the same roadway was fully under the control of an intelligent system—either part of the road network or imbedded in the design of all vehicles using the roadway—to what extent would this change the results of your earlier analysis?

# Problem B: College Coaching Legends

Sports Illustrated, a magazine for sports enthusiasts, is looking for the "best all-time college coach," male or female, for the previous century. Build a mathematical model to choose the best college coach or coaches (past or present) from among either male or female coaches in such sports as college hockey or field hockey, football, baseball or softball, basketball, or soccer.

Does it make a difference which time line horizon that you use in your analysis, that is, does coaching in 1913 differ from coaching in 2013?

Clearly articulate your metrics for assessment. Discuss how your model can be applied in general across both genders and all possible sports. Present your model's top 5 coaches in each of 3 different sports.

In addition to the MCM format and requirements, prepare a 1-2-page article for Sports Illustrated that explains your results and includes a nontechnical explanation of your mathematical model that sports fans will understand.

# The Results

The solution papers were coded at COMAP headquarters so that names and affiliations of the authors would be unknown to the judges. Each paper was then read preliminarily by two "triage" judges at Appalachian State University (Keep Right Problem) or at Carroll College (Coach Problem), or by a panel in China. At the triage stage, the summary and overall organization are the basis for judging a paper. If the judges' scores diverged for a paper, the judges conferred; if they still did not agree, a third judge evaluated the paper.

Additional Regional Judging sites were created at the U.S.
Military Academy and at the Naval Postgraduate School, to support the growing number of contest submissions.

Final judging took place at the Naval Postgraduate School, Monterey, CA. The judges classified the papers as follows:

|  | Outstanding | Finalist | Meritorious | Honorable Mention | Successful Participation | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Keep Right Problem | 6 | 6 | 453 | 957 | 2,453 | 3,885 |
| Coach Problem | 7 | 6 | 203 | 1,211 | 1,439 | 2,871 |
| Total | 13 | 12 | 656 | 2,168 | 3,892 | 6,756 |
+ +We list here the 13 teams that the judges designated as Outstanding; the list of all participating schools, advisors, and results is at the COMAP Website. + +# Outstanding Teams + +# Institution and Advisor + +# Team Members + +# Keep Right Problem + +"Keep Right to Keep 'Right' + +Tsinghua University + +Beijing, China + +Zhiming Hu + +Yaofeng Zhong + +Yunyi Zhang + +Xiao Zhao + +"Rules of the Road" + +Tufts University + +Medford, MA + +Scott MacLachlan + +Michael Bird + +Kathleen Cachel + +Charlie Colley + +"Freeway Traffic Model Based on Cellular Automata and Monte-Carlo Method" + +Shanghai Jiaotong University + +Shanghai, China + +Jinliang Yue + +Dongyu Jia + +Zhaoyang Shi + +Yanping Xie + +"Simulating and Scoring the Performance of Traffic Driving Rules" + +Beijing Normal University + +Beijing, China + +Haigang Li + +Yihan Sun + +Xiang Xu + +Junwei Zhang + +"A New Traffic Rule" + +Nanjing University + +Nanjing, China + +Meilin Zhu + +Luowei Zho + +Xiuyu Wang + +Wanjia Zhu + +"The Keep-Right-Except-to-Pass Rule" + +Zhejiang University + +Hangzhou, China + +Jianxin Zhu + +Yuan Gong + +Shu Liu + +Yandi Shen + +# Coach Problem + +"Grey Correlation and Fuzzy Models for Best Coach" + +Chongqing University + +Chongqing, China + +Xiaobing Hu + +Yue Wang + +Bo Hou + +Qiang Zhang + +"Finding Out the Best All-Time College Coach" + +Southwest University for Nationalities + +Chengdu, China + +Gaoping Li + +Yiping Liu + +Yongyi Xie + +Yao Zhang + +"Who Is the Centennial Best Coach?" 
+ +Southeast University + +Nanjing, China + +Zhizhong Sun + +Yatao Fu + +Yuan Dong + +Yuyang Wang + +"College Coaches' Mount Rushmore" + +Northeastern University + +Shenyang, China + +Dali Chen + +Yantao Shen + +Bingzhu Xie + +Yingyi Ma + +"An Evaluation Model of College Coaches" + +University of International Business and Economics + +Beijing, China + +Shuyu Zhang + +Zhuoyi Chen + +Mengru Wang + +Jie Hang + +"A Networks and Machine Learning Approach to Determine the Best College Coaches of the 20th-21st Centuries" + +NC School of Science and Mathematics + +Durham, NC + +Christine Belledin + +Christopher Qian Yuan + +Tian-Shun Allan Jiang + +Zachary T. Polizzi + +"Evaluation System for College Coaching Legends" + +Huazhong Univ. of Science and Technology + +Wuhan, China + +Zhibin Han + +Feng Xiong + +Wenchao Ding + +Jingling Li + +# Awards and Contributions + +Each participating MCM advisor and team member received a certificate signed by the Contest Director and the appropriate Head Judge. + +INFORMS, the Institute for Operations Research and the Management Sciences, recognized as INFORMS Outstanding teams two teams: the teams from Tsinghua University (Keep Right Problem) and from NC School of Science and Mathematics (Coach Problem) and provided the following recognition: + +- a letter of congratulations from the current president of INFORMS to each team member and to the faculty advisor; +- a check in the amount of $300 to each team member; +- a bronze plaque for display at the team's institution, commemorating team members' achievement; +- individual certificates for team members and faculty advisor as a personal commemoration of this achievement; and +- a one-year student membership in INFORMS for each team member, which includes their choice of a professional journal plus the OR/MS Today periodical and the INFORMS newsletter. + +The Society for Industrial and Applied Mathematics (SIAM) designated one Outstanding team from each problem as a SIAM Winner. 
The SIAM Award teams were from Zhejiang University (Keep Right Problem) and Southwest University for Nationalities (Coach Problem). Each team member was awarded a $300 cash prize. The teams were offered partial expenses to present their results in a special Minisymposium at the SIAM Annual Meeting in Chicago, IL in July, and the team from Southwest University for Nationalities was able to come. Their schools were given framed hand-lettered certificates in gold leaf. + +The Mathematical Association of America (MAA) designated one North American team from each problem as an MAA Winner. The MAA Winners were from Tufts University (Keep Right Problem) and the NC School of Science and Mathematics (Coach Problem). With partial travel support from the MAA, the teams presented their solutions at a special session of the MAA Mathfest in Portland, OR in August. Each team member was presented a certificate by an official of the MAA Committee on Undergraduate Student Activities and Chapters. + +# Ben Fusaro Award + +One Meritorious, Finalist, or Outstanding paper is selected for the Ben Fusaro Award, named for the Founding Director of the MCM and awarded + +for the 11th time this year. It recognizes an especially creative approach; details concerning the award, its judging, and Ben Fusaro are in Vol. 25 (3) (2004): 195-196. The Ben Fusaro Award Winner was the Outstanding team from Tsinghua University (Keep Right Problem). A commentary about it appears in this issue. + +# Frank Giordano Award + +For the third time, the MCM is designating a paper with the Frank Giordano Award. This award goes to a paper that demonstrates a very good example of the modeling process in a problem featuring discrete mathematics—this year, the Coach Problem. Having worked on the contest since its inception, Frank Giordano served as Contest Director for 20 years. The Frank Giordano Award for 2014 went to the Outstanding team from Huazhong University of Science and Technology. 
A commentary about it appears in this issue. + +# Judging + +Director + +William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA + +Associate Director + +Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY + +# Keep Right Problem + +Head Judge + +Patrick J. Driscoll, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY + +Associate Judges + +William C. Bauldry, Chair-Emeritus, Dept. of Mathematical Sciences, Appalachian State University, Boone, NC (Head Triage Judge) + +Kelly Black, Mathematics Dept., Clarkson University, Potsdam, NY + +Karen Bolinger, Dept. of Mathematics, Clarion University, Clarion, PA + +Tim Elkins, Dept. of Systems Engineering, U.S. Military Academy, West Point, NY + +Thomas Fitzkee, Mathematics Dept., Francis Marion University, Florence, SC + +Ben Fusaro, Dept. of Mathematics, Florida State University, Tallahassee, FL (SIAM Judge) + +Jerry Griggs, Mathematics Dept., University of South Carolina, Columbia, SC + +Marvin Keener, Mathematics Dept., Oklahoma State University, Stillwater, OK + +Yongji Tan, Dept. of Mathematics, Fudan University, Shanghai, China + +Michael Tortorella, Dept. of Industrial and Systems Engineering, + +Rutgers University, Piscataway, NJ + +# Regional Judging Session at the U.S. Military Academy + +Head Judge + +Patrick J. Driscoll, Dept. of Systems Engineering + +Associate Judges + +Dave Chennault, Tim Elkins, James Enos, Daniel McCarthy, + +Kenny McDonald, Elizabeth Schott, and Russell Schott + +Dept. of Systems Engineering + +Steve Horton, Dept. of Mathematical Sciences + +—all from the United States Military Academy at West Point, NY + +Paul Heiney, Dept of Mathematics, U.S. Military Academy Preparatory + +School, West Point, NY + +Ed Pohl, Dept. of Industrial Engineering + +Tish Pohl, Dept. 
of Civil Engineering + +—both from University of Arkansas, Fayetteville, AR + +# Triage Session at Appalachian State University + +Head Triage Judge + +William C. Bauldry, Chair, Dept. of Mathematical Sciences + +Associate Judges + +Bill Cook, Ross Gosky, Jeffry Hirst, Lisa Maggiore, René Salinas, and + +Joel Sanqui + +—all from the Dept. of Mathematical Sciences, Appalachian State + +University, Boone, NC + +Amy H. Erickson and Keith Erickson + +—Dept. of Mathematics, Georgia Gwinnett College, Lawrenceville, GA + +Steven Kaczkowski + +—Dept. of Mathematics, University of South Carolina, Columbia, SC + +Douglas Meade + +—Governor's School for Science and Mathematics, Hartsville, SC + +Harrison Schramm + +Office of the Chief of Naval Operations, Washington, DC + +Rich West + +—Francis Marion University, Florence, SC + +# Coach Problem + +Head Judge + +William P. Fox, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA + +Associate Judges + +Robert Burks, Operations Research Dept., Naval Postgraduate School, Monterey, CA + +Frank R. Giordano, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA + +Michael Jaye, Dept. of Defense Analysis, Naval Postgraduate School, Monterey, CA + +Xiwen Lu, East China University of Science and Technology (ECUST), Shanghai, China + +Richard Marchand, Mathematics Dept., Slippery Rock University, Slippery Rock, PA + +Veena Mendiratta, Lucent Technologies, Naperville, IL + +Jack Picciuto, Director of Operations Analysis and Planning at IT Cadre, Ashburn, VA + +Kathleen M. Shannon, Dept. of Mathematics and Computer Science, Salisbury University, Salisbury, MD (MAA Judge) + +Dan Solow, Case Western Reserve University, Cleveland, OH (INFORMS Judge) + +Maynard Thompson, Mathematics Dept., University of Indiana, Bloomington, IN + +Marie Vanisko, Dept. 
of Mathematics, Engineering, and Computer Science, Carroll College, Helena, MT (Giordano Award Judge) + +# Regional Judging Session at the Naval Postgraduate School + +Head Judge + +William P. Fox, Dept. of Defense Analysis + +Associate Judges + +Michael Jaye, Dept. of Defense Analysis + +Robert Burks, Dept. of Defense Analysis + +David Olwell, Dept. of Systems Engineering +—all from the Naval Postgraduate School, Monterey, CA + +Richard West, Emeritus Professor + +Thomas Fitzkee, Mathematics Dept. both from Francis Marion University, Florence, SC + +Jay Belanger, Truman State University, Kirksville, MO + +Jack Picciuto, Director of Operations Analysis and Planning at IT Cadre, Ashburn, VA + +# Triage Session at Carroll College + +Head Judge + +Marie Vanisko + +Associate Judges + +Kelly Cline, Terry Mullen, John Scharf, Eric Sullivan, and Theodore Wendt + +—all from Dept. of Mathematics, Engineering, and Computer Science, Carroll College, Helena, MT + +# Triage Session in China + +Head Judge + +Yongji Tan, Fudan University, Shanghai + +Associate Judges + +Zhijie Cai, Fudan University, Shanghai + +Yuan Cao, Fudan University, Shanghai + +Xongda Chen, Tongji University, Shanghai + +Zhongwen Chen, Soochow University, Suzhou + +Hengjian Cui, Capital Normal University, Beijing + +Jianping Du, Zhengzhou Information Science and Institute, Zhengzhou + +Mingfeng He, Dalian University of Technology, Dalian + +Zhiqing He, East China University of Science and Technology, Shanghai + +Zhuguo He, Beijing University of Posts and Telecommunications, Beijing + +Liangjian Hu, Donghua University, Shanghai + +Haiyang Huang, Beijing Normal University, Beijing + +Guangfeng Jiang, Beijing University of Chemical Technology, Beijing + +Luming Jiang, East China Normal University, Shanghai + +Yalian Li, Chongqing University, Chongqing + +Laifu Liu, Beijing Normal University, Beijing + +Liqiang Lu, Fudan University, Shanghai + +Jiangwen Xu, Chongqing University, Chongqing + +Wenjuan Wang, 
University of Rochester, U.S.A. + +Jinhai Yan, Fudan University, Shanghai + +Jun Ye, Tsinghua University, Beijing + +Qixiao Ye, Beijing Institute of Technology, Beijing + +Jian Yuan, Southwest Jiaotong University, Chengdu + +Hongyan Zhang, Central South University, Changsha + +Jie Zhou, Sichuan University, Chengdu + +Yicang Zhou, Xi'an Jiaotong University, Xi'an + +# Sources of the Problems + +The Keep Right Problem was contributed by Michael Tortorella (Dept. of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ). + +The Coach Problem was contributed by William P. Fox (Dept. of Defense + +Analysis, Naval Postgraduate School, Monterey, CA). + +# Acknowledgments + +Major funding for the MCM is provided by the National Security Agency (NSA) and by COMAP. Additional support is provided by the Institute for Operations Research and the Management Sciences (INFORMS), the Society for Industrial and Applied Mathematics (SIAM), and the Mathematical Association of America (MAA). We are indebted to these organizations for providing judges and prizes. + +We also thank for their involvement and unflagging support the MCM judges and MCM Board members, as well as + +- Two Sigma Investments. "This group of experienced, analytical, and technical financial professionals based in New York builds and operates sophisticated quantitative trading strategies for domestic and international markets. The firm is successfully managing several billion dollars using highly-automated trading technologies. For more information about Two Sigma, please visit http://www.twosigma.com." + +# Cautions + +To the reader of research journals: + +Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each paper here is the result of undergraduates working on a problem over a weekend. 
Editing (and usually substantial cutting) has taken place; minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. The student authors have proofed the results. Please peruse these students' efforts in that context. + +To the potential MCM advisor: + +It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does. + +COMAP's Mathematical Contest in Modeling and Interdisciplinary Contest in Modeling are the only international modeling contests in which students work in teams. Centering its educational philosophy on mathematical modeling, COMAP serves the educational community as well as the world of work by preparing students to become better-informed and better-prepared citizens. + +# About the Author + +Dr. William P. Fox is a professor in the Department of Defense Analysis at the Naval Postgraduate School and teaches a three-course sequence in mathematical modeling for decision making. He received his B.S. degree from the United States Military Academy at West Point, New York, his M.S. at the Naval Postgraduate School, and his Ph.D. at Clemson University. Previously he has taught at the United States Military Academy and at Francis Marion University, where he was the Chair of Mathematics for eight years. He has many publications and scholarly activities including books, chapters of books, journal articles, conference presentations, and workshops. He directs several mathematical modeling contests through COMAP: HiMCM and MCM. His interests include applied mathematics, optimization (linear and nonlinear), mathematical modeling, statistical models for medical research, and computer simulations. 
He is President-Emeritus of the NPS Faculty Council and President of the Military Application Society of INFORMS. + +# Editor's Note + +The complete roster of participating teams and results is too long to reproduce in the Journal. It can be found at the COMAP Website, in separate files for each problem: + +http://www.comap.com/undergraduate/contestsmcm/contestst/2014/results/2014_MCM_Problem_A_Results.pdf +http://www.comap.com/undergraduate/contestsmcm/contestst/2014/results/2014_MCM_Problem_B_Results.pdf + +# Media Contest + +This year, COMAP again organized an MCM/ICM Media Contest. + +Over the years, contest teams have increasingly taken to various forms of documentation of their activities over the grueling 96 hours—frequently in video, slide, or presentation form. This material has been produced to provide comic relief and let off steam, as well as to provide some memories days, weeks, and years after the contest. We love it, and we want to encourage teams (outside help is allowed) to create media pieces and share them with us and the MCM/ICM community. + +The media contest is completely separate from MCM and ICM. No matter how creative and inventive the media presentation, it has no effect on the judging of the team's paper for MCM or ICM. We do not want work on the media project to detract or distract from work on the contest problems in any way. This is a separate competition, one that we hope is fun for all. + +Further information about the contest is at + +http://www.comap.com/undergraduate/contestms/mcm/media.html. + +There were 34 entries—31 of them from Dalian Maritime University! (Come on, you other teams!) + +Outstanding Winner: + +Dalian Maritime University, Dalian, China (Xiaonan Wang, Hang Li, Yang Cui) + +Finalists: + +Dalian Maritime University (Lei Zheng, Xufan Liu, Yuhan Weng) +Dalian Maritime University (Tianzi Yang, Guirong Zhang, Yuyan Qiao) + +The remaining entries were judged Meritorious Winners. 
+ +Complete results, including links to the Outstanding videos, are at http://www.comap.com/undergraduate/contestms/mcm/contestss/2014/solutions/index.html. + +# Keep Right to Keep “Right” + +Yaofeng Zhong Yunyi Zhang Xiao Zhao + +Tsinghua University Beijing, China + +Advisor: Zhiming Hu + +# Abstract + +Our goal is a model to evaluate the performance of the keep-right-except-to-pass (KRETP) rule and other alternatives, by simulating the traffic flow on a freeway. We construct models to analyze five influencing factors. Then we integrate multiple criteria to judge the performance of nine rules using a fuzzy synthetic evaluation (FSE). + +Our basic model focuses on lane-changing behavior, an essential component of overtaking (passing). + +We extend our model with a cellular automaton-based approach. We assume that the drivers will change the lane with a specific probability if trigger and safety conditions are satisfied. We simulate traffic flow on a long section of a freeway, controlling occupancy, varying the number of lanes, maximum speed limit, minimum speed limit, and signaling behavior. + +In addition to KRETP, we examine four other rules by revising the laws governing the cells in the cellular automaton. Then we design five improved rules. + +We choose flow rate and average speed as traffic flow criteria, sharp braking frequency as a safety criterion, and satisfaction and standard deviation of speed as experience criteria. Then we use a fuzzy synthetic evaluation technique to integrate these criteria to determine the performance of each rule. We find that in a light traffic, a partial-assigned-lane-and-keep-right rule performs the best, while in heavy traffic, a different-speed-limit-on-each-lane rule is preferred. + +We change the probability of lane-changing to adjust our model to a country such as Great Britain. Moreover, we also simulate a freeway fully controlled by an intelligent system. 

Additionally, we refine our extended model by considering the on- and off-ramps. We adopt open boundary conditions and assume that the vehicles flowing in are Poisson-distributed.

# Introduction

A freeway is a controlled-access highway designed for high-speed vehicles. It provides an unhindered flow of traffic, with no traffic lights or intersections. The Keep-Right-Except-To-Pass (KRETP) rule, also known as "Slower Traffic Keep Right," is often employed in right-hand traffic to raise traffic flow. In this paper, we simulate different rules for overtaking and compare them in an attempt to find an optimal rule.

# Restatement of the Problem

We are required to build a mathematical model to analyze the performance of KRETP and alternative rules. We have two subproblems:

- Build a model that can simulate the overtaking process.
- Propose mathematical criteria to determine the performance of a rule.

In the first step, we build a model with inputs such as the speed limit plus other factors. In the second step, we consider the tradeoff among traffic flow, safety, and other factors.

# Literature Review

Nagel and Schreckenberg [1992] built a model to simulate freeway traffic, a simple cellular automaton model known as the "N-S model." They defined a one-dimensional lane. In their model, each site may be occupied by one vehicle or else be empty. Each vehicle has an integer velocity between 0 and $v_{\mathrm{max}}$. At each time step, four sub-steps are performed: acceleration, slowing down (due to other vehicles), randomized slowing, and vehicle motion.

Rickert et al. [1996] introduced a model with two parallel lanes. Several conditions have to be fulfilled before a vehicle changes lanes:

- no other vehicle is in the way,
- other lanes are better, and
- no collision will occur.

They too simulated using a cellular automaton, with reasonable results.

A multi-lane model does not have to be lane-symmetric.
Differences may include different speed limits on each lane, different kinds of vehicles, etc. Chowdhury et al. [1997] created a model with different kinds of vehicles having different maximum speeds. They showed that even if the share of "slow cars" is relatively low, "fast cars" can move only at a low speed. However, Knospe and Santen [1999] suggested that the influence of "slow cars" might have been overestimated.

# Assumptions and Justifications

- No pedestrian can affect the vehicles on freeways. Usually, pedestrians have no access to freeways, let alone crossing one.
- We ignore crosswinds during overtaking. Their impact is negligible compared with that of the headwind.
- Drivers cannot drive in the emergency lane or on the shoulder.
- The freeway is completely flat and straight, with no curves or slopes. This assumption allows us to focus on the nature of overtaking.
- We assume that all drivers act based on the same set of rules. Drivers may be aggressive or not, but both groups follow the same rules.

# Model Overview

Most research on traffic flow can be classified as either microscopic or macroscopic. Since macroscopic methods are difficult to apply to our problem, we approach the problem with microscopic techniques.

We focus on the incentive for changing lanes and the conditions for a successful lane change. We treat changing-to-the-left-lane behavior and changing-to-the-right-lane behavior differently. This model gives us some intuition about the rule and serves as a stepping stone to our later study.

Our extended model views the problem from a wider perspective. We consider a section of freeway and divide it into lattice cells. Then we run a cellular automaton to simulate the behavior of vehicles. We derive the laws governing the cells from the analysis of our basic model. Moreover, using periodic boundary conditions, we treat the freeway as a "ring road" so as to accurately control the density.
Thus, we call it a "Ring Road" model. + +Our refined model adds an entrance ramp and an exit ramp to our cellular automaton, with laws for entering and exiting vehicles. We use a Poisson distribution to simulate vehicles moving in from the start point. + +We use the extended "Ring Road" model as a standard model to analyze the problem and all results have this model at their cores. + +# The Keep-Right-Except-To-Pass Model + +# The Basic Lane-changing Model + +The basic model is a microscopic approach. A typical overtaking behavior consists of five actions: + +- signal for three seconds, + +Symbol Table. + +Table 1. + +

| Symbol | Definition | Units |
| --- | --- | --- |
| **Constants** | | |
| $\lambda$ | Mean of Poisson distribution | unitless |
| $p_{\text{slow}}$ | Probability that a vehicle slows down randomly | unitless |
| $p_{\text{left}}$ | Probability that a vehicle shifts to the left lane when possible | unitless |
| $p_{\text{right}}$ | Probability that a vehicle shifts to the right lane when possible | unitless |
| $p_{\text{exit}}$ | Probability that a vehicle wants to move off through the exit ramp | unitless |
| **Variables** | | |
| $v_s$ | Speed of vehicle $s$ | cell/time step |
| $v_{\text{expect}}$ | Expected speed of vehicle $s$ | cell/time step |
| $v_{lf}$ | Speed of the vehicle in the left lane in front | cell/time step |
| $v_{lb}$ | Speed of the vehicle in the left lane behind | cell/time step |
| $v_{rf}$ | Speed of the vehicle in the right lane in front | cell/time step |
| $v_{rb}$ | Speed of the vehicle in the right lane behind | cell/time step |
| $t$ | Time | time step |
| $D_{l,f,gap}$ | Left front gap | cell |
| $D_{l,b,gap}$ | Left back gap | cell |
| $D_{r,f,gap}$ | Right front gap | cell |
| $D_{r,b,gap}$ | Right back gap | cell |
| $\text{vehicle}_j^i$ | The $j$th vehicle in the $i$th lane | unitless |
| $v_j^i(t)$ | Speed of $\text{vehicle}_j^i$ at the $t$th time step | cell/time step |
| $v_{j,\text{expect}}^i$ | Expected speed of $\text{vehicle}_j^i$ | cell/time step |
| $gap_j^i(t)$ | Front gap of $\text{vehicle}_j^i$ at the $t$th time step | cell |
| $x_j^i(t)$ | Location of $\text{vehicle}_j^i$ at the $t$th time step | cell |
| $lfgap_j^i$ | Left front gap of $\text{vehicle}_j^i$ | cell |
| $lbgap_j^i$ | Left back gap of $\text{vehicle}_j^i$ | cell |
| $rfgap_j^i$ | Right front gap of $\text{vehicle}_j^i$ | cell |
| $rbgap_j^i$ | Right back gap of $\text{vehicle}_j^i$ | cell |
| $lbv_j^i$ | Speed of the vehicle behind $\text{vehicle}_j^i$ in the left lane | cell/time step |
| $rbv_j^i$ | Speed of the vehicle behind $\text{vehicle}_j^i$ in the right lane | cell/time step |
| $\bar{v}(t)$ | Average speed at the $t$th time step | cell/time step |
| $N$ | Number of vehicles passing a certain point on the highway | unitless |
| $N(t)$ | Number of vehicles on the highway at the $t$th time step | unitless |
| $N_j(t)$ | Number of vehicles in the $j$th lane at the $t$th time step | unitless |
| $N_{\text{shift}}(t)$ | Number of vehicles changing lanes at the $t$th time step | unitless |
| $t_{\text{expected}}$ | Expected time | time step |
| $t_{\text{actual}}$ | Actual time | time step |
| $a_{ij}$ | Value of the $j$th criterion of the $i$th rule | varies |
| $u_j^0$ | Value of the $j$th criterion of the ideal scheme | varies |
| $r_{ij}$ | Relative deviation of the $j$th criterion of the $i$th rule | unitless |
| $v_j$ | Coefficient of variation of the $j$th criterion | unitless |
| $w_j$ | Weight of the $j$th criterion | unitless |
| $F_i$ | Relative deviation of the $i$th rule | unitless |

- change lane,
- accelerate,
- signal back, and
- change back to the former lane.

Among these actions, lane-changing is the most crucial part.

# Changing to the Left Lane

There are two main considerations [Chowdhury et al. 1997]:

- a reason or trigger consideration, and
- a safety consideration.

The former means that the vehicle ahead moves slowly enough to trigger the driver to overtake it. The latter means that the driver takes safety into account: if there is a high-speed vehicle driving in the left lane, the driver chooses to stay in the current lane to avoid a collision. Based on these considerations, we can introduce some mathematical intuition into the problem. Figure 1 illustrates the situation in which the red (dark) car intends to change to the left lane.

![](images/7ae93e3e84cb223a3aad7029109b9849b491d4ea17b706ef9d3733f1fd143e70.jpg)
Figure 1. Change to the left lane.

We give mathematical expressions for the basic conditions for a change to the left lane:

- Trigger: The speeds of the cars satisfy $v_{\text{expect}} > v_s$.
- Safety: The left back gap should satisfy $D_{l,b,gap} > (v_{lb} - v_s)t$.

Additionally, the car attempts to accelerate (it is unreasonable to change to the passing lane while slowing down). So the left front gap should satisfy

$$
D_{l,f,gap} > (v_s - v_{lf})\,t.
$$

# Changing Back to the Right Lane

After accelerating and passing the slow vehicle ahead, the driver tends to change back to the former lane because of KRETP. However, this lane-changing behavior is subject to the following constraints:

- There is no incentive to pass the car ahead in the current passing lane. Otherwise, on a multi-lane freeway, the driver might prefer to pass that car and hence change left again rather than return to the former lane.
- It is safe to change back. The driver should pass the slow car and ensure that there is no collision when changing back.

- After changing back to the former lane, the driver can maintain a relatively high speed. Otherwise, the driver will want to pass more than one car in the overtaking process.

The first constraint can be treated as in the changing-to-the-left-lane situation mentioned above, and the other conditions can be stated as the following mathematical expressions:

- Safety: The right back gap should satisfy $D_{r,b,gap} > (v_{rb} - v_s)t$.
- Maintaining speed: The right front gap should satisfy $D_{r,f,gap} > (v_s - v_{rf})t$.

Figure 2 illustrates this change-back-to-the-right-lane situation.

![](images/f86ef56f4c20a039ab3adffd8caa6550001a4ebcbf7cd4588d274c108a78efa6.jpg)
Figure 2. Change back to the right lane.

# The Extended "Ring Road" Model

To understand how the rule works in a traffic flow, we have to analyze the behavior of vehicles over a period of time. One intuition for modeling the problem is to treat it as a stochastic process. Therefore, we use a cellular automaton to simulate the behavior of vehicles on a freeway.

A cellular automaton is a discrete model that describes the time development of a system; it treats time as a discrete variable. The model requires an initial configuration and a set of laws that determine how the system develops. At every time step, the cellular automaton advances incrementally and all the laws are applied.

# Assumptions of the Model

- Drivers follow the rules with a specified probability. A driver might not want to change lanes even if all the conditions are satisfied. We assume that a driver changes to the left lane or to the right lane with probability $p_{\text{left}}$ or $p_{\text{right}}$, respectively, when possible.
- All drivers tend to drive as fast as possible while keeping a safe following distance. Nearly every driver wants to drive faster on the freeway as long as there is enough time to react if the vehicle ahead decelerates.
Also, the maximum speed is limited by the vehicle type and the speed limit of the freeway.
- Drivers are "myopic": They can see only one vehicle in front, one behind, and several in the neighboring lanes. A driver can see the vehicle in front directly and the vehicle behind with the help of rearview mirrors. Moreover, a driver can turn the head to the left or right to see cars in the immediately neighboring lanes, but not in lanes beyond those.
- Drivers make decisions only according to their own interest. Because drivers are "myopic," they cannot know the conditions of the whole freeway. Consequently, they make greedy decisions so as to traverse the freeway in the shortest time.

Additionally, to implement a cellular automaton, we propose the following assumptions:

- Lane-changing does not cost additional time. Although the distance traveled in lane-changing is longer than when staying in one lane, drivers tend to accelerate while changing lanes. Thus, we suppose that lane-changing costs the same amount of time as staying in the current lane.
- Each cell represents a $4\,\mathrm{m} \times 6\,\mathrm{m}$ area. The road's length is 2,000 cells; a lane's width is 1 cell. We divide a multilane freeway into equally-partitioned lanes, with each array of cells representing a lane. We choose a length of $12\,\mathrm{km}$ to simulate because of the trade-off between time complexity and the completeness of the model.
- A time step represents 1 second. Such an assumption is made in nearly all cellular automaton approaches.
- We run 20,000 time steps and analyze the last 1,000 steps. This procedure ensures that we obtain steady-state conditions.
- While all vehicles tend toward the maximum speed, every single vehicle randomly slows down with probability $p_{\text{slow}}$. This randomized slowing is characteristic of traffic flow.
- Acceleration is done steadily, while any kind of deceleration can be done in one time step.
Steady acceleration is an energy-saving behavior, and drivers will decelerate to avoid possible collisions.

# Characteristics of Vehicles

We classify the vehicles into three groups:

- Cars: Cars are small vehicles, which can have high speeds.
- Buses: Buses are large vehicles, and their speeds can be relatively high.
- Trucks: Trucks are large vehicles that can have only lower speeds.

Then we define the characteristics of the three types of vehicles:

- Occupancy: Each car occupies one cell. Since a typical car's length is $3.6\,\text{m}-4.6\,\text{m}$, a car cannot fully occupy a cell (which has length $6\,\text{m}$). We place the car in the middle of the cell and treat the space in front of and behind it as safe distance. Accordingly, each bus and each truck occupies two cells, with safe distance preserved.
- Maximum speed: Cars: 6 cells per time step (130 km/h); buses: 5 cells per time step (108 km/h); trucks: 3 cells per time step (65 km/h).
- Percentage: We assume that cars account for $60\%$ of the traffic flow, buses $30\%$, and trucks $10\%$, based on the data collected by Anhui [2013].

# Laws Governing the Cellular Automaton

Our cellular automaton applies a set of laws sequentially at every time step. These laws are based on the previous analysis and are expressed from a computational perspective.

Moving: These laws are set based on the assumptions of the model and the characteristics of the vehicles. We denote the $j$th vehicle in lane $i$ by $\text{vehicle}_j^i$. We point out some notation in Figure 3, and we diagram the algorithm in Figure 4.

1. Determine the speed:

Our three laws below are implemented sequentially, so we use $\left(t + \frac{1}{3}\right), \left(t + \frac{2}{3}\right)$ to denote intermediate times between $t$ and $t + 1$.

(a) Acceleration: All drivers tend to drive as fast as possible:

$$
\text{If } v_j^i(t) < v_{j,\text{expect}}^i, \quad \text{then} \quad v_j^i\left(t + \tfrac{1}{3}\right) = v_j^i(t) + 1,
$$

where $v_j^i(t)$ is the speed of $\text{vehicle}_j^i$ at time $t$, and $v_{j,\text{expect}}^i$ is the expected speed of $\text{vehicle}_j^i$.

![](images/0302455eb1016f8169b8ca86c03398ab0b61f1008931bb17b8fd8c062254befc.jpg)
Figure 3. Clarification of some notation.

![](images/22e6af2ccc71c0d61a208094f6113d672564a15fd611abe92cebe1e73d5433d6.jpg)
Figure 4. Flow chart of the cellular automaton.

(b) Randomized slowing: Every vehicle randomly slows down by 1 cell per time step with probability $p_{\mathrm{slow}}$. Mathematically,

$$
v_j^i\left(t + \tfrac{2}{3}\right) = v_j^i\left(t + \tfrac{1}{3}\right) - 1.
$$

(c) Deceleration (because of other vehicles): To maintain a safe following distance, a cell cannot be occupied by more than one vehicle at the same time step:

$$
\text{If } v_j^i\left(t + \tfrac{2}{3}\right) > gap_j^i(t), \quad \text{then} \quad v_j^i(t + 1) = gap_j^i(t),
$$

where $gap_j^i(t)$ is the gap between $\text{vehicle}_j^i$ and the vehicle ahead.

2. Determine the location: We derive the locations from the speeds:

$$
x_j^i(t + 1) = x_j^i(t) + v_j^i(t + 1), \qquad gap_j^i = x_{j+1}^i - x_j^i - 1,
$$

where $x_j^i$ is the location of $\text{vehicle}_j^i$.

# 3. Lane-changing:

(a) Changing to the left lane

i. Based on the trigger criterion, the driver tends to change to the left if the vehicle ahead moves so slowly that the driver fails to reach the driver's expected speed:

$$
gap_j^i(t) < v_{j,\text{expect}}^i.
$$

ii.
Then, taking the acceleration criterion into account, we have + +$$ +g a p _ {j} ^ {i} (t) < l f g a p _ {j} ^ {i}, +$$ + +where $lfgap_{j}^{i}$ is the left front gap of vehicle $_j^i$ + +iii. Lastly, considering safety, we have + +$$ +l b g a p _ {j} ^ {i} > l b v _ {j} ^ {i}, +$$ + +where $lbgap_{j}^{i}$ is the left back gap of vehicle $_{j}^{i}$ and $lbv_{j}^{i}$ is the speed of the vehicle behind in the left lane. + +(b) Changing to the right lane + +i. If any one of the rules in 3(a) above is not satisfied, the driver cannot change to the left lane. + +ii. To consider safety, we have + +$$ +r f g a p _ {j} ^ {i} > v _ {j} ^ {i}, +$$ + +where $rfgap_j^i$ is the right front gap of vehicle $j^i$ . + +iii. To consider another safety criterion, we have + +$$ +r b g a p _ {j} ^ {i} > r b v _ {j} ^ {i}, +$$ + +where $rbgap_j^i$ is the right back gap of vehicle $j^i$ and $rbv_j^i$ is the speed of the vehicle behind in the right lane. + +# Modeling Using Periodic Boundary Conditions + +To run a cellular automaton, we need to specify the boundary conditions and the initial condition. Boundary conditions determine how vehicles move into and out of the system; an initial condition determines the initial distribution of vehicles and their speeds. + +Inspired by Nagel and Schreckenberg [1992], we use periodic boundary conditions. Periodic boundary conditions assume that the vehicles moving out of the freeway immediately appear again at the front of the system, so the total number of vehicles is constant. Thus, we can accurately define a constant system density and study the performance of the rule with varying density. Moreover, periodic boundary conditions turn our road into a closed system, so it is similar to all vehicles moving on a circle. So we name our extended model the "Ring Road" model. + +# The Refined Model with Ramps + +We refine our model by adding entrance and exit ramps and applying open boundary conditions. 
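One common way to realize such open boundary conditions, consistent with the Poisson-arrival assumption this model uses, is to draw an arrival count at every time step. The following Python sketch is illustrative only (not the authors' code); the rate `LAM` and the reuse of the paper's 60/30/10 vehicle mix are our assumptions:

```python
import math
import random

# Illustrative sketch (assumed values, not the authors' code): open-boundary
# arrivals generated as a Poisson process, one draw per 1-second time step.
LAM = 0.4  # assumed mean number of arriving vehicles per time step
VEHICLE_MIX = [("car", 0.6), ("bus", 0.3), ("truck", 0.1)]  # paper's mix

def poisson_sample(lam):
    """Draw one Poisson(lam) count by inversion (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def arrivals_this_step():
    """Types of the vehicles entering at the start point in one time step."""
    entering = []
    for _ in range(poisson_sample(LAM)):
        r, cum = random.random(), 0.0
        vtype = VEHICLE_MIX[-1][0]  # fallback guards against float round-off
        for name, share in VEHICLE_MIX:
            cum += share
            if r < cum:
                vtype = name
                break
        entering.append(vtype)
    return entering
```

Sampling by inversion keeps the sketch within the standard library; `random.choices` could equally be used to draw the vehicle type.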
Entrance ramps give vehicles a chance to accelerate to the expected speed. However, most ramps are too short to allow speeding up to, say, $100\mathrm{~km/h}$. As a result, vehicles in the rightmost lane might have to slow down to let vehicles enter, or incoming vehicles might have a hard time entering. Both cases might have deleterious effects on the traffic flow.

Likewise, vehicles have to decelerate to enter an exit ramp, where similar problems can occur.

To introduce ramps in our cellular automaton, we add the following assumptions.

# Additional Assumptions in the Refined Model

- Exit ramp: The overlap between the freeway and the exit ramp ranges from the $850^{\text{th}}$ cell to the $900^{\text{th}}$ cell, which is $300 \mathrm{~m}$.
- Entrance ramp: The overlap between the freeway and the entrance ramp ranges from the $1100^{\text{th}}$ cell to the $1150^{\text{th}}$ cell, which is $300 \mathrm{~m}$.

# Additional Laws

- Off-ramp law: If a vehicle wants to leave the freeway, it is not allowed to change to the left lane after it reaches the $700^{\text{th}}$ cell. At the same time, it slows down to 3 cells per second. If the vehicle is in the rightmost lane between the $850^{\mathrm{th}}$ cell and the $900^{\mathrm{th}}$ cell, we assume that it can move onto the ramp at the next time step if it wants to.

- On-ramp law: A vehicle on the entrance ramp, if the laws for changing to the left lane are satisfied, can enter the freeway during the next time step. Otherwise, it continues to move forward.

# Modeling Using Open Boundary Conditions

Our model with ramps is no longer a closed system. Thus, we must use open boundary conditions to determine how vehicles flow in. Considering that the amount of traffic is stochastic and the inputs to the system are discrete, a generally-used approach is to model the entry of vehicles as a Poisson process.
Consequently, we assume that the number of vehicles flowing in from the starting point in any interval of length $t$ is Poisson-distributed with mean $\lambda t$.

Vehicles tend to move off through the exit ramp with a probability $p_{\mathrm{exit}}$. We try to let the number of vehicles entering from the entrance ramp equal the number of vehicles leaving through the exit ramp. However, as discussed above, some vehicles may fail to exit the freeway and others may fail to enter. We view both kinds of events as bad characteristics of the traffic flow.

# Results: Influencing Factors

We give definitions of light and of heavy traffic. Then we explicitly define four factors and vary one factor at a time in order to analyze how it influences the performance of the KRETP Rule.

From simulations with our cellular automaton, we find that the traffic flow can be classified into two kinds, as shown in the time-space diagrams in Figure 5. The diagrams show the trace of every vehicle in the simulation. A gentle trace indicates a low speed; a steep trace indicates a high speed.

- In Figure 5a, we see that vehicles with a high speed can continue at the high speed, which indicates that they are not constrained by slower vehicles. This is light traffic.
- In Figure 5b, no vehicle can reach a high speed, which indicates congestion. This is heavy traffic.

# Variables and Criteria

We choose as our variables

- the number of lanes,
- the maximum speed limit,
- the minimum speed limit, and
- signaling behavior.

![](images/a3ecce912c5761e11945217fc051216b3f9ad37fe8394132a62cabe3e612b15d.jpg)
(a) Time-space diagram (occupancy $= 0.1$ ).
Figure 5. Time-space diagrams for different occupancy levels.
![](images/4315db83cc28eaa3253e3b11ce6bf53a43a81171581abedf1982e3d6b78b826c.jpg)
(b) Time-space diagram (occupancy $= 0.4$ ).

To judge the effectiveness of a rule, we propose the following criteria:

- Flow rate: The number of vehicles passing a point per unit time: flow rate $= N/T$, where $N$ vehicles pass a point in time $T$.

- Average speed: The average speed of all vehicles passing a point on a highway or a lane over a specified time period:

$$
\overline{v}(t) = \frac{1}{N(t)} \sum_{j=1}^{3} \sum_{i=1}^{N_{j}(t)} v_{j}^{i}(t),
$$

where $N(t)$ is the number of vehicles on the freeway, and $N_{j}(t)$ is the number of vehicles in the $j^{\text{th}}$ lane.

- Lane utilization ratio: The ratio of the number of vehicles in a lane to the total number of vehicles on the freeway:

$$
\text{lane utilization ratio}_{j} = \frac{N_{j}(t)}{N(t)}.
$$

- Sharp braking frequency: We do not consider accidents in our simulation, since we view accidents as abnormal events and it is difficult to consider abnormal events in microscopic models. Instead, we use sharp braking frequency as an indicator of risk. Sharp braking occurs when a vehicle's speed decreases by more than 2 cells per time step. If this happens in our simulation, we assume that it would likely cause an accident in reality.

- Shift ratio: The number of lane shifts per unit time:

$$
\text{shift ratio} = \frac{N_{\text{shift}}(t)}{N(t)},
$$

where $N_{\mathrm{shift}}(t)$ is the number of vehicles changing lanes at the $t^{\mathrm{th}}$ time step.

- Satisfaction: If a vehicle fails to reach its maximum speed, the driver's satisfaction will decrease. We define the expected time $t_{\text{expect}}$ as the time that it takes to drive a given distance at the maximum speed.
We define the actual time $t_{\text{actual}}$ as the time that it actually takes to drive that distance. Then we derive our criterion by dividing the expected time by the actual time, so the value of satisfaction ranges from 0 to 1:

$$
\text{satisfaction} = \frac{t_{\text{expect}}}{t_{\text{actual}}}.
$$

- Standard deviation of speed: People might feel uncomfortable if the vehicle keeps accelerating and decelerating. We use the standard deviation of speed to measure this kind of discomfort:

$$
\text{Std.} = \frac{1}{N(t)} \sum_{j=1}^{3} \sum_{i=1}^{N_{j}(t)} \sqrt{\sum_{t=1}^{T} \left[ v_{j}^{i}(t) - \bar{v}(t) \right]^{2}}.
$$

We analyze each of our variables in light of each of our criteria.

[EDITOR'S NOTE: For each of the criteria for each of the variables, the authors offer graphs. We cannot reproduce all of them here; we offer instead in most cases just the authors' conclusions from their figures.]

# Variable: Number of Lanes

- Flow rate: With low occupancy, the road resources are not fully utilized, so the flow rate is low. With high occupancy, there may be congestion. We aim for an optimal occupancy.
From Figure 6, we see that the optimal occupancy for a 3-lane freeway and a 4-lane freeway is between 0.2 and 0.3. Thus, in our following analysis, we let an occupancy of 0.1 represent light traffic and an occupancy of 0.4 represent heavy traffic.
- Average speed: As occupancy increases, the average speed tends to decrease due to congestion. Surprisingly, the curves for 3 lanes and for 4 lanes in Figure 7 coincide! This result suggests that different numbers of lanes may make little or no difference for average speed.
- Lane utilization ratio: As expected, our model demonstrates that utilization of the leftmost lane increases with occupancy.

![](images/d22b1ea41a58d291ce4cbc35e8363ad03b7a5775d14fe4f271ba0bd7a6abb4b1.jpg)
Figure 6. Total flow rate vs.
occupancy, for 3 lanes (bottom) and for 4 lanes (top).

![](images/470a34fe5903f9d45293c60b9c45c3c4abedca9256be8936efb8c3bb92140882.jpg)
Figure 7. Average speed vs. occupancy, for 3 lanes and for 4 lanes.

- Sharp braking frequency: Also as expected, in light traffic, sharp braking increases with occupancy, because the average speed is high. In heavy traffic, the average speed is low, so sharp braking decreases correspondingly.
- Shift ratio: For low occupancy, the shift ratio is high, probably because of the low number of cars; but it levels off as occupancy increases.
- Satisfaction: As occupancy increases, speeds slow, so satisfaction decreases. But at the same occupancy, there is little difference between a 3-lane and a 4-lane freeway in terms of satisfaction.
- Standard deviation of speed: The shapes of the curves for standard deviation are similar to the speed curves and unrelated to the number of lanes.

# Variable: Maximum Speed Limit

We study a maximum speed limit of 4 cells per second (86 km/h), a limit of 5 cells per second (108 km/h), and no maximum speed limit.

- Flow rate and average speed: Vehicles can move at high speed in light traffic, so the speed limit has a significant influence on the flow rate and on the average speed. In heavy traffic, however, few vehicles can reach the speed limit; still, the limit may prevent sharp changes in following distance and hence benefit the traffic flow. Therefore, an appropriate speed limit might result in a higher flow rate. We see from Figure 8 that the results from simulations of our model meet our expectations.

![](images/f18a818de0886ac97b8f9573e08eb01d22d68a342f86a0faa986b30f3e526cbd.jpg)
(a) Flow rate.

![](images/bd986f0ea816b13f0711acd7153918ce0fc04c8d5d6b32cfa80ae5110e8b713b.jpg)
(b) Average speed.
Figure 8. Flow rate vs. speed limit, and average speed vs. speed limit, for light traffic (left bars) and for heavy traffic (right bars).
- Sharp braking frequency and shift ratio: Figure 9 demonstrates that in light traffic, the speed limit can reduce the sharp braking frequency and the shift ratio, which is beneficial to safety. But there is little effect in heavy traffic.
- Lane utilization ratio: As expected, the utilization ratio of the leftmost lane in light traffic decreases under a lower speed limit, because the limit reduces a driver's willingness to overtake.

# Variable: Minimum Speed Limit

We study the influence of a minimum speed limit by presenting two cases: a minimum speed limit of 3 cells per second (65 km/h), and no limit. By a minimum speed limit, we mean that vehicles will not go more slowly than the limit due to random slowing; however, they can still decelerate for safety reasons.

![](images/507bcb57d1b7117fa91d5ec85ad66d506924aec24f38ea65224e40fe507cef71.jpg)
(a) Sharp braking frequency.

![](images/81df9fc7f5e48e2e1f47607b6396e11c67e37d49287ccbf56a7d02f432df3216.jpg)
(b) Shift ratio.
Figure 9. Sharp braking frequency vs. speed limit, and shift ratio vs. speed limit, for light traffic (left bars) and for heavy traffic (right bars).

- Flow rate and average speed: Opposite to the maximum speed limit, the minimum speed limit plays an important role in heavy traffic but is of little importance in light traffic.
- Sharp braking frequency and shift ratio: In light traffic, vehicles cannot slow down in advance due to the speed limit, which in turn increases the sharp braking frequency. In heavy traffic, the frequency decreases. The reason might lie in the low shift ratio and the fluent traffic flow; the conditions for lane-changing are difficult to satisfy.
- Lane utilization ratio: In light traffic, the minimum speed limit has little influence on the ratio. In heavy traffic, the vehicles tend to distribute evenly among the three lanes due to the difficulty of changing lanes.

# Variable: Signaling Before Shifting

It is a rule to signal before changing lanes; we add this factor to our model.
We assume that a driver must signal first if the lane-changing conditions are satisfied. The related vehicle might decelerate to give way to the driver; then the driver can change lanes in the next time step. We compare the cases with and without this signal mechanism.

- Flow rate and average speed: With signaling, acceleration is constrained, which reduces the flow rate and the average speed. Figure 10 illustrates this change.

![](images/c0761dca498e5200d48cff099242059066221b625e4d7e46ad8353ff99f4899d.jpg)
(a) Flow rate vs. signaling.

![](images/4867cfe46088c91897270fabce26110fce81d9b5588e4b05650238a9dc7019d3.jpg)
(b) Average speed vs. signaling.
Figure 10. Flow rate and average speed vs. signaling, for light traffic (left bars) and for heavy traffic (right bars).

- Sharp braking frequency and shift ratio: Because vehicles give way, the shift ratio increases; and because vehicles must respond to signals, sharp braking also increases (Figure 11).

![](images/fded56d9339d3342ff41befc863369cfc7b8b8267e6ea43af30477bfcd73c972.jpg)
(a) Sharp braking frequency vs. signaling.
Figure 11. Sharp braking frequency and shift ratio vs. signaling, for light traffic (left bars) and for heavy traffic (right bars).

![](images/41df46b36de2c2803a28a25182eb16c5e702a4f5a9819fd8ae51b06d14529be2.jpg)
(b) Shift ratio vs. signaling.

# Conclusions

- The number of lanes has little influence under any circumstances.
- The maximum speed limit plays a significant role in light traffic but is of no importance in heavy traffic; for a minimum speed limit, the reverse holds.
- Signaling behavior reduces the flow rate and average speed while enhancing safety.

# Results: The Optimal Rule

We examine five basic rules and design four improved rules. To determine the performance of the rules, we implement a fuzzy synthetic evaluation to consider all the criteria.
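Throughout these comparisons, the lane-change decision reduces to the two small predicates of the cellular-automaton laws 3(a) and 3(b) given earlier. A minimal sketch (the function and argument names are ours, not the authors'):

```python
def can_change_left(gap, v_expect, lfgap, lbgap, lbv):
    """Law 3(a): trigger (blocked below the expected speed),
    incentive (more room ahead in the left lane), and
    safety (the follower in the left lane is far enough back)."""
    return gap < v_expect and gap < lfgap and lbgap > lbv

def can_change_right(v, rfgap, rbgap, rbv):
    """Law 3(b): return right only if both safety gaps allow it."""
    return rfgap > v and rbgap > rbv
```

For example, a car 2 cells behind its leader while expecting speed 5, with 6 free cells ahead in the left lane and a 4-cell gap to a left-lane follower traveling at 3 cells/sec, satisfies all three conditions and may move left.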
# Basic Rules of Overtaking

We take a three-lane freeway as an example to state the rules. Apart from the KRETP rule, we present four other basic rules:

- The free-overtaking rule: Drivers can overtake or change lanes as they wish.
- The no-overtaking rule: Vehicles randomly move onto the freeway, but once they are on the freeway, they must stick to their current lane. This rule can be implemented by replacing the dashed lines separating lanes with solid lines; this lane-marking bans drivers from changing lanes.
- The different-speed-limit-on-each-lane rule: The maximum speed limit ranges from lowest in the rightmost lane to highest in the leftmost lane.
- The complete-assigned-lane rule: Cars are assigned to the leftmost lane, buses to the middle lane, and trucks to the rightmost lane. No vehicle can change lanes after it moves onto the freeway.

# Criteria for the Rules

We adopt flow rate and average speed as criteria for the quality of traffic flow, sharp braking frequency as the criterion for safety, and satisfaction and standard deviation of speed as criteria for people's experience. We analyze the five basic rules with each criterion to see their performance in both light and heavy traffic.

- Flow rate and average speed:

In light traffic, the no-overtaking rule performs the worst because the road resources cannot be fully used. There is no significant difference among the other rules (Figure 12(a)).

![](images/84954b558271d300dc69429af53a46bde3d3325ea577cda7e925df2ca9f416a9.jpg)
(a) Flow rate vs. rule.

![](images/59046236e301cbe88f0ebbd9fb62e9c45dca26f591e0173eb685de8e8ff8d4f1.jpg)
(b) Average speed vs. rule.
Figure 12. Flow rate and average speed vs. rule, for light traffic (left bars) and for heavy traffic (right bars).

In heavy traffic (Figure 12(b)), the complete-assigned-lane rule behaves the worst: The leftmost lane, which is assigned to cars, suffers bad congestion while the other two lanes remain relatively uncongested.
The different-speed-limit-on-each-lane rule performs the best: This rule bans large vehicles from entering the passing lanes and prevents a large number of cars from entering the rightmost lane.

- Sharp braking frequency:

The results are presented in Figure 13. In light traffic, the no-overtaking rule performs the worst because vehicles at high speeds encountering slow vehicles can only brake instead of changing lanes. The well-performing rules are KRETP and the complete-assigned-lane rule. The latter also bans overtaking, but speeds are relatively uniform, so sharp braking seldom occurs.

In heavy traffic, the free-overtaking rule performs the worst. The different-speed-limit-on-each-lane rule performs the best because it reasonably allocates the vehicles to the three lanes by speed; thus, traffic can flow freely.

- Satisfaction and standard deviation of speed:

Satisfaction rises with speed (no surprise there). In terms of the standard deviation of speed: In light traffic, different-speed-limit-on-each-lane performs the best; the other rules perform almost the same. In heavy traffic, the complete-assigned-lane rule does worst, because of low speed. In this case, the criterion cannot accurately indicate people's experience.

![](images/3f1b6d5350eaf319de7a84782ef5e53f5e0b447e1eea0092610dc5acfc5213dd.jpg)
Figure 13. Sharp braking frequency vs. rule, for light traffic (left bars) and for heavy traffic (right bars).

# Fuzzy Synthetic Evaluation for Basic Rules

We obtain different results using different criteria. If we want a unique answer, we have to combine the criteria into a single one. The relative importance of the criteria is hard to determine. Since we have no other information, we implement a fuzzy synthetic evaluation (FSE) [Sadiq et al. 2004], which determines the weight of each criterion based on the data.

One way to determine the weights is the coefficient of variation method.
If a criterion can differentiate the rules evaluated, the method will assign a large weight to the criterion.

In the following subsections, we use light traffic to introduce the process of FSE. Heavy traffic can be processed in the same way.

# Identify alternatives and attributes

The alternatives are the five basic rules and the attributes are the five criteria. The values of each attribute for each alternative are listed in Table 2, from which we derive the ideal alternative:

$$
u = (u_{1}^{0}, u_{2}^{0}, u_{3}^{0}, u_{4}^{0}, u_{5}^{0}) = (0.964, 4.552, 0.033, 0.841, 0.813).
$$

# Determine fuzzy evaluation matrix

The membership function is defined as

$$
r_{ij} = \frac{\left| a_{ij} - u_{j}^{0} \right|}{\max_{i} \{a_{ij}\} - \min_{i} \{a_{ij}\}},
$$

where $a_{ij}$ is the value of attribute $j$ for alternative $i$.

Table 2. Criteria results for basic rules in light traffic.
| Rule | Flow rate | Average speed | Sharp braking frequency | Satisfaction | Std. deviation of speed |
|---|---|---|---|---|---|
| keep-right-except-to-pass | 0.96 | 4.55 | 0.04 | 0.84 | 1.15 |
| free-overtaking | 0.93 | 4.20 | 0.08 | 0.79 | 1.36 |
| no-overtaking | 0.63 | 2.80 | 0.09 | 0.53 | 1.42 |
| different-speed-limit-on-each-lane | 0.85 | 4.13 | 0.06 | 0.78 | 0.81 |
| complete-assigned-lane | 0.93 | 4.26 | 0.03 | 0.81 | 1.48 |
Then we have the fuzzy evaluation matrix

$$
\mathbf{R} = \left[ \begin{array}{lllll}
0.000 & 0.000 & 0.127 & 0.000 & 0.508 \\
0.108 & 0.201 & 0.750 & 0.180 & 0.814 \\
1.000 & 1.000 & 1.000 & 1.000 & 0.901 \\
0.357 & 0.242 & 0.504 & 0.205 & 0.000 \\
0.096 & 0.169 & 0.000 & 0.104 & 1.000
\end{array} \right].
$$

Using the coefficient of variation method, we define $v_{j}$ and $w_{j}$ as

$$
v_{j} = \frac{s_{j}}{\overline{x}_{j}}, \qquad w_{j} = \frac{v_{j}}{\sum_{k=1}^{5} v_{k}},
$$

where $s_{j}$ and $\overline{x}_{j}$ are the standard deviation and the mean of the $j^{\text{th}}$ column of $\mathbf{R}$. Then we calculate the weighting vector

$$
w = (0.243, 0.226, 0.164, 0.251, 0.117).
$$

# Aggregate using a fuzzy operator

We use a fuzzy operator to aggregate and obtain the relative deviation

$$
F_{i} = \sum_{j=1}^{5} w_{j} r_{ij},
$$

which measures the distance from a specific alternative to the ideal alternative. The lower the value, the better the alternative.

# The Results

The relative deviations in both cases are listed in Table 3. Smaller is better.

In light traffic, KRETP has an absolute advantage over the other rules; in heavy traffic, the different-speed-limit-on-each-lane rule is the best.

Table 3. Relative deviations of different rules, in light traffic and in heavy traffic.
| Rule | Light traffic | Heavy traffic |
|---|---|---|
| keep-right-except-to-pass | 0.08 | 0.56 |
| free-overtaking | 0.34 | 0.57 |
| no-overtaking | 1.00 | 0.63 |
| different-speed-limit-on-each-lane | 0.28 | 0.16 |
| complete-assigned-lane | 0.21 | 0.67 |
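As an arithmetic check, the weighting vector and the light-traffic column of Table 3 can be reproduced from the printed matrix $\mathbf{R}$. The paper does not state explicitly which data the coefficient of variation is applied to; the published weights are consistent with applying it (with the population standard deviation) to the columns of $\mathbf{R}$. A sketch, not the authors' code:

```python
import numpy as np

# Fuzzy evaluation matrix R for light traffic (rows: the five basic rules;
# columns: flow rate, avg. speed, sharp braking, satisfaction, std. dev. of speed).
R = np.array([
    [0.000, 0.000, 0.127, 0.000, 0.508],   # keep-right-except-to-pass
    [0.108, 0.201, 0.750, 0.180, 0.814],   # free-overtaking
    [1.000, 1.000, 1.000, 1.000, 0.901],   # no-overtaking
    [0.357, 0.242, 0.504, 0.205, 0.000],   # different-speed-limit-on-each-lane
    [0.096, 0.169, 0.000, 0.104, 1.000],   # complete-assigned-lane
])

# Coefficient-of-variation weights: v_j = s_j / xbar_j, normalized to sum to 1.
v = R.std(axis=0) / R.mean(axis=0)
w = v / v.sum()        # rounds to (0.243, 0.226, 0.164, 0.251, 0.117)

# Relative deviation of each rule from the ideal alternative.
F = R @ w              # compare the light-traffic column of Table 3
```

Running this recovers the published weights to three decimals and reproduces the light-traffic deviations of Table 3 to within rounding, with KRETP the clear minimum.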
# Improved Rules of Overtaking

We propose four new rules by modifying or combining the five basic rules.

- The partial-assigned-lane rule: Cars are assigned to the middle lane and the leftmost lane, with permission to change between these two lanes. Buses and trucks are assigned to the rightmost lane and are banned from changing lanes. We present this rule because cars account for $60\%$ of vehicles and have relatively high speed.
- The trucks-on-rightmost-lane-only rule: Trucks must stick to the rightmost lane, while cars and buses can change among the three lanes. We present this rule because trucks have the lowest speed and they may constrain the speed of others if they are in a passing lane.
- The minimum-speed-on-leftmost-lane rule: We set a minimum speed for the leftmost lane.
- The partial-assigned-lane-and-keep-right rule: We combine the partial-assigned-lane rule and KRETP: Vehicles are assigned by the partial-assigned-lane rule, and cars overtake by KRETP.

# Fuzzy Synthetic Evaluation for All Rules

To test the performance of the improved rules, we apply a fuzzy synthetic evaluation to all the rules, with the results in Table 4.

# Conclusions

- The different-speed-limit-on-each-lane rule is the best in heavy traffic and hence should be used during rush hour.
- The partial-assigned-lane-and-keep-right rule is the best in light traffic and hence should be used at other times.

# Sensitivity Analysis

We test our model in both light traffic and heavy traffic for various changes in our assumptions. The analysis shows that our model is not unduly sensitive.

Table 4. Relative deviations of different rules in light and in heavy traffic.
| Rule | Light traffic | Heavy traffic | Overall |
|---|---|---|---|
| keep-right-except-to-pass | 0.14 | 0.52 | 0.27 |
| free-overtaking | 0.37 | 0.52 | 0.42 |
| no-overtaking | 0.99 | 0.58 | 0.85 |
| different-speed-limit-on-each-lane | 0.33 | 0.12 | 0.26 |
| complete-assigned-lane | 0.26 | 0.71 | 0.41 |
| partial-assigned-lane | 0.06 | 0.56 | 0.23 |
| trucks-on-rightmost-lane-only | 0.17 | 0.53 | 0.29 |
| minimum-speed-on-leftmost-lane | 0.23 | 0.32 | 0.26 |
| partial-assigned-lane-and-keep-right | 0.00 | 0.56 | 0.19 |
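The paper does not say how the Overall column is formed from the other two. The printed values are consistent with weighting light traffic twice as heavily as heavy traffic; the 2:1 weighting below is our hypothesis, not stated by the authors, but it matches every row to within rounding:

```python
# Relative deviations from Table 4: (light, heavy, printed overall).
rows = {
    "keep-right-except-to-pass":            (0.14, 0.52, 0.27),
    "free-overtaking":                      (0.37, 0.52, 0.42),
    "no-overtaking":                        (0.99, 0.58, 0.85),
    "different-speed-limit-on-each-lane":   (0.33, 0.12, 0.26),
    "complete-assigned-lane":               (0.26, 0.71, 0.41),
    "partial-assigned-lane":                (0.06, 0.56, 0.23),
    "trucks-on-rightmost-lane-only":        (0.17, 0.53, 0.29),
    "minimum-speed-on-leftmost-lane":       (0.23, 0.32, 0.26),
    "partial-assigned-lane-and-keep-right": (0.00, 0.56, 0.19),
}

for rule, (light, heavy, overall) in rows.items():
    guess = (2 * light + heavy) / 3      # hypothesized 2:1 light:heavy weighting
    assert abs(guess - overall) < 0.01, rule
```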
# Percentages of Vehicles

Although we base our parameters on the data from Anhui [2013], the percentages of vehicles of different types may vary. Therefore, we change the percentage of large vehicles $(40\%)$ by up to $15\%$. We observe a $17\%$ increase in sharp braking frequency in light traffic, but the other criteria change little. In heavy traffic, all criteria change little. Hence, our model can be used on freeways with varying percentages of vehicles.

# Random Slowing

The probability $p_{\mathrm{slow}}$ describes random deceleration. We assumed $p_{\mathrm{slow}} = 0.2$, since few data are available. If we increase it by $15\%$, the sharp braking frequency rises $16\%$, which is acceptable.

# Willingness to Change Lanes

A driver might choose not to change lanes even if all the other conditions are satisfied. We assumed that the probabilities of willingness to change to the left lane and to the right lane are 0.5 and 0.7, respectively. We change the probabilities by up to $15\%$ proportionally. The maximum deviation in any criterion is $7\%$, which indicates good robustness.

# Further Discussions

# Countries that Drive on the Left

Although we can imagine that driving on the left mirrors driving on the right, one difference remains: human beings. A left-handed person is always a left-handed person. We assumed in our model that the probabilities of being willing to change to the left lane and to the right lane were 0.5 and 0.7. If this tendency does not change under any circumstances, then in a left-driving country drivers will have a higher tendency to move to the right lane—there, the passing lane. This is exactly the same as if we had swapped the two probabilities in our original model.

Our simulation shows a maximum change of $5\%$ in any of our criteria. So we can safely conclude that even if willingness to change lanes differs between left and right, the deviations are small enough to ignore.
# Modifications for an Intelligent System

We propose two intelligent systems:

- The semi-intelligent system: This system forces vehicles to change to the right lane if the conditions are satisfied, while whether to change to the left lane relies upon human judgment.
- The complete intelligent system: This system not only forces vehicles to change to the right lane if the conditions are satisfied, but also forces vehicles to change to the left lane if the corresponding conditions are satisfied.

We run our simulation to examine the performance of the intelligent systems.

- Flow rate and average speed:

In light traffic, the flow rate of the complete intelligent system increases slightly, while that of the semi-intelligent system decreases slightly, compared with the situation without an intelligent system. The small impact is due to rich passing-lane resources.

In heavy traffic, the flow rates of both intelligent systems increase, because forcing vehicles to move back to the rightmost lane releases more passing-lane resources.

- Sharp braking frequency and shift ratio: In light traffic, the shift ratio and sharp braking frequency both change slightly. In heavy traffic, the shift ratio increases significantly, as expected; and the sharp braking frequency decreases greatly if the complete intelligent system is implemented: If a driver is about to be constrained by the vehicle ahead, the driver can change to the left lane in time to avoid sharp braking.
- Lane utilization ratio:

The overall changes are relatively small.

# Additional Research on the Refined Model with Ramps

We present a refined model with ramps. Due to open boundary conditions, we use $\lambda$ to control the occupancy, which determines whether the traffic is heavy or light. Then we vary the value of $p_{\mathrm{exit}}$ to study the model. We have some interesting findings.
- Flow rate: The probability $p_{\text{exit}}$ of exiting has little, if any, impact on the flow rate, partly because we equalize the number of vehicles entering with the number exiting.

- Average speed: In the previous analysis, the average speed was always consistent with the flow rate. Adding ramps, however, breaks this consistency, especially in heavy traffic. As $p_{\mathrm{exit}}$ increases, a large number of vehicles need to exit through the off-ramp. They decelerate in advance, which causes low average speed.
- Lane utilization ratio:

In heavy traffic, $p_{\mathrm{exit}}$ increases utilization of the rightmost lane, because vehicles need to use the rightmost lane to exit.

# Failure Ratio

Some vehicles might fail to exit as desired, a common situation in the real world. In light traffic, $p_{\mathrm{exit}}$ is irrelevant; vehicles can move to the rightmost lane with ease. In heavy traffic, when $p_{\mathrm{exit}}$ is low, the average speed is relatively high, which makes moving to the rightmost lane difficult; when $p_{\mathrm{exit}}$ becomes high, the average speed decreases, so vehicles have more time to move to the rightmost lane, which in turn reduces the failure ratio.

# Strengths and Weaknesses

# Strengths

- Our sensitivity analyses show that our models are fairly robust to changes in parameter values.
- We take into account different types of vehicles, based on data. We consider the lengths of vehicles and their different maximum speeds, which makes the model closer to reality.
- We come up with various criteria to compare different situations. Hence, an overall comparison can be made based on these criteria.
- The results of our models also agree with common sense and experience.
- We offer a refined model to consider the role of ramps.

# Weaknesses

- Factors of human judgment may be oversimplified.
To account for a driver randomly decelerating, or choosing not to overtake when overtaking is possible, we simply assign each behavior a fixed probability. The actual situation may be more complicated.
- Some of the parameters are based on semi-educated guesses because few data are available. However, based on our sensitivity analysis, small changes in them make little difference.
- We did not consider look-ahead by each driver. In our model, drivers change their speed based only on information from the previous time step. But in fact, they can look further ahead in time, make a prediction, and choose their speed in a more complicated way.

# References

Anhui Expressway Company Limited Network. 2013. Traffic flow on Tian-chang section of the national road 205.
http://www.anhui-expressway.net/enterprise/
quarterselect1.aspx?Year=2013&RoadID=205%u56fd%u9053
%u5929%u957f%u6bb5.
Chowdhury, D., D.E. Wolf, and M. Schreckenberg. 1997. Particle hopping models for two-lane traffic with two kinds of vehicles: Effects of lane-changing rules. *Physica A: Statistical Mechanics and its Applications* 235: 417-439.
Knospe, W., L. Santen, A. Schadschneider, and M. Schreckenberg. 1999. Disorder effects in cellular automata for two-lane traffic. *Physica A: Statistical Mechanics and its Applications* 265 (3): 614-633.
Nagel, K., and M. Schreckenberg. 1992. A cellular automaton model for freeway traffic. *Journal de Physique I* 2 (12): 2221-2229.
Rickert, M., K. Nagel, M. Schreckenberg, and A. Latour. 1996. Two lane traffic simulations using cellular automata. *Physica A: Statistical Mechanics and its Applications* 231 (4): 534-550.
Roess, R.P., E.S. Prassas, and W.R. McShane. 2004. *Traffic Engineering*. 3rd ed. Upper Saddle River, NJ: Pearson Education, Inc.
Sadiq, R., T. Husain, B. Veitch, and N. Bose. 2004. Risk-based decision-making for drilling waste discharges using a fuzzy synthetic evaluation technique. *Ocean Engineering* 31 (16): 1929-1953.
+ +![](images/fb3638cbe7ad267085093a9d1effc1969e36a4041d737a7076bde577b8283762.jpg) +Yunyi Zhang, Xiao Zhao, and Yaofeng Zhong. + +# Judges' Commentary: The Keep Right Papers + +Kelly Black + +Dept. of Mathematics + +Clarkson University + +P.O. Box 5815 + +Potsdam, NY 13699-5815 + +kjblack@gmail.com + +# Introduction + +The questions for the Keep Right Problem of the 2014 MCM required teams to examine traffic rules and determine a way to assess different rules and practices. The specific task was to examine the "keep right except to pass" rule. Teams were asked to determine how to balance different concerns such as safety and traffic flow. + +The majority of teams developed computational models to simulate traffic. The teams generally extended and adapted standard models to use multiple lanes, with the majority of drivers respecting various general rules. Many teams examined a variety of different rules, and one of the primary difficulties was to determine ways to analyze the results of the team's simulations. + +This commentary is an overview of the approaches used by the teams on this problem. I first examine the problem itself and discuss the different approaches. Next, I discuss issues associated with presenting the results of a model. Finally, I provide an overview of various topics. + +I do not provide an overview of the judging process. This is an important topic discussed in the commentaries from previous years [Black 2009; 2011; 2013]. I highly recommend that both advisors and students read some of the commentaries from previous years that do include information about the judging process, since that information can help to motivate the importance of different aspects of a report. + +# The Modeling Problem + +I discuss the questions posed as part of the Keep-Right Problem and offer an overview of the modeling approaches employed. + +Prior to reading papers, the judges read the problem statement carefully. 
After reading the statement, each judge reads a random sample of papers. This is done so that the judges get an idea of what the teams are able to address in the short time allotted. We are acutely aware that the time and resource constraints make it difficult to address the problem, and we make every effort to let the teams' efforts—rather than the judges' expectations—drive the event. + +The majority of papers made use of either a physics-based model or a cellular automata model. It was not uncommon to read papers that included both approaches, and many teams examined a variety of rules and made comparisons between the differences in their models. + +# The Questions + +There are three parts in the original problem statement. + +- The first part of the problem requires teams to "build and analyze a mathematical model to analyze the performance" of a traffic rule, the rule that requires drivers to stay to one side of the road and change lanes only to pass another car. The important aspect, which is open to interpretation, is to "analyze" the model. A wide range of approaches was attempted. At the extremes, some teams focused only on flow rates, and some teams focused only on safety issues. The best papers examined a combination and examined the relationship between these two aspects. +- The second part of the problem required the teams to make an argument as to whether or not the rule made sense in a country where drivers stay on the left side of the road. This was a question that received less attention. The vast majority of teams mentioned it or examined a few simulations, and most teams provided broad insight into the problem. +- The final part was to examine the impact of intelligent systems. The teams were asked to determine what would happen if all of the driving was automated, and the teams were asked to determine how the analysis might change. This part of the question was wide open to interpretation. 
The wording was vague, and it was interpreted in a wide variety of ways. This was an opportunity for a team to excel and do something that could set their paper apart.

# Models

The majority of teams constructed a computational model and conducted simulations that made use of one of two approaches.

- The first common approach was a mechanics-based physics model.
- The second approach was a cellular automata model based on either the Biham-Middleton-Levine model [1992] or the Nagel-Schreckenberg traffic model [1992]. A small number of teams attempted to model traffic flow using a continuous model that results in a partial differential equation [Lighthill and Whitham 1955; Richards 1956].

In both the physics and the cellular automata approaches, the teams constructed computational models and ran multiple simulations under a variety of conditions. Many of the higher-ranked papers first presented a simple model and then looked at a succession of more complicated models. The teams often discussed the relative shortcomings of the models and proposed fixes to address the issues. In doing so, the teams constructed a sequence of models with accompanying analysis, provided a critical review of their models, and proposed adaptations to improve them.

The teams often proposed a set of rules and governing equations for how to react over discrete time steps in their computational model. These rules could be adapted to examine different circumstances with respect to passing other cars and changing lanes. The models had a wide range of features, such as how to handle the inflow and the outflow, as well as on- and off-ramps.

Presenting rules in a structured way is a challenge. Many teams simply provided a list of rules and relationships with little or no discussion or motivation. Reading these papers and trying to understand the models was difficult, and it was not clear what the teams actually did and what they simply repeated from what they found in the literature.
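To make the discussion concrete, here is the kind of discrete-time update rule set that many teams adapted: the single-lane Nagel-Schreckenberg model, in a minimal Python sketch. The road length, car count, maximum speed, and slowdown probability below are illustrative choices, not values from any particular paper.

```python
import random

def nasch_step(pos, vel, length, v_max=5, p_slow=0.3):
    """One Nagel-Schreckenberg update on a circular single-lane road.

    pos: cell indices of the cars, in cyclic driving order
    vel: corresponding integer speeds
    """
    n = len(pos)
    new_pos, new_vel = [], []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % length  # free cells to car ahead
        v = min(vel[i] + 1, v_max)            # 1. accelerate toward v_max
        v = min(v, gap)                       # 2. brake to avoid a collision
        if v > 0 and random.random() < p_slow:
            v -= 1                            # 3. random slowdown
        new_vel.append(v)
        new_pos.append((pos[i] + v) % length)  # 4. move
    return new_pos, new_vel

# a short illustrative run: a ring of 100 cells with 20 cars
random.seed(1)
length, ncars = 100, 20
pos = sorted(random.sample(range(length), ncars))
vel = [0] * ncars
for _ in range(200):
    pos, vel = nasch_step(pos, vel, length)
throughput = sum(vel) / length  # cars passing a fixed point per step
```

Each step applies acceleration, braking, random slowdown, and movement in that order; the multi-lane variants the teams built add lane-changing rules on top of this loop.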
The papers that were most warmly received by the judges included a narrative that described the relationships, discussed the motivation for the relationships, and offered insights into the individual terms. Additionally, many teams included a flowchart with a brief discussion of the chart in their narrative. Such aids were immensely helpful in trying to understand a team's model.

Finally, a smaller number of teams attempted to construct continuous models based on partial differential equations. Such models were problematic:

- First, it was difficult to incorporate multiple lanes in the resulting models.
- Second, the resulting models often led to nonlinear hyperbolic equations, which admit shocks in their solutions. Trying to approximate or find analytic solutions to the resulting equations is problematic.

# Presenting Results

One of the biggest challenges in the problem was for the teams to present their results in a coherent, structured way. The majority of teams conducted Monte Carlo simulations. That is, they had to examine the results of multiple simulations, and those simulations were conducted using probabilistic models. Assembling the resulting data, analyzing it, and presenting the results is a difficult task.

The primary way that results were given was to provide sample means from multiple runs. Additionally, it was common to present the results from a single simulation. The results were often given in the form of tables and graphical representations. Again, the tables often provided sample means, but it was rare to provide any indication of the variation and distribution in the sample data.

With respect to graphical representation, a common figure was a "time-space" chart representing the traffic density in both space and time. A common issue with such figures was that they were presented with little discussion or were poorly annotated.
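The kind of basic summary the judges hoped to see alongside such results takes only a few lines. A sketch using Python's standard library, with invented throughput samples standing in for a team's Monte Carlo output:

```python
import statistics

# hypothetical throughput samples from repeated simulation runs
runs = [41.2, 39.8, 44.1, 40.5, 42.7, 38.9, 43.3, 41.8, 40.1, 42.2]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)                     # sample standard deviation
q1, med, q3 = statistics.quantiles(runs, n=4)   # quartiles, e.g. for a boxplot

print(f"throughput: {mean:.1f} +/- {sd:.1f} (IQR {q1:.1f}-{q3:.1f}, n={len(runs)})")
```

Reporting even this much, a sample mean together with its standard deviation and some indication of spread, would have set a paper apart from most of the field.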
Teams that were able to provide good descriptions of every figure with proper annotation of the plots had a considerable advantage with respect to how a judge reacted to their paper. The kinds of figures needed to convey the important features of such complex data sets require detailed descriptions. A team that simply presented a figure with no discussion or had figures that lacked labels on the axes and titles was at a severe disadvantage.

Finally, most teams generated results that were sample data based on a probabilistic model. Few teams provided adequate statistical measures of their data. Even fewer teams provided any sense of the distribution of their data, with histograms and boxplots an exceedingly rare occurrence. This has been the case for as long as I have been a judge! Such basic practices to communicate and interpret stochastic data were acutely missing this year. It is clear to me that we are failing our students when it comes to developing their understanding of what data is, how to think about sampling, and how to analyze data.

The few teams that recognized that they had data and provided the most basic of statistical analyses had an immediate advantage. Simply reporting a sample standard deviation was enough to make a paper stand out from the rest of the field! The few teams that discussed the distribution of their results in the slightest way immediately demonstrated an understanding of the nature of their work in a way that very few other teams were able to do.

# Other Themes

In this section, I discuss a number of topics that are always important—and require discussion every year. First, I give a few notes on the summary and the introduction. Next, I discuss strengths and weaknesses, followed by sensitivity. Finally, I offer a discussion on writing in general.

# The Summary and Introduction

Every year, the summary is given a nontrivial weight in the judging.
The summary is the first thing that a judge will read, and it is the first impression. In the early rounds of the judging, the relative importance of the summary is magnified. By the later rounds, it is not as important; but that is partly because most of the papers still being read in the later rounds have well-written summaries. + +My overall impression is that this year the summaries appeared to be better than in previous events. A good summary should do three things: + +- It should provide a context and overview of the problem. +- It should give the reader a good idea of the general approach. +- Finally, it should include an explicit statement of specific results. + +The next thing that a judge will read is the introduction. Many teams borrow heavily from the summary for their introduction, and that is a good thing. The introduction should include the same things that are in the summary, and we understand that the teams operate under difficult time constraints. There are some important additions, though, that should be in the introduction: + +- First, the introduction should provide more context and background information about the problem. The best papers provided background information about the different modeling approaches and gave the reader a sense of the history of the problem. A team can set the tone for the paper by immediately demonstrating a basic understanding of the problem and letting the reader know what they think are the core ideas. +- The introduction should give an explicit statement of the contents and structure of the paper. The team should tell the reader what to expect and mitigate any surprises. After reading the introduction, the reader should know what to expect and understand the structure of the whole paper. + +# The Conclusion + +One aspect of the paper that is often overlooked is the conclusion. 
It is not uncommon, even in the best papers, to have a short conclusion that provides little insight into the problem or the paper itself. The writing of the conclusion is an opportunity for the team to wrap up loose ends and to remind the reader of the full spectrum of activities that have been discussed in the paper.

When a paper ends with a weak statement and no overview of the results or approach, it is a let-down. The conclusion is the last chance to make an impression on the person reading the paper. The teams should take advantage of the conclusion to remind the reader of the tremendous effort required to do the work they are presenting.

# Strengths and Weaknesses

An essential part of the modeling process is to stop and take a critical view of the model. A team should identify the things that are best and the things that need to be improved, so that the team can demonstrate that they understand and can perform this aspect of the modeling process. No model is perfect, and each model can provide insights due to specific strengths.

Identifying the strengths and weaknesses of a model is something that is stated in the requirements and other materials for the MCM. Every year, the judges assign a relative weight for this part of a paper; and when we pick up a paper we expect to see insights into a model's strengths and weaknesses. Most teams include a separate section in which a bulleted list is given, although few provide an adequate introduction and transition for their lists.

We understand that there are enormous time constraints, so a bulleted list is good. The strengths-and-weaknesses discussion should not be just a bulleted list, though. There should be some commentary within the narrative itself that also discusses these issues. In this year's contest, a large number of teams developed multiple models.
The best teams provided transitions that discussed the motivation for their improvements, explicitly acknowledged the role of model refinement, and identified each model's relative strengths and weaknesses.

# Sensitivity

Another aspect of modeling that is given a heavy weight is the sensitivity of the model. This is an extremely important idea, yet it is the one part of the modeling process that is given the least attention. A team that performs a structured and detailed sensitivity analysis will stand out. Few teams do it, though.

Some teams have a section on sensitivity, but most of the discussion about sensitivity is either superficial or does not adequately demonstrate an understanding of what sensitivity is. It is important for a team to look at individual terms or parts of a model and ask what happens to the measured responses under some small change. That change can be as simple as a small change in the value of a parameter, or as nuanced as examining what happens under a small change in the model itself. A team can perform a small bit of relatively easy analysis, and that can make a big difference in how a paper is received.

# Writing

Writing and grammar are important. Every year, we see teams that appear to have created excellent models and probably performed an excellent set of analyses on their models. Unfortunately, in the eyes of someone reading a paper, the work cannot be better than the writing submitted. A paper with poor grammar or a poor overall structure will not receive a high rating.

Students need to have experience writing mathematics. We want our students to be able to share their ideas. Writing mathematics, though, is a skill, and it is a different skill from mathematics. Small things can make a big difference:

- All equations should be numbered. (This makes it easier to discuss the ideas in the paper with others.)
- A table of contents makes it easier for the reader to understand how a paper is structured.
- Equations and expressions should be part of a sentence and have proper punctuation.
- Incomplete sentences. Really bad.
- Spell-checkers can suggest the wrong word. Spell-checkers are not your friend.
- Transitions are absolutely vital.
- There is a difference between a citation and a reference. Both are necessary.

Students do not get many opportunities to engage in the full process of creating and bringing together complex mathematical ideas, performing an analysis of their ideas, and writing a complete report of all of their work. It is not something that they will do outside of a mathematics course. The inclusion of an expression in a sentence is not intuitive, and students will not know how to do it simply by reading or picking it up on the street. Most students have no issue about including both citations and a list of references in papers for their other courses, but they will not automatically bring that skill into mathematical writing unless explicitly reminded that it is still important.

Students need practice writing in a mathematical context. Writing a full report such as those submitted in the MCM is fundamentally different from almost all of the writing that they normally do. Advisors who take the time to have students practice this skill will be doing an enormous favor for their students, and that practice will aid them well beyond just this event.

# Conclusions

To address the Keep-Right Problem, teams were required to develop a model to describe traffic flow. The majority of teams made use of a computational model and examined the results of Monte Carlo simulations. Most teams extended existing models, and the primary difference among entries was in the way that the teams interpreted and presented the results of their simulations.

This was a difficult task in that for most cases the data generated is stochastic in nature.
Few teams examined their data using formal statistical techniques, and few were able to present the nature of the variation in their data formally. Presenting results from this kind of data is a difficult task, and those who made good use of figures and tables and discussed them in a structured manner had an advantage in how their paper was perceived.

In addition to the problems associated with discussing and presenting their model, the teams had to address other important tasks. As is the case every year, the importance of the summary, conclusion, and writing cannot be overstated. Also, providing a critical view of the model is vital, and this year determining the sensitivity of the model carried a larger weight than usual.

# References

Biham, O., A.A. Middleton, and D. Levine. Self-organization and a dynamical transition in traffic-flow models. Physical Review A 46 (10) (1992): R6124-R6127. http://arxiv.org/pdf/cond-mat/9206001.pdf.

Black, Kelly. Judge's Commentary: The Outstanding Traffic Circle papers. The UMAP Journal 30 (3) (2009): 305-311.

_____. Judges' Commentary: The Outstanding Snowboard Course papers. The UMAP Journal 32 (2) (2011): 123-129.

_____. Judges' Commentary: The Ultimate Brownie Pan papers. The UMAP Journal 34 (2-3) (2013): 141-149.

Lighthill, M.J., and G.B. Whitham. On kinematic waves. II. A theory of traffic flow on long crowded roads. Proceedings of the Royal Society of London 229 (1178) (1955): 317-345. https://amath.colorado.edu/sites/default/files/2013/09/1710796241/PRSA_Lighthill_1955.pdf.

Nagel, Kai, and Michael Schreckenberg. A cellular automaton model for freeway traffic. Journal de Physique I France 2 (12) (December 1992): 2221-2229. http://hal.archives-ouvertes.fr/docs/00/24/66/97/PDF/ajp-jp1v2p2221.pdf.

Richards, P.I. Shock waves on the highway. Operations Research 4 (1) (1956): 42-51.
+ +# About the Author + +Kelly Black is a faculty member in the Dept. of Mathematics and Computer Science at Clarkson University. He received his undergraduate degree in Mathematics and Computer Science from Rose-Hulman Institute of Technology and his master's and Ph.D. degrees from the Applied Mathematics program at Brown University. He has wide-ranging research interests, including laser simulations, ecology, and spectral methods for the approximation of partial differential equations. + +# Author's Commentary: The Keep Right Papers + +Michael Tortorella + +Dept. of Industrial and Systems Engineering + +Rutgers University + +Piscataway, NJ + +mtortore@rci.rutgers.edu + +# Introduction + +My intent in setting this problem was to begin to understand the global properties of the keep-right-except-to-pass (KRETP) rule. One has the intuitive sense that if everyone kept to this rule without exception, then traffic would eventually become chaotic (not necessarily in the technical sense) and throughput would decrease. If we think of the flow of traffic on a multi-lane highway as a stochastic process, the problem was intended to elicit an understanding of how the application of this particular control scheme alters the properties of the process. For example, does the process with KRETP become unstable or chaotic (in the technical sense) as the traffic density increases, and what implications would that have for throughput and safety? If so, might there be an alternative that works better? Or is any control scheme needed at all? + +# Disappointment + +A solution along these lines in full generality is certainly too much to ask of undergraduates, never mind undergraduates under severe time pressure; but I thought it possible that a simple Poisson process model for each lane, with switching between lanes as prescribed by the KRETP rule, might have been tried. I was disappointed to see that such an approach was not considered by any of the teams. 
The teams construed the problem very locally and narrowly, and were satisfied to apply a readily-available cellular automaton model, with minor modifications, to study throughput and safety along a linear stretch of freeway having no on- or off-ramps. The best papers did a creditable job with this pedestrian approach, but I couldn't help feeling that there was still something missing. It was as if the teams were solving a consulting problem and gave the client a solution that met the letter of the requirement but nothing more.

I noticed the same phenomenon in the Snowboard Course (half-pipe) Problem from a few years ago [Giordano 2011; Black 2011]. When I set that problem, my intent was to see if the current half-pipe shape was optimal (in whatever sense would be defined by the teams). In the event, however, teams went online and found a standard definition of half-pipe, including shape, in Wikipedia; and thereafter they confined their solutions to fiddling with dimensions—which was far from interesting and far from what was intended.

# Leaving Room for Creativity

In the Keep-Right Problem, teams did well in understanding that it was necessary to create specific measures for throughput and safety so that these properties of the rule could be exposed. This activity was consistent with the MCM's intent of leaving room for team creativity in determining the specific mathematical constructs and expressions that teams will use to solve the problem.

Well-written problems play to this strength, allowing teams a lot of freedom in constructing their approaches and solutions. I would prefer to see teams exercise much more creativity in general. I think this goal should motivate us to write problems in such a way that it becomes clear that finding a model that someone else has already constructed, and modifying it in some minor way, may produce a solution to the problem—but that such an approach is not consistent with the spirit of the MCM.
I believe that spirit is to draw out the teams' own ideas, even if they are imperfect or not fully formed, so that they get some practice in going beyond the letter of the requirements to test their own skills in a more challenging way. So I believe that those of us who write problems can learn from this experience and pay more attention to whether a problem statement does enough to support this need without unnecessarily constraining the possible approaches that teams may consider. Indeed, properly written, the problem should encourage the team to range widely over possible solution approaches before settling on something to write up for the competition. (Coaches: Train your teams to spend time at the beginning of the contest period in brainstorming mode, looking for other connections—maybe even train using mind-mapping software to explicitly bring unusual ideas out into the open!) + +# Criteria for the Keep-Right Problem + +For the KRETP problem, certain aspects were essential. Papers that did not consider + +- three (or more) lanes of traffic, +- behavior in heavy traffic (a problem requirement), +- throughput and safety (another problem requirement), and +- entrances and exits on a limited-access highway + +were downgraded. Similarly, teams that found a model online and did not add any value of their own were not ranked well. + +Factors causing papers to be looked at more favorably included + +- explicit consideration of a tradeoff between throughput and safety, +- explicit consideration of speed variation, and/or +- the influence of new vehicle technologies (smart roads, inter-car communication, etc.). + +# Formulating Problems to Evoke Creativity + +Overall, as an author, I would have to say that I learned from this experience that the way that a problem is stated is important to getting good results. Problem requirements need to be explicitly called out. 
Perhaps even more important is the idea that problems should be written so as to encourage creativity over routine solution, and this demand places more responsibility on authors to anticipate the possible approaches that teams might take and subtly encourage approaches that promote more "interesting" solutions.

# References

Black, Kelly. Judges' Commentary: The Outstanding Snowboard Course papers. The UMAP Journal 32 (2) (2011): 123-129.

Giordano, Frank R. Results of the 2011 Mathematical Contest in Modeling. The UMAP Journal 32 (2) (2011): 99-107.

# About the Author

![](images/d381c275c868b38a3589ce23dd88e74478136302f1b7609a6f2c0d8c4ecdccb2.jpg)

Mike Tortorella is Visiting Professor at RUTCOR, the Rutgers Center for Operations Research at Rutgers, the State University of New Jersey, and Managing Director of Assured Networks, LLC. He retired from Bell Laboratories as a Distinguished Member of the Technical Staff after 26 years of service. He holds the Ph.D. degree in mathematics from Purdue University. His current interests include stochastic flow networks, network resiliency and critical infrastructure protection, and stochastic processes in reliability and performance modeling. Mike has been a judge at the MCM since 1993 and particularly enjoys the MCM problems that have a practical flavor of mathematical analysis of social policy. Mike enjoys amateur radio, playing the piano, and cycling.

# Judge's Commentary:

# The Ben Fusaro Award for 2014

Jerrold R. Griggs

Dept. of Mathematics

University of South Carolina

Columbia, SC 29208

griggs@math.sc.edu

homepage: http://www.math.sc.edu/~griggs/

# Introduction

The Ben Fusaro Award honors the Founding Director of the MCM (who continues to serve as a judge for the contest). First awarded in 2004, it recognizes an entry for an "especially creative approach" to the A Problem in the contest, which generally involves continuous mathematics.
The Fusaro Award for 2014 goes to a team from Tsinghua University, Beijing, China, for their paper titled "Keep Right to Keep 'Right.'" This paper, one of the top group of papers designated as Outstanding, stands out for a remarkably flexible model, combined with exceptional research, detailed analysis, and clear writing.

# The Problem

The Keep-Right-Except-To-Pass-Rule Problem asked teams to build and analyze a mathematical model to analyze the performance of this rule in light and in heavy traffic. Is this rule effective in promoting greater throughput? If not, teams were to suggest and analyze alternatives that might promote greater throughput, safety, and/or other factors that they deemed important.

# My Comments

I offer my comments on the paper, organized by the categories deemed appropriate for this problem by the judges, who gave the most credit for model development, analysis/validation, and conclusions.

# Summary

As always, a strong summary is essential. The abstract does a thorough job of describing the breadth of their model and methods. However, such a sophisticated model should include more information about the team's conclusions, besides the one sentence about which lane-changing rules work best. That sentence, their main conclusion, deserves to be featured more prominently in the summary.

# Format, Clarity, Writing

These are generally excellent in this entry. This paper is a pleasure to read, and it makes many interesting observations. The numerous charts and tables are informative. The organization is very good, particularly for a paper written over just a few days.

One quibble is that the reader must hunt through the paper to find the conclusions. One wishes there were a section at the end to review the conclusions and bring closure to the whole project. An executive summary would be ideal.
+ +# Model Development + +Like many of the entries on the Keep Right Problem, this paper adapts the cellular automaton model for traffic flow from the literature, called the Nagel-Schreckenberg Model (or N-S model, for short). While many teams chiefly analyze a 2-lane model, similar to the N-S models in the literature, this paper treats 3- and 4-lane traffic throughout, which is something the judges wanted to see. + +Their "basic model" includes buses and trucks, not just cars, which is nice. They do a good job justifying their periodic boundary conditions, which is equivalent to saying their road is treated as a large ring. Their "refined model" incorporates entrance and exit ramps, another important feature only the very best entries treated well. + +They introduce 9 different rules for lane-changing, which is an exceptional number, and they successfully compare them by the multiple-criteria-decision-making tool called Fuzzy Synthetic Evaluation (FSE). While FSE is not familiar to most judges, the team gives a reasonable explanation of it. + +Models are compared under conditions of both light and heavy traffic, which is very important in the analysis. + +# Analysis/Validation + +The basic model is studied from several perspectives, including average speed, utilization of the different lanes, and driver satisfaction. The imposition of a maximum speed limit is studied, as is a minimum speed limit. An exceptional factor the team considers is signaling behavior. + +A sophisticated multiple-criteria evaluation method (FSE) is used to compare 9 different lane-changing rules. This is impressive. + +The paper includes sections that discuss left-hand side driving as well as intelligent systems. These are concise, and simulations are run. (Some papers went further with these topics.) + +However, the team's analysis is sensible. A strength of the paper is that the team tries to explain the conclusions suggested by their simulations. 
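For readers unfamiliar with FSE, the core computation can be illustrated with a toy example. The criteria, weights, and membership values below are invented for illustration and are not taken from the team's paper:

```python
# Toy Fuzzy Synthetic Evaluation (FSE) for one lane-changing rule.
# criteria: average speed, lane utilization, safety -- with invented weights
weights = [0.4, 0.3, 0.3]

# membership matrix R: R[i][k] is the degree to which criterion i earns
# grade k over the grade set [poor, fair, good]; each row sums to 1
R = [
    [0.1, 0.3, 0.6],   # average speed: mostly "good"
    [0.2, 0.5, 0.3],   # lane utilization: mostly "fair"
    [0.3, 0.4, 0.3],   # safety: mixed
]

# weighted-average composition: fuzzy evaluation vector b over the grades
b = [sum(w * row[k] for w, row in zip(weights, R)) for k in range(3)]

# defuzzify with grade values poor=1, fair=2, good=3; higher is better
score = sum(bk * g for bk, g in zip(b, [1.0, 2.0, 3.0]))
```

Each candidate rule yields its own membership matrix, and the defuzzified scores then provide a single number by which the rules can be ranked.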
# Conclusions, Extras

This study involves abundant simulations that appear to address all of the issues raised in the problem, which is extraordinary. It includes many extras, particularly the variety of lane-changing rules. As noted before, there could have been better closure.

# Sensitivity Analysis

The paper includes extensive analysis of several parameters in the model, more than almost all other entries did.

# Strengths/Weaknesses

The paper concludes with a collection of relevant observations. It is clear that with more time and effort the models could be expanded to encompass many more features.

# Research

We mention one more factor that was not a separate category for the judges: Research.

This is another strength of this paper. It references several papers on the N-S model. It lists several Wikipedia articles on traffic, a traffic engineering text, and an engineering article using FSE. The team appears to draw useful information from all of these sources, with numerous literature citations in their text. By comparison, even many of the very good papers in the contest do a less-thorough literature search and/or fail to properly cite the literature when they take ideas from it.

# About the Author

Dr. Griggs is Carolina Distinguished Professor of Mathematics at the University of South Carolina, where he has supervised 15 doctoral dissertations as well as several master's and undergraduate theses. He is a Fellow of SIAM (the Society for Industrial and Applied Mathematics), which he has served in various roles, including two terms as Editor-in-Chief of the SIAM Journal on Discrete Mathematics. He has judged the MCM since 1988, and he wrote or co-wrote six MCM problems.
+ +# Evaluation System for College Coaching Legends + +Feng Xiong +Wenchao Ding +Jingling Li + +Huazhong University of Science and Technology Wuhan, China + +Advisor: Zhibin Han + +# Abstract + +To evaluate the performance (evaluation grade) of a coach, we formulate metrics on five aspects: historical record, game gold content, playoff performance, honors and contribution to the sport. Moreover, each metric is subdivided into several secondary metrics, making for a three-tier hierarchical structure. Take playoff performance as an example: We collect post-season results (Sweet Sixteen, Final Four, etc.) each year from the NCAA official Website, from Wikimedia, and so on. + +First, we use the Analytic Hierarchy Process (AHP) to determine the weight of each metric on a coach's evaluation. Second, we use Fuzzy Synthetic Evaluation (FSE) to overcome the weakness of excessively subjective factors in AHP. The FSE model is based on data and generates a fuzzy matrix. After that, we apply the entropy method and linear weights to obtain the coaches' evaluation grades. + +To evaluate the accuracy of the two models, we define hit score to reflect the difference between our results and standard rankings from several authorities, such as ESPN and Sporting News. Take NCAA basketball as a case study: AHP receives a 78.8 hit score while FSE gets 81.8, which indicates that FSE performs better than AHP. Afterwards, we develop an Aggregation Model (AM) combining the two models based on hit score. The top 5 college basketball coaches, in order, are John Wooden, Mike Krzyzewski, Adolph Rupp, Dean Smith and Bob Knight. + +The time horizon does make a difference. According to turning points in NCAA history, we divide the previous century into six periods with different time weights, which leads to changes in the rankings. + +We apply our model to college women's basketball only to find that gender does not matter. + +The UMAP Journal 35 (2-3) (2014) 157-180. ©Copyright 2014 by COMAP, Inc. 
All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP. + +The model proves to be effective in other sports. The ranking of college football coaches is: Bear Bryant, Knute Rockne, Tom Osborne, Joe Paterno, Bobby Bowden; and the top 5 coaches in college hockey are Bob Johnson, Red Berenson, Jack Parker, Jerry York, Ron Mason. + +We conduct sensitivity analysis on FSE to find the best membership function and calculation rule, and also on the aggregation weight. We find that AM performs better than either AHP or FSE alone. As a creative use, we apply AM to pick the top 3 U.S. presidents: Abraham Lincoln, George Washington, Franklin D. Roosevelt. + +We discuss the strengths and weaknesses of our model and present a nontechnical explanation of it. + +# Introduction + +# Problem Background + +Sports Illustrated is an American sports media franchise owned by media conglomerate Time Warner. The magazine Sports Illustrated is looking for the "best all time college coach," male or female, over the previous century, in any sport. + +We face mainly four problems: + +- Articulate our own metrics and build a mathematical model. +- Set up the evaluation system for the performance of the model. +- Discuss how our model can be applied with a time factor taken into account, or across both genders and all possible sports. +- Analyze the influences of the parameters, then discuss whether our model could be applied more widely. 
# Previous Research

Finding the best college coaches is an evaluation problem, and there are established models for such problems. One is the Analytic Hierarchy Process (AHP), developed by Thomas L. Saaty [2008]. The AHP provides a comprehensive and rational framework for structuring a decision problem, representing and quantifying its elements, relating those elements to overall goals, and evaluating alternative solutions [Wikipedia 2014a].

Another model is the Fuzzy Synthetic Evaluation Model (FSE). Fuzzy mathematics is a branch of mathematics related to fuzzy set theory and fuzzy logic [Wikipedia 2014b; Zadeh 1965].

# Our Work

We determine the best college coaches, male or female, in different sports. We begin with terminology, definitions, and assumptions. We then define our evaluation standard and the specific evaluation norms that we use in our models, and show some of the data that we collected.

We build two mathematical models to choose the best college coaches, and then consider a combination of the two. We extend our models further to take time, gender, and sport into consideration.

Finally, we provide an overview of our approach and give a nontechnical explanation of our models that sports fans will understand.

# Symbols, Definitions and Assumptions

# Symbols and Definitions

Our symbols and their definitions are listed in Table 1.

# General Assumptions

- The elements that we take into consideration play a vital role in the evaluation.
- Elements that we ignore do not influence the ranking.
- The data that we have collected are sufficient and accurate.
- There exists an objective and accurate ranking for coaches, and the rankings from selected media reflect that accurate ranking to some extent.

# Articulate Metrics

# Specify Evaluation Norms

For evaluating players, five main aspects count: strength, speed, skill, defense, and offense.
Similarly, a coach can be evaluated on five aspects: historical record, game gold content, playoff performance, honors, and contribution to the sport.

# Historical record:

The team's record undoubtedly accounts for the largest share of a coach's evaluation. The totals of wins $a$ and losses $b$ directly reflect coaching ability.

Table 1.
Symbols.
| Symbol | Definition |
|---|---|
| **Symbols for evaluation norms** | |
| $a_i$ | wins in year $i$ |
| $b_i$ | losses in year $i$ |
| $R$ | average SRS |
| $O$ | average SOS |
| $n_k$ | number of times for each class of playoff |
| $k_i$ | weight of each award |
| $c_i$ | points for each aspect of contribution |
| **Symbols for the Analytic Hierarchy Process** | |
| $A$ | judging matrix |
| $\lambda_{\max}$ | greatest eigenvalue of $A$ |
| $CI$ | indicator of consistency check |
| $CR$ | consistency ratio |
| $RI$ | random consistency index |
| $CW$ | weight vector for criteria level |
| $AW$ | weight vector for components level |
| $Y_1$ | evaluation grade for Model I |
| **Symbols for Fuzzy Synthetic Evaluation** | |
| $X_i$ | grades for each aspect |
| $\mu_j(X_{ij})$ | membership function |
| $X_f$ | the fuzzy matrix |
| $p_{ij}$ | characteristic weight |
| $e_j$ | entropy for the evaluation grade |
| $EW$ | weight vector in entropy method |
| $Y_2$ | evaluation grade for Model II |
| **Symbols for the Aggregation Model** | |
| $D$ | average offset distance |
| $W_I$ | weight for Model I |
| $Y$ | evaluation grade for aggregation model |
# Game gold content:

If all of a team's wins come against weak teams, those wins do not demonstrate real coaching ability. At the same time, the average point difference also matters; it reflects the coaching style, whether a coach is conservative or aggressive. We choose the following two norms:

- Simple Rating System (SRS): The simple rating system works by first finding how many points, on average, a team wins or loses by. For each game, the point differential is then weighted based on how much better or worse than average the point differential is. Let $R$ denote the average SRS:

$$
R = \frac {\sum_ {i} S R S _ {i}}{t},
$$

where $SRS_{i}$ is the SRS value for year $i$ and $t$ is the number of years.

- Strength of Schedule (SOS): Strength of schedule refers to the difficulty of beating the opponent as compared to other teams' difficulty in doing so [Wikipedia 2014d]. This criterion is especially important if teams in a league do not play each other the same number of times. Let $O$ denote the average SOS:

$$
O = \frac {\sum_ {i} S O S _ {i}}{t},
$$

where $SOS_{i}$ is the SOS value for year $i$ and $t$ is the number of years.

# Playoff performance:

Generally, during the regular season, teams play more games in their conference than outside it, but the country's best teams might not play against each other in the regular season. Therefore, post-season playoff performance, in terms of the rounds reached, is important in evaluating the coach. To quantify this aspect, we count the number of times $n_k$ that the coach's team(s) reached round $k$ of the playoffs. Let $m_{ki} = 1$ if the team reaches round $k$ in year $i$ and 0 otherwise. Then we have

$$
n _ {k} = \sum_ {i} m _ {k i}.
$$

# Honors:

There are various honors, such as the Basketball Hall of Fame and the College Basketball Hall of Fame, in addition to such awards as the Naismith College Coach of the Year, Basketball Times National Coach of the Year, and so on.
Different awards have different gold content. Let $k_{i}$ be the weight of an award won in year $i$. The total weight of all the awards for a coach is

$$
H = \sum_ {i} k _ {i}.
$$

# Contribution to sports:

We divide the contribution into five parts:

- Star players: How many star players has the coach trained?
- Coaching age: When did the coaching career start, and how long did it last?
- Tactical innovation: Did the coach invent any tactical innovations?
- Performance in international competitions: Has the coach ever coached in international competitions? If so, how many gold or silver medals?
- Popularity: The number of results returned when the coach's name is searched on Google.

We assign points for each aspect above: 0 for mediocre, 1 for good, 2 for excellent, then add the points up to form the final grade in this aspect (the full mark is 10):

$$
C = \sum_ {i} c _ {i}.
$$

Figure 1 illustrates the evaluation norms.

![](images/ec9a2eef2d234e60d06dbbd42d7fe80322b215912dd9ff3a33428df203bbc43c.jpg)
Figure 1. First-level evaluation criteria.

# Collect Data

We use men's basketball as an example. We choose the 70 coaches in the National Collegiate Basketball Hall of Fame [Wikipedia 2014c]. We also select five other college coaches who are not in the Hall of Fame but still made significant contributions.

Combining data from Sports Reference [2013], a website with detailed data about coaches, with statistics from searching Wikipedia, we arrive at the relevant statistics for those 75 college coaches.

![](images/a5c23981bc3dd055b90dab051e3eb61aec2a58ff2ab5fe805ecbadfcbef573bc.jpg)
Figure 2. Second-level evaluation criteria.

In Table 2, FR, SR, SS, EE, FF, RU, and CH refer to rounds achieved in the NCAA Basketball Tournament: First Round, Second Round, Sweet Sixteen, Elite Eight, Final Four, Runner-Up, and Champion.

Table 2. Sample from data on basketball coaches.
| Coach | from | to | years | wins | losses | SRS | SOS | FR | SR | SS | EE | FF | RU | CH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jim Boeheim | 1905 | 1995 | 48 | 719 | 259 | 15.81 | 7.27 | 5 | 8 | 11 | 2 | 1 | 2 | 1 |
| Jim Calhoun | 1972 | 2001 | 40 | 877 | 382 | 12.64 | 4.74 | 5 | 5 | 4 | 5 | 1 | 0 | 3 |
| Larry Brown | 1979 | 2013 | 9 | 210 | 83 | 13.08 | 5.95 | 0 | 3 | 1 | 0 | 1 | 1 | 1 |
| Mike Krzyzewski | 1975 | 2013 | 39 | 975 | 302 | 20.16 | 8.78 | 2 | 6 | 6 | 2 | 3 | 4 | 4 |
We also collected the college basketball coaching record for each season for every coach. In Table 3, we show Larry Brown as an example.

Table 3. Sample coaching record for Larry Brown.
| Season | wins | losses | SRS | SOS | AP Pre | AP High | AP Final | Result |
|---|---|---|---|---|---|---|---|---|
| 1979-80 | 22 | 10 | 15.67 | 6.18 | 7 | | | NCAA Runner-up |
| 1980-81 | 20 | 7 | 14.89 | 5.26 | 6 | 3 | 10 | NCAA Second Round |
| 1983-84 | 22 | 10 | 9.76 | 5.86 | | 17 | 17 | NCAA Second Round |
| 1984-85 | 26 | 8 | 11.84 | 6.27 | 19 | 9 | 13 | NCAA Second Round |
| 1985-86 | 35 | 4 | 23.18 | 10.42 | 5 | 2 | 2 | NCAA Final Four |
| 1986-87 | 25 | 11 | 13.36 | 7.73 | 8 | 6 | 20 | NCAA Sweet Sixteen |
| 1987-88 | 27 | 11 | 15.71 | 10.77 | 7 | 7 | | NCAA Champions |
| 2012-13 | 15 | 17 | -0.59 | -1.33 | | | | |
| 2013-14 | 18 | 5 | 13.88 | 2.45 | | | | |
# Preprocess the Data

When we collect data from the Internet, we notice that some data (e.g., SRS and SOS values) are missing. Given that fact, we have to preprocess the data. We fill in missing values mainly by interpolation, according to the ranking generated by the other metrics.

# Two Models for Coach Ranking

# Model I: Analytic Hierarchy Process (AHP)

When we try to obtain the weights of the five aspects of the first-level evaluation and the weights of the several second-level evaluation criteria, unaided subjective judgment is unreliable. So we choose the Analytic Hierarchy Process (AHP) to combine the weighting coefficients of all the indicators in the evaluation system.

# The three-level hierarchy structure

The three-level hierarchy structure, which contains the criteria level and the components level, is shown in Table 4.

Table 4.
The three-level hierarchy structure of our model.
| Goal | Criteria | Components |
|---|---|---|
| Influence of the coach | Historical Record | Wins, Losses |
| | Game Gold Content | SRS, SOS |
| | Playoff Performance | First Round, …, Champion |
| | Honors | Different Awards, Hall of Fame |
| | Contribution to sports | Star Player, Coaching Age, Tactical Innovation, International Games, Popularity |
# Obtain the index weights

- Determine the judging matrix. We use the pairwise-comparison method and the 1-9 scale of AHP to construct the judging matrix $A = (a_{ij})$, whose entries ideally satisfy the consistency property

$$
a _ {i j} = a _ {i k} a _ {k j},
$$

where each $a_{ij}$ is set according to the 1-9 scale.

- Calculate the eigenvalues and eigenvectors. The greatest eigenvalue $\lambda_{\max}$ of matrix $A$ has corresponding eigenvector $u = (u_1, \ldots, u_n)^T$. We normalize $u$ by

$$
x _ {i} = \frac {u _ {i}}{\sum_ {j} u _ {j}}.
$$

- Do a consistency check. The indicator of consistency is

$$
C I = \frac {\lambda_ {\mathrm {m a x}} - n}{n - 1},
$$

where $n$ is the dimension of the matrix. The consistency ratio is

$$
C R = \frac {C I}{R I}.
$$

Having confirmed the weighting coefficients of all the indicators in the evaluation system, we can quantify the importance of coaches. Let $CW_{i}$ denote the weight of criteria-level factor $i$, let $AW_{j}$ be the weight of secondary-level factor $j$ under criteria-level factor $i$, let $m_i$ denote the number of secondary factors under factor $i$, and let $F_{j}$ denote the value of secondary-level factor $j$.

The evaluation grade $Y_{1}$ is then

$$
Y _ {1} = \sum_ {i = 1} ^ {5} C W _ {i} \sum_ {j = 1} ^ {m_i} A W _ {j} F _ {j}.
$$

# Results and analysis

We obtain the following results:

- Judging matrix:

$$
A = \left[ \begin{array}{l l l l l} 1 & 5 & 5 / 9 & 1 & 1 \\ 1 / 5 & 1 & 1 / 9 & 1 / 5 & 1 / 5 \\ 7 / 5 & 7 & 1 & 7 / 5 & 9 / 5 \\ 1 & 5 & 5 / 7 & 1 & 9 / 5 \\ 1 & 5 & 6 / 7 & 1 / 5 & 1 \end{array} \right]
$$

- Weight vector of criteria level:

$$
C W = \left[ \begin{array}{c c c c c} 0. 1 9 9 6 & 0. 0 3 9 9 & 0. 3 0 9 3 & 0. 2 4 1 9 & 0. 2 0 9 2 \end{array} \right].
$$

For this level, $CI = 0.0301$ and $CR = 0.0269$, satisfying the consistency criterion $CR < 0.1$.
- Weight vector of components level:

- Historical Record: $AW_{1} = \left[ \begin{array}{cc} 1.5 & -0.5 \end{array} \right]$.
- Game Gold Content: $AW_{2} = \left[ \begin{array}{cc} 0.75 & 0.25 \end{array} \right]$.
- Playoff Performance:

$$
AW _ {3} = \left[ \begin{array}{ccccccc} 0.0079 & 0.0157 & 0.0315 & 0.0630 & 0.1260 & 0.2520 & 0.5039 \end{array} \right].
$$

(The seven entries correspond to the rounds First Round through Champion; the weight doubles from each round to the next.)

All of these weight vectors satisfy $CR < 0.1$.

As for Honors and Contribution to the Sport, we take so many awards and factors into consideration that determining a judging matrix is impractical. We arrive at an approximate solution by assigning equal weights to these factors.

Finally, we obtain the rankings of the top 10 college basketball coaches using the AHP model.

Table 5.
The top 10 college basketball coaches' grades.
| Rank | Name | Grade ($Y_1$) | Rank | Name | Grade ($Y_1$) |
|---|---|---|---|---|---|
| 1 | Mike Krzyzewski | 0.8426 | 6 | Roy Williams | 0.5637 |
| 2 | John Wooden | 0.7334 | 7 | Bob Knight | 0.5479 |
| 3 | Adolph Rupp | 0.6048 | 8 | Phog Allen | 0.4788 |
| 4 | Jim Boeheim | 0.5985 | 9 | Rick Pitino | 0.4683 |
| 5 | Dean Smith | 0.5844 | 10 | Lute Olson | 0.4132 |
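As a concrete sketch of the AHP computation used above (principal eigenvector of the judging matrix, normalization, and the $CI$/$CR$ consistency check), the following fragment reproduces the steps on the paper's criteria-level judging matrix. The implementation details (NumPy, and $RI = 1.12$, Saaty's standard random index for $n = 5$) are our own assumptions, not part of the paper:

```python
import numpy as np

# Criteria-level judging matrix A as given in the paper (order: historical
# record, game gold content, playoff performance, honors, contribution).
A = np.array([
    [1,   5, 5/9, 1,   1  ],
    [1/5, 1, 1/9, 1/5, 1/5],
    [7/5, 7, 1,   7/5, 9/5],
    [1,   5, 5/7, 1,   9/5],
    [1,   5, 6/7, 1/5, 1  ],
])

def ahp_weights(A, RI=1.12):
    """Principal-eigenvector weights plus the CI/CR consistency check.

    RI = 1.12 is Saaty's random consistency index for a 5x5 matrix.
    """
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)          # index of the largest eigenvalue
    lam_max = vals[k].real
    u = np.abs(vecs[:, k].real)       # principal eigenvector
    w = u / u.sum()                   # normalize so the weights sum to 1
    n = A.shape[0]
    CI = (lam_max - n) / (n - 1)      # consistency indicator
    CR = CI / RI                      # consistency ratio; want CR < 0.1
    return w, CI, CR

w, CI, CR = ahp_weights(A)
print(np.round(w, 4), round(CI, 4), round(CR, 4))
```

On this matrix the largest weight lands on the third criterion (Playoff Performance), matching the paper's qualitative conclusion; the exact decimals depend on the eigensolver.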
# Conclusions

- In the weight vector of the criteria level, the highest weight is for Playoff Performance.
- SOS plays a less important role than SRS in determining the Game Gold Content, and the weight of the Game Gold Content is the lowest.

# Model II: Fuzzy Synthetic Evaluation (FSE)

# Quantify grades in the five aspects

Fuzzy set theory was designed to supplement the interpretation of linguistic or measured uncertainties for real-world random phenomena.

We have already articulated our metrics for ranking the five aspects: historical record, game gold content, playoff performance, honors, and contribution to sports. Before using fuzzy set theory, we calculate the grade in each of the five aspects using the collected data.

- Calculation rule for historical record:

$$
X _ {1} = \lambda_ {\mathrm {w i n}} a - \lambda_ {\mathrm {l o s e}} b,
$$

where $a$ is the number of wins, $b$ is the number of losses, $\lambda_{\mathrm{win}}$ is the weight for a single win, and $\lambda_{\mathrm{lose}}$ is the weight for a single loss.

- Calculation rule for game gold content:

$$
X _ {2} = R \left(1 + \frac {O}{O _ {\mathrm {m a x}}}\right),
$$

where $R$ is the value of SRS, $O$ is the value of SOS, and $O_{\mathrm{max}}$ is the maximum value of SOS in the Strength of Schedule system.

A coach with a higher SRS also receives a higher grade in this aspect, because the team finishes far ahead of its opponents. At the same time, the higher the SOS, the harder the games; so we let the SOS augment the SRS.

- Calculation rule for playoff performance:

$$
X _ {3} = \sum_ {k = 1} ^ {7} 2 ^ {k} n _ {k},
$$

where $n_k$ is the number of appearances at level $k$ of the playoffs.

In the playoffs, the number of teams decreases by half from one level to the next; hence the weight increases exponentially by a factor of 2.

- Calculation rule for honors:

$$
X _ {4} = H,
$$

where $H$ counts up the awards by weight.
- Calculation rule for contribution to sports:

$$
X _ {5} = C.
$$

# Determine membership functions

A fuzzy set is defined in terms of a membership function that maps the domain of interest onto the interval [0, 1]. The value of the membership function represents the degree, or weighting, with which the domain item belongs to the set.

Let $X_{ij}$ denote the $X_{j}$ value for coach $i$ and $X_{j(\max)}$ the maximum value over all the coaches. Here we use the normalization function as membership function:

$$
\mu_ {j} (X _ {i j}) = \frac {X _ {i j}}{X _ {j (\mathrm {m a x})}}.
$$

Let $N$ be the total number of coaches. Then we have the $N \times 5$ fuzzy matrix

$$
X _ {f} = \left[ \begin{array}{c c c} \mu_ {1} (X _ {1, 1}) & \cdots & \mu_ {5} (X _ {1, 5}) \\ \vdots & \mu_ {j} (X _ {i, j}) & \vdots \\ \mu_ {1} (X _ {N, 1}) & \cdots & \mu_ {5} (X _ {N, 5}) \end{array} \right].
$$

# Determine the weights using the entropy method

The entropy method [Dahiya et al. 2007] states that, subject to precisely stated prior data (such as a proposition that expresses testable information), the probability distribution that best represents the current state of knowledge is the one with largest entropy.

The entropy method involves five steps:

- Calculate the characteristic weight $p_{ij}$ for the $i$th coach's $j$th evaluation grade $X_{ij}$ from the normalized fuzzy matrix:

$$
p _ {i j} = \frac {X _ {f (i , j)}}{\sum_ {i = 1} ^ {N} X _ {f (i , j)}}.
$$

- Calculate the entropy for evaluation grade $j$:

$$
e _ {j} = \frac {- 1}{\ln N} \sum_ {i = 1} ^ {N} p _ {i j} \ln p _ {i j}.
$$

- Calculate the diversity factor for evaluation grade $j$:

$$
g _ {j} = 1 - e _ {j}.
$$

- Determine the weight for each evaluation grade:

$$
w _ {j} = \frac {g _ {j}}{\sum_ {j = 1} ^ {5} g _ {j}}.
$$

- Determine the FSE evaluation grade for each coach, using the column vector $EW = (w_1, \ldots, w_5)^T$:

$$
Y _ {2} = X _ {f} \, E W.
$$

Table 6. Results of FSE analysis using entropy.
+ +
| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | $Y_2$ |
|---|---|---|---|---|---|---|
| John Wooden | 0.04 | 0.06 | 0.15 | 0.13 | 0.08 | 0.871 |
| Mike Krzyzewski | 0.07 | 0.06 | 0.10 | 0.13 | 0.08 | 0.863 |
| Adolph Rupp | 0.06 | 0.06 | 0.08 | 0.11 | 0.02 | 0.675 |
| Dean Smith | 0.06 | 0.06 | 0.08 | 0.04 | 0.05 | 0.609 |
| Bob Knight | 0.05 | 0.06 | 0.06 | 0.07 | 0.05 | 0.605 |
| Roy Williams | 0.06 | 0.05 | 0.06 | 0.04 | 0.08 | 0.587 |
| Jim Boeheim | 0.06 | 0.03 | 0.05 | 0.03 | 0.01 | 0.586 |
| Phog Allen | 0.04 | 0.04 | 0.05 | 0.05 | 0.05 | 0.487 |
| Henry Iba | 0.04 | 0.04 | 0.04 | 0.04 | 0.02 | 0.466 |
| Lute Olson | 0.06 | 0.04 | 0.04 | 0.06 | 0.077 | 0.454 |
| $g_j$ | -0.99 | -1.00 | -1.04 | -0.96 | -0.93 | |
| $w_j$ | 0.18 | 0.16 | 0.23 | 0.20 | 0.22 | |
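The FSE pipeline above (aspect grades, normalization through the membership function, and entropy weighting) can be sketched as follows. The four-coach score matrix is invented purely to exercise the formulas; only the formulas themselves follow the text:

```python
import numpy as np

# Invented aspect scores X[i, j] for four coaches on the five aspects
# (historical record, game gold content, playoff, honors, contribution).
X = np.array([
    [650.0, 18.0, 520.0, 9.0, 8.0],
    [480.0, 12.0, 260.0, 6.0, 5.0],
    [300.0,  9.0, 140.0, 4.0, 7.0],
    [210.0,  6.0,  60.0, 2.0, 3.0],
])

# Membership function: divide each column by its maximum.
Xf = X / X.max(axis=0)

# Entropy method.
N = Xf.shape[0]
p = Xf / Xf.sum(axis=0)                        # characteristic weights p_ij
e = -(p * np.log(p)).sum(axis=0) / np.log(N)   # entropy e_j of each aspect
g = 1.0 - e                                    # diversity factors g_j
w = g / g.sum()                                # entropy weights w_j

Y2 = Xf @ w                                    # FSE grade of each coach
print(np.round(w, 3), np.round(Y2, 3))
```

In this toy example the first coach dominates every column, so its membership values are all 1 and its grade is exactly 1.0, since the weights sum to 1.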
# Results and analysis

The characteristic weights, entropy, diversity factors, and weights are shown in Table 6. We find:

- The weights for each aspect are close to one another.
- Playoff performance ($X_{3}$) plays the most important role (with weight 0.23) in the FSE evaluation.
- At the same time, coaches who have amazing game gold content (with weight only 0.16) might not stand out.

# Combining the Models

# Evaluating the Individual Models

To compare our two models (AHP and FSE), we define the average offset distance $D$.

We collect ranked lists of the top 10 NCAA basketball coaches from several authoritative media such as ESPN, Bleacher Report, Yahoo Sports, and Sporting News (e.g., Merron [2009]). We compare our results to those lists, and the average offset distance reflects the difference.

We use the first-order Minkowski distance to define the average offset distance over the top 10:

$$
D = \frac {1}{1 0 n} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {1 0} | j - r _ {i j} |,
$$

where

- $n$ is the number of top-10 ranking lists,
- $j$ is a rank in the $i$th list, and
- $r_{ij}$ is the rank in our results of the coach placed $j$th in the $i$th list.

So $|j - r_{ij}|$ is the difference between a coach's rank in the media list and in our results, and $D$ gives the average difference.

$D_{\alpha}$ is the average offset distance over the top 5, and $D_{\beta}$ is the average offset distance over 6th through 10th.

We define the hit score as

$$
g = \frac {9 0 0}{9 + D}, \qquad 0 < g < 1 0 0.
$$

Table 7. Results for offset distance.
| | AHP | FSE |
|---|---|---|
| $D_\alpha$ | 1.75 | 1.15 |
| $D_\beta$ | 3.10 | 2.85 |
| $D$ | 2.4 | 2.0 |
| $g_\alpha$ | 83.7 | 88.7 |
| $g_\beta$ | 73.4 | 75.9 |
| $g$ | 78.8 | 81.8 |
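The offset-distance and hit-score computation can be sketched as follows: for each published top-10 list, look up where our model ranks each of its coaches, average the absolute rank differences, and convert $D$ to a hit score via $g = 900/(9 + D)$. The two media lists here are invented placeholders:

```python
def offset_distance(media_lists, our_ranking):
    """Average offset distance D between published top-10 lists and ours."""
    our_rank = {name: k + 1 for k, name in enumerate(our_ranking)}
    total = sum(abs(j + 1 - our_rank[name])
                for lst in media_lists
                for j, name in enumerate(lst))
    return total / (10 * len(media_lists))

def hit_score(D):
    """Map an offset distance to the 0-100 hit-score scale."""
    return 900.0 / (9.0 + D)

# Invented example with two hypothetical media lists.
ours = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
media = [["A", "C", "B", "D", "E", "F", "G", "H", "J", "I"],
         ["B", "A", "C", "D", "E", "G", "F", "H", "I", "J"]]
print(offset_distance(media, ours), hit_score(offset_distance(media, ours)))
```

As a check against the text, an overall distance of $D = 2.0$ reproduces the FSE hit score $900/11 \approx 81.8$.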
# Conclusions

From the results in Table 7, we conclude:

- Vertical comparison: For both AHP and FSE, $D_{\alpha} < D_{\beta}$, meaning that the results are more reasonable for the top 5 than for the top 10.
- Horizontal comparison: FSE performs better than AHP on both the top 5 and the top 10.

# Aggregation Model

AHP is a subjective method that depends largely on judgment-based scoring; FSE, by contrast, is an objective method that depends on data. To account for both subjective and objective factors, we adopt a linear weighting:

$$
Y = w Y _ {1} + (1 - w) Y _ {2},
$$

where $w$ and $(1 - w)$ are weights that add to 1, $Y_{1}$ is the evaluation grade from the AHP model, and $Y_{2}$ is the evaluation grade from the FSE model.

To determine the weights, we take the average offset distance $D$ into consideration. Since a smaller average offset distance means more accurate results, we assign the higher weight to the model with the smaller $D$:

$$
w = \frac {D _ {2}}{D _ {1} + D _ {2}},
$$

where $D_1$ and $D_2$ are the offset distances of the AHP and FSE models, respectively.

# Results and Analysis

Table 8. Rankings according to the different models.
| Rank | AHP | FSE | AM |
|---|---|---|---|
| 1 | Mike Krzyzewski | John Wooden | John Wooden |
| 2 | John Wooden | Mike Krzyzewski | Mike Krzyzewski |
| 3 | Adolph Rupp | Adolph Rupp | Adolph Rupp |
| 4 | Jim Boeheim | Dean Smith | Dean Smith |
| 5 | Dean Smith | Bob Knight | Bob Knight |
| 6 | Roy Williams | Roy Williams | Jim Boeheim |
| 7 | Bob Knight | Jim Boeheim | Roy Williams |
| 8 | Phog Allen | Phog Allen | Phog Allen |
| 9 | Rick Pitino | Rick Pitino | Rick Pitino |
| 10 | Lute Olson | Henry Iba | Henry Iba |
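The aggregation rule can be sketched as follows; the weight formula and the overall offset distances ($D_1 = 2.4$ for AHP, $D_2 = 2.0$ for FSE, from Table 7) come from the text, and the sample grades are Mike Krzyzewski's $Y_1$ and $Y_2$ values:

```python
def aggregate(Y1, Y2, D1, D2):
    """Linear combination of AHP and FSE grades; the model with the
    smaller offset distance D receives the larger weight."""
    w = D2 / (D1 + D2)               # weight on Model I (AHP)
    return w * Y1 + (1 - w) * Y2

# Offset distances from Table 7 and Mike Krzyzewski's two grades.
grade = aggregate(0.8426, 0.863, D1=2.4, D2=2.0)
print(round(grade, 4))
```

Here $w = 2.0/4.4 \approx 0.45$, so FSE gets slightly more weight, and the aggregated grade falls between the two model grades.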
+ +Table 9. Ranking comparison among the models. + +
| | AHP | FSE | AM |
|---|---|---|---|
| Top 5 hit score | 83.7 | 88.7 | 88.7 |
| Top 10 hit score | 78.8 | 81.8 | 82.6 |
# Conclusion

- All our models perform better for the top 5 than for the top 10. This suggests that the top 5 coaches in college basketball history are less controversial than the rest of the top 10.
- The results of AM and FSE are very similar. They have the same hit score for the top 5; but for the top 10, AM has a higher hit score. These results show that using the combination can improve our model.
- Our model AM's final result is: The top 5 coaches in college basketball are John Wooden, Mike Krzyzewski, Adolph Rupp, Dean Smith, and Bob Knight.

# Extend Our Models

# Gender Does Not Matter

Now we take gender into consideration. We still use basketball as an example, and rank the top 10 college women's basketball coaches of the previous century. Searching the Internet, we collected data on about 50 college women's basketball coaches with at least 600 wins [Wikipedia 2014e] and 5 other coaches who have established outstanding traditions, earned many awards, and garnered recognition for their colleges. Then we ranked them with our models. In Table 10, we compare our ranking with the one at Yahoo [Michael 2010].

Table 10. Women's coaches ranked by our AM model and as ranked by Yahoo.
| Rank | AM | AM grade | Yahoo |
|---|---|---|---|
| 1 | Pat Summitt | 0.85 | Pat Summitt |
| 2 | Geno Auriemma | 0.84 | Geno Auriemma |
| 3 | Tara VanDerveer | 0.75 | Leon Barmore |
| 4 | Leon Barmore | 0.72 | C. Vivian Stringer |
| 5 | C. Vivian Stringer | 0.61 | Tara VanDerveer |
| 6 | Sylvia Hatchell | 0.59 | Jody Conradt |
| 7 | Jody Conradt | 0.57 | Kay Yow |
| 8 | Kay Yow | 0.55 | Gail Goestenkors |
| 9 | Sue Gunter | 0.48 | Sylvia Hatchell |
| 10 | Gail Goestenkors | 0.44 | Sue Gunter |
Using the average offset distance, all the results of our models agree within a reasonable error range (hit score $= 87.6$), so we can safely conclude that our models can be applied in general across both genders.

# Time Factor Does Make a Difference

# Why does the time factor matter?

The NCAA Basketball Tournament started in 1939. In the years since, the number of teams participating has increased, the competition has become fiercer, and the tournament has gained in popularity, all of which influence the quality of the evaluation grades.

To quantify the time factor, we attach a weight (1 to 10) to different time periods, based mainly on the turning points that occurred in each period.

Table 11 shows the critical years in NCAA history [Wikipedia 2014f].

Table 11. Weights for different periods in NCAA history.
| Years | Turning points | Weight $w_i$ |
|---|---|---|
| 1913–1939 | No national tournament. | 5 |
| 1939–1951 | Two college tournaments, NIT and NCAA, with 8 teams in the NCAA. | 6 |
| 1951–1975 | 16 teams in the NCAA; the NIT became a second-class competition. | 7 |
| 1975–1980 | 32 teams. | 8 |
| 1980–1985 | 48 teams. | 9 |
| 1985–2014 | 64 teams (plus play-ins). | 10 |
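A minimal sketch, under our own assumptions about the bookkeeping, of how these period weights could be applied to season-level statistics before aggregating a coach's totals; the lookup table mirrors Table 11 (weights normalized by 10), and the sample seasons are invented:

```python
# Period start years and weights, mirroring Table 11.
PERIODS = [(1913, 5), (1939, 6), (1951, 7), (1975, 8), (1980, 9), (1985, 10)]

def time_weight(year):
    """Weight (normalized to at most 1.0) of the period containing `year`."""
    w = PERIODS[0][1]
    for start, weight in PERIODS:
        if year >= start:
            w = weight
    return w / 10.0

def weighted_wins(seasons):
    """Time-weighted total wins, where `seasons` maps year -> wins."""
    return sum(time_weight(year) * wins for year, wins in seasons.items())

# The same 25-win season counts less in 1950 than in 1990.
print(weighted_wins({1950: 25}), weighted_wins({1990: 25}))
```

Losses, SRS, SOS, and the playoff counts would be scaled the same way before computing $a$, $b$, $R$, $O$, and $n_k$.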
# How does the time factor matter?

The weights $w_{i}$ must be applied in calculating $a, b, R, O,$ and $n_k$.

For AHP, the top 5 and top 10 hit scores remain nearly unchanged, but several adjoining coaches with close grades change places.

For FSE, both the top 5 and the top 10 hit scores decrease somewhat, and there are larger changes in the rankings. The model appears to be easily influenced by time weights.

The AM model, too, appears to be easily influenced by time weights, because of the weight that it places on FSE.

Table 12 shows the final ranking according to AM. Every coach's rank changes except for Bob Knight's at rank 5.

Table 12. Ranking according to AM without and with time weights.
| AM without weight | Grade | AM with weight | Grade |
|---|---|---|---|
| John Wooden | 0.857 | Mike Krzyzewski | 0.920 |
| Mike Krzyzewski | 0.830 | John Wooden | 0.778 |
| Adolph Rupp | 0.654 | Roy Williams | 0.643 |
| Dean Smith | 0.602 | Jim Boeheim | 0.632 |
| Bob Knight | 0.590 | Bob Knight | 0.612 |
| Jim Boeheim | 0.588 | Dean Smith | 0.606 |
| Roy Williams | 0.580 | Adolph Rupp | 0.571 |
| Phog Allen | 0.485 | Rick Pitino | 0.517 |
| Rick Pitino | 0.467 | Lute Olson | 0.460 |
| Henry Iba | 0.431 | Phog Allen | 0.435 |
| Top 10 hit score | 82.6 | Top 10 hit score | 76.6 |
| Top 5 hit score | 88.7 | Top 5 hit score | 85.5 |
# What is the effect of considering time?

- The rankings of coaches from earlier eras fall to some extent. Take Phog Allen, for example. He is known as the "Father of Basketball Coaching," but most of his games occurred in 1920-1959, when the NCAA tournament either had not yet started or involved only a few teams. The time weight for that era is relatively low, so his ranking falls.
- Coaches of recent years enjoy an advantage. Take Roy Williams and Adolph Rupp, for example. The two coaches' performances are quite close; Rupp's historical record was even better. But because of the time weight, Rupp's historical record does not count as much, and Roy Williams finishes ahead of him.
- Introducing time weights does not necessarily mean a higher hit score.

# The Model Works in Other Sports, Too

Four steps adapt the model to any sport:

- Step 1: Adjust the metrics according to the sport. Different sports may have different playoff rules, so the metric for Playoff Performance should be adapted. Take football, for example: A ranking of post-season bowl games can replace the rounds of the NCAA basketball tournament.
- Step 2: Adapt the calculation rules to the features of the sport. For example, for football, each bowl game should be assigned a weight according to its gold content.
- Step 3: Adjust the time weights according to the history of the sport. For example, for football, before 2006 there was no BCS Bowl.
- Step 4: Solve the aggregation model again and analyze the results.

Following the four steps above, we apply the model to determine the top 5 coaches in two other sports, football and hockey (Table 13).

Table 13. The top 5 coaches in football and in hockey.
| Football | Grade | Hockey | Grade |
|---|---|---|---|
| Bear Bryant | 0.887 | Bob Johnson | 0.896 |
| Knute Rockne | 0.866 | Red Berenson | 0.873 |
| Tom Osborne | 0.854 | Jack Parker | 0.853 |
| Joe Paterno | 0.787 | Jerry York | 0.776 |
| Bobby Bowden | 0.786 | Ron Mason | 0.763 |
# Further Discussion

# Sensitivity Analysis for FSE

# Vary membership function

For FSE, other membership functions are also available.

[EDITOR'S NOTE: The authors examine a parametrized version of their original membership function, as well as parametrized versions of two alternative membership functions. They find that the membership function

$$
\mu_ {j} (X _ {i j}) = \left(\frac {X _ {i j}}{X _ {j (\mathrm {m a x})}}\right) ^ {k}
$$

with $k = 3$ is the most appropriate in terms of effect on hit score.]

# Vary calculation rule

Here we focus on how the hit score changes with the ratio $\lambda_{\mathrm{win}} / \lambda_{\mathrm{lose}}$, that is, the relative benefit of a win compared to a loss.

![](images/522d0d6eafef36a1517cf02ab001f6a67e68cc7b9074657520264e174c1922a7.jpg)
Figure 3. Sensitivity analysis to varying the weights of winning and losing.

If we attach the same weight to winning a game as to losing a game, the model has a poor hit score. If the ratio of the weights of wins and losses is too high, that also leads to a bad result. We conclude that the model performs best when the weight of winning a game is twice that of losing a game.

# Sensitivity Analysis on Aggregation Weight

We analyze how the hit score (for AM) and the rankings change as the weight $w$ for AHP varies.

![](images/d82b7aa07d95f2bb786ad90d2e0c40916c4604a77c9337a3c0bb2910893c8003.jpg)
Figure 4. Sensitivity analysis of hit score to weight of AHP.

Since AHP is less accurate than FSE, the hit score of AM is best when the weight of AHP is small. But when the weight of AHP is zero, the hit score does not reach its maximum; the maximum hit score is reached when the weight of AHP is 0.1-0.2.

# Exploration: Evaluating the Best President

Now we use our models to find the top 10 presidents of the United States. We collect the relevant data from the Internet [Wikipedia 2014g].
A president can be evaluated on five aspects: personal qualities, presidential achievements, leadership qualities, failures and faults, and popular opinion (Figure 5).

The personal qualities include imagination, intelligence, and willingness to take risks, while the presidential achievements can be valued in terms of domestic accomplishments, executive appointments, foreign policy accomplishments, and ability to compromise. Leadership qualities can be measured by party leadership ability and relations with Congress. We also take popular opinion into consideration, in terms of polls from C-SPAN, ABC News, Washington College, Gallup, Rasmussen, and the 2012 Gallup poll.

![](images/de3aaa507e1896a821ddae668828b8155af37ac86879a9eff02d72ec3b1b5126.jpg)
Figure 5. Aspect norms for President of the USA.

The resulting ranking of presidents of the United States is shown in Table 14.

Table 14. Ranking of U.S. presidents.
| Rank | Name | Rank | Name |
|---|---|---|---|
| 1 | Abraham Lincoln | 6 | Harry S. Truman |
| 2 | George Washington | 7 | Woodrow Wilson |
| 3 | Franklin D. Roosevelt | 8 | Dwight D. Eisenhower |
| 4 | Thomas Jefferson | 9 | James K. Polk |
| 5 | Theodore Roosevelt | 10 | Andrew Jackson |
# Strengths and Weaknesses

# Strengths

- Our metrics for assessment include all the important elements of a coach. Time factor, gender, and sport are all discussed in the model.
- We evaluate the performance of a coach from five specific perspectives.
- We set up two different models and combine them into an aggregation model (AM). AHP includes more subjective factors, while FSE appears to be more objective. The aggregation model balances the tradeoff between AHP and FSE.

# Weaknesses

- We adopt 18 indicators in total to evaluate a coach, but there are still others that we do not take into consideration.
- Weights are everywhere in the model, and some weight assignments might not be the best.

# References

Dahiya, S., B. Singh, S. Gaur, et al. 2007. Analysis of groundwater quality using fuzzy synthetic evaluation. Journal of Hazardous Materials 147 (3): 938-946.

Merron, Jeff. 2009. The Wizard still ranks No. 1. http://sports.espn.go.com/espn/page2/story?page=list/050304/collegehoopscoaches.

Michael, Patrick. 2010. Top 10 women's basketball coaches in NCAA history. http://sports.yahoo.com/ncaa/football/news?slug=ac-7168152.

Saaty, Thomas L., and Kirti Peniwati. 2008. Group Decision Making: Drawing Out and Reconciling Differences. Pittsburgh, PA: RWS Publications.

Sports Reference. 2013. http://www.sports-reference.com/cbb/coaches/.

Wikipedia. 2014a. Analytic hierarchy process. http://en.wikipedia.org/wiki/Analytic_hierarchy_process.

______. 2014b. Fuzzy mathematics. http://en.wikipedia.org/wiki/Fuzzy_mathematics.

______. 2014c. National Collegiate Basketball Hall of Fame. http://en.wikipedia.org/wiki/National_Collegiate_Basketball_Hall_of_Fame.

______. 2014d. Strength of schedule. http://en.wikipedia.org/wiki/Strength_of_schedule.

______. 2014e. List of college women's basketball coaches with 600 wins. http://en.wikipedia.org/wiki/List_of_college_women's_basketball_coaches_with_600 Wins.

______. 2014f. College basketball.
http://en.wikipedia.org/wiki/College_basketball.

______. 2014g. Historical rankings of Presidents of the United States. http://en.wikipedia.org/wiki/Historical_rankings_of_Presidents_of_the_United_States.

Zadeh, L.A. 1965. Fuzzy sets. Information and Control 8: 338-353.

# Nontechnical Explanation

For better or worse, coaches are often the faces of college sports programs. Unlike players, who stay for only a few years, coaches can exert influence on a college's teams for far longer. Here is our list of the top 5 coaches in college basketball, college football, and college hockey:
| Rank | Basketball | Football | Hockey |
|---|---|---|---|
| 1 | John Wooden | Bear Bryant | Bob Johnson |
| 2 | Mike Krzyzewski | Knute Rockne | Red Berenson |
| 3 | Adolph Rupp | Tom Osborne | Jack Parker |
| 4 | Dean Smith | Joe Paterno | Jerry York |
| 5 | Bob Knight | Bobby Bowden | Ron Mason |
Producing the rankings proved to be a difficult task. First, we chose as candidates coaches who are in the Hall of Fame or who have established outstanding traditions and earned many awards. Then, searching the Internet and other data sources, we collected data as detailed as possible. After selecting the proper data, we calculated rankings. We also searched for existing rankings on the Internet to serve as an evaluation criterion.

We evaluate the candidate coaches on five aspects. The best college coaches tend to have a good win-loss record. In addition, SRS (Simple Rating System) and SOS (Strength of Schedule) can reflect coaching ability. We also examine each coach's success in the post-season: taking basketball as an example, performance can be valued by counting the number of appearances in the NCAA Tournament as Champion, Runner-up, Final Four, Sweet Sixteen, Second Round, and First Round. We also take into account coaches' contributions to the sport, as well as honors such as awards or membership in the Hall of Fame.

After collecting and choosing the coaches' detailed data, we weight the importance of the aspects that measure coaching ability and use the results to give each coach a score. The higher the score, the higher the rank.

Take the data for the best college basketball coach, John Wooden, as an example. In his college coaching career, his teams won 826 games; and in his sixteen years in the NCAA tournament, he won 10 championships and made 12 trips to the Final Four. Wooden has been recognized countless times for his achievements, including for his impact on college basketball as a member of the founding class of the National Collegiate Basketball Hall of Fame. He was also named the Sporting News "Greatest Coach of All Time."
With so many honors and awards, Wooden gets the highest score when we rank the coaches and is worthy of the title of best college basketball coach.

![](images/88f0ab4a50f4aa4e2a68abbc740e872b860c956d6e71eabce56f98bbe85a4725.jpg)
Wenchao Ding, Jingling Li, and Feng Xiong, with team advisor Zhibin Han.

# Judges' Commentary: The Coach Papers

Robert Burks

Defense Analysis Dept.

Naval Postgraduate School

1 University Circle

Monterey, CA 93943-5000

reburks@nps.edu

# The Problem

Sports Illustrated, a magazine for sports enthusiasts, is looking for the "best all-time college coach," male or female, for the previous century. Build a mathematical model to choose the best college coach or coaches (past or present) from among either male or female coaches in such sports as college hockey or field hockey, football, baseball or softball, basketball, or soccer. Does it make a difference which time line horizon that you use in your analysis, that is, does coaching in 1913 differ from coaching in 2013? Clearly articulate your metrics for assessment. Discuss how your model can be applied in general across both genders and all possible sports. Present your model's top 5 coaches in each of 3 different sports.

In addition to the MCM format and requirements, prepare a 1-2-page article for Sports Illustrated that explains your results and includes a nontechnical explanation of your mathematical model that sports fans will understand.

# Introduction and Overview

The Coach Problem focused on identifying the factors or metrics for success as a college coach. The problem required students to develop a modeling approach based on these metrics to determine the best coach across all sports, genders, and time. In addition, there was the traditional required nontechnical paper (the Sports Illustrated article).

I start this commentary with a short review of the mechanics of this year's judging process.
I follow the mechanics with a discussion of observations from the judging on various elements of the problem. I then discuss the importance of sensitivity analysis, assumptions, and identifying the strengths and weaknesses of a developed model. I finish by addressing some points concerning communication and conclude with a summary.

# The Process

Dr. Kelly Black provided an excellent overview of the judging process in his commentary for the Ultimate Brownie Pan Problem of 2013 [Black 2013]. However, I believe it is beneficial to once again review several elements of the process for this year's problem. In general, it is important to understand that the criteria used to identify good papers gradually change as the judging progresses through the triage and final rounds, with the final papers standing out as the best under a wide variety of criteria.

# Triage

The primary objective of the triage is to identify the papers that should be given more detailed consideration by the judges.

Every paper is read by at least two judges seeking to determine whether the paper contains all of the necessary elements that make it a candidate for more-detailed readings. If a paper addresses all of the issues and appears to have a reasonable model, then judges are likely to identify it as a paper that deserves more attention.

A paper must be clear and concise to do well in the triage, and the paper's summary is critical at this point in the judging. A good summary provides a brief overview of the problem, the paper's structure, and specific results stated in a clear and concise manner. Small things that make a paper stand out include having a table of contents and ensuring that all required questions are addressed in the paper.

Many papers do not do well in the triage because the summary fails to address all of the questions, and the judge decides that a team's efforts will not compare well with the better papers.
For example, one critical question overlooked by many papers this year was how their model could be applied across both genders and all possible sports. Fully developing all of the required elements is a critical area often overlooked in papers.

The sensitivity analysis remains one of the weakest elements in many papers, and these papers do not do well during the triage.

In addition, it is vital that the team express their general approach and results as clearly and concisely as possible in the nontechnical position paper. This means providing a broad overview of the problem, the approach, and specific results in clear and concise nontechnical terms. In other words: Can the article be read and understood by someone without an education in mathematics?

These small things make it much easier for a judge to identify the team's effort and for the paper to do well in the triage round. However, even the best model and the best effort are not effective if the results are not adequately communicated. It is important to remember that this is a modeling competition and that effective communication is a critical part of the modeling process.

# Final

The final consists of multiple rounds of judging over several days. As the rounds progress, the judging criteria shift from identifying papers that warrant further consideration to identifying the very best papers.

The first round of the final begins with each judge reading a set of papers and then all judges meeting to discuss the key aspects of the question and what should be included in a "good" paper. This year, these aspects included, in addition to all of the required elements:

- a clear discussion of the assessment metrics,
- how and why these metrics were weighted,
- the incorporation of time in the analysis, and
- emphasis placed on the sensitivity analysis portion of the paper.
As the final progresses, each paper is read multiple times, with the final set of papers being read by all judges. In these last rounds, the modeling process and the mathematical integrity of a paper begin to identify the Outstanding papers in the competition.

# The Questions

This year's Coach Problem consisted of three major components:

- The first component required teams to determine the mechanism for selecting the best college coach.
- The second component required teams to address the impact of time on their analysis.
- The last component focused on gender as a factor in the best-coach selection process and on how the teams' models could be applied across all sports.

# The Best College Coach

One major aspect a team must address is what is meant by the term "best." Was it number of wins? Was it number of years coaching? Was it popularity? Was it some combination of a set of factors? Many teams did not take the time to clearly develop the purpose of their model or to define "best" but immediately began modeling this aspect of the problem.

In general, teams did not take the approach of developing a generic definition of coaching success. It appears that many teams started their modeling process by first selecting a sport and then developing "successful" coaching metrics based on that sport. This approach had a tendency to produce sport-specific metrics. For example, the number of Bowl Games was a popular metric for college football. This metric becomes a clear problem when attempting to apply the model across all sports, and such a problem should be addressed in the paper.

Better papers first considered carefully the definition of "best" in terms of generic coaching success, then discussed how it could be measured.

The judges were not looking for a specific set of assessment metrics but were looking for those papers that clearly identified and developed their metrics.
Most teams developed a set of metrics (anywhere from 5 to 15) that they collectively modeled to develop rankings in each of the three required sports. The better papers tended to develop a set of global metrics that could be applied across all sports and genders and then applied them individually to a set of three different sports.

Many papers treated this requirement as a multi-objective decision-making problem. Popular modeling approaches included the Analytic Hierarchy Process (AHP), Principal Component Analysis (PCA), Fuzzy Comprehensive Evaluation, TOPSIS, Artificial Neural Networks, and Dynamic Network Analysis (DNA). However, by far the most common modeling approach was the AHP. The large volume of papers utilizing the AHP would seem to suggest that this problem was developed with that approach in mind, but that is not the case. There were several very successful papers that did not utilize the AHP approach.

However, many teams, regardless of the modeling approach, failed to recognize the inherent need to assume some sort of weighting mechanism for their metrics in the modeling process. The judges considered the discussion of weighting a critical criterion for "good" papers. The better papers recognized this, discussed how they developed their weights, and then conducted a sensitivity analysis on their assumed weights.

# The Impact of Time

Has the nature of coaching changed over time, or is a coach in 1913 the same as a coach in 2013? The analysis of the impact of time had mixed results and was one of the weaker modeling components in many papers. It appears that many teams ran short of time and devoted little analytical effort to this component. The analysis of time was viewed by the judges as a critical criterion for "good" papers.
The judges were looking for some recognition that in 100 years of college sports history, the rules of the game, the number of games, the nature and duration of training programs, the social environment, and a host of additional factors have changed, and that these changes may influence the metrics used in a team's model.

Many teams approached this aspect as a time-series analysis problem. These teams evaluated how their model's metrics may have changed over time, to see if there was a correlation between each metric and the year. If a team discovered a correlation, typically it simply noted in the paper that time was important and influenced the model's ranking, but it usually made no adjustments to the model. The better papers adjusted their metrics, usually through the weighting mechanism, and generated a new coach ranking.

# Gender

The last question examined how the model could be applied across both genders and all sports. This requirement consisted of two distinct discussion points: addressing both genders, and modeling across all sports. Most teams addressed the gender requirement in their paper but were weaker at analyzing how their model could be applied across all sports.

The judges were not looking for any one particular approach to the gender requirement. Most papers addressed it by modeling a traditional women's college team and ranking the best set of women coaches. This was presented as evidence that the model worked for both genders. However, a handful of teams used their model to rank a combined list of men- and women-coached teams, developing a top-5 ranking that contained both men and women coaches. The judges viewed this as a superior approach to modeling the requirement.

In terms of applying the model across all sports, most teams provided their rankings for three different college sports as evidence that the model applied across all sports.
Only a handful of papers used the developed model to produce a single ranking that encompassed all sports and genders as evidence. The judges viewed this as a superior approach that went beyond the basic question and requirements of the problem.

# Analysis: Assumptions, Sensitivity

The judges realize that the limited time available to the teams to complete their models is a considerable constraint, and they do not expect perfect models. However, the judges do expect teams to analyze their models in a structured way and to assess them critically. A vital part of the mathematical modeling process is this critical analysis of the model. It ranges from examining the impact of the basic assumptions on the modeled conclusions to examining the shortcomings of the techniques employed in the model.

As in previous years, the judging criteria placed a large emphasis on assumptions and sensitivity analysis. Many papers neglected to consider these issues fully and were scored lower by the judges.

# Assumptions

The basic assumptions that a team makes are the starting point for its modeling efforts. The judges did not place restrictions on the basic assumptions other than that they need to make sense and be necessary. However, simply listing assumptions is not enough; papers should include a discussion of why each assumption is being made and of its potential impact on the model.

It is also important to recognize that stating an assumption is not the end of the process; examining the impact on the modeled conclusions when the assumption changes is a vital part of the modeling process. If changing an assumption results in a change in the coach rankings, then the team should indicate that as a potential weakness.

# Sensitivity Analysis

Sensitivity analysis was appropriate and necessary for all modeling approaches.
For AHP, sensitivity analysis would have involved varying the weights (or pairwise rankings) to explore what conditions would cause the alternative ranking to change. Many papers included a sensitivity analysis section but addressed only the theoretical aspects of sensitivity analysis, as opposed to actually changing the value of an assumption or parameter to understand the impact.

# Communication

Papers were judged on the quality of the writing, with special attention to the summary and to the nontechnical (Sports Illustrated) article. In general, the quality of writing is continuing to improve. The strongest summaries this year included a definition of what the team meant by "best," a general overview of the modeling process, and an explicit result of the model analysis.

The judges continue to be surprised by the number of papers whose summary describes only what the team will attempt, without stating the results.

Similarly, many of the nontechnical articles focused more on the mathematics and modeling process than on the details of the rankings. A nontechnical article does not mean that numbers are not included. It means that the article can be read meaningfully by someone without an education in advanced mathematics.

# Conclusions

The Outstanding teams modeled and presented all the aspects of the problem described in the problem statement, including the fully-developed standard elements (assumptions, sensitivity analysis, strengths and weaknesses, etc.); developed an effective model; explained the modeling choices made; and wrote clearly and concisely. The judges continue to be impressed with the quality of the submissions, especially considering the time constraints. The growth in the quality and number of submissions is very encouraging to those who work to promote the practice of good mathematical modeling.

# Reference

Black, Kelly. Judges' Commentary: The Ultimate Brownie Pan papers.
*The UMAP Journal* 34 (2) (2013): 141-149.

# About the Author

Robert Burks is a senior analytic consultant in the Dept. of Defense Analysis at the Naval Postgraduate School. He received his undergraduate degree in Aerospace Engineering from the United States Military Academy, his Master's in Operations Research from the Florida Institute of Technology, and his Ph.D. in Operations Research from the Air Force Institute of Technology. He has wide-ranging research interests, including agent-based modeling of information diffusion and epidemiology. Dr. Burks served as both a triage and a final judge on the Coach Problem.

# Author's Commentary: The Coach Papers

William P. Fox

Dept. of Defense Analysis

Naval Postgraduate School

1 University Circle

Monterey, CA 93943-5000

wpfox@nps.edu

# Problem B: College Coaching Legends

Sports Illustrated, a magazine for sports enthusiasts, is looking for the "best all-time college coach," male or female, for the previous century. Build a mathematical model to choose the best college coach or coaches (past or present) from among either male or female coaches in such sports as college hockey or field hockey, football, baseball or softball, basketball, or soccer. Does it make a difference which time line horizon that you use in your analysis, that is, does coaching in 1913 differ from coaching in 2013? Clearly articulate your metrics for assessment. Discuss how your model can be applied in general across both genders and all possible sports. Present your model's top 5 coaches in each of 3 different sports.

In addition to the MCM format and requirements, prepare a 1-2-page article for Sports Illustrated that explains your results and includes a nontechnical explanation of your mathematical model that sports fans will understand.
# Introduction and Overview

The problem was deliberately written to have a potentially overwhelming amount of data, and to force the student teams to decide what "metrics" they needed to consider to choose the all-time best coach. Good simplifying assumptions could be made so that the problem would be tractable in the time allowed yet would still provide useful insights. It was also written so that it could be attempted by students with only lower-division college mathematics.

To recognize the increasing international diversity of the student teams in the MCM, the teams were allowed to choose among all the college sports, for male or female athletics or both. Since the teams participating in the MCM had to analyze a minimum of three college sports, many chose both male and female teams. The most common sports chosen were basketball, football, and baseball. There was some confusion on the part of some international entries, which used professional sports coaches, not college sports coaches. At least one team from the People's Republic of China chose to model college sports in their own country, which was acceptable.

As has been the norm recently, there were required elements in the problem statement. Almost every paper this year included the required nontechnical position paper, although the disparities among the articles were remarkable. Any article for a sports magazine that does not contain the list of best all-time coaches was considered an extremely poor article!

This commentary will discuss the various elements of the problem, with observations from the judging from my perspective as the problem's author. I will conclude with a summary.

Readers interested in a discussion of the mechanics of the judging process will find a very good report on the process in [Black 2013].

# "Best" College Coach of All Time

Models are constructed to answer questions. Here, we are asked to identify the best all-time college coach.
But what did a team mean by "best"? Most teams did not do a very good job of defining what metrics they needed. Most teams dived into the Internet and found lots of data on wins, losses, years coaching, and institutions coached at.

I felt that the number of athletes graduating from the institution with the sports program should be a metric to consider, but less than a handful of teams even addressed this point.

The coach's popularity was a variable of interest to many teams. How does one measure popularity over the past 100 years? Again, many teams never stopped to define clearly the purpose of their models, but plunged immediately into chosen models without linking assumptions to the model-building process.

The better teams considered carefully what they meant by "best" and included a discussion in their restatement of the problem or elsewhere in the submission.

# Models Chosen

Of the 2,871 papers from 606 schools with teams that chose the Coach Problem, well over $90\%$ chose the Analytic Hierarchy Process (AHP) as their model.

AHP is a good tool to rank alternatives based on decision-makers' weights, using pairwise comparison with a consistency ratio less than 0.1. Every one of the papers that used AHP used subjective inputs for obtaining the weights; very few teams did sensitivity analysis on those criterion weights. No teams found the breaking points of the weights that actually altered the ranking of the number-one coach.

Most papers used the AHP or else TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) to incorporate the different elements of the solution into one decision model. Teams used their own judgment to estimate the criterion weights in each case. Since different teams provided different weights and inputs, solutions varied widely even for the same sports and coaches.
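For readers unfamiliar with the mechanics behind these AHP submissions, the following is a minimal sketch of the standard eigenvector method: derive criterion weights from a pairwise-comparison matrix and check the consistency ratio $CR < 0.1$ mentioned above. The 3×3 matrix and its three criteria are hypothetical Saaty-scale judgments invented for illustration, not values from any contest paper.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over three criteria
# (say, win percentage, championships, longevity); entries are
# illustrative Saaty-scale judgments.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A, normalized to sum to 1,
# gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CR = CI / RI, where CI = (lambda_max - n)/(n - 1)
# and RI is Saaty's random index for an n x n matrix.
n = A.shape[0]
lambda_max = eigvals[k].real
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]
CR = CI / RI

print("weights:", np.round(w, 3))
print("CR:", round(CR, 4))  # judgments acceptably consistent if CR < 0.1
```

With weights in hand, each coach's overall score is the weighted sum of his or her normalized criterion scores; the sensitivity analyses the commentary asks for then amount to perturbing `A` (or `w`) and re-ranking.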
The other $10\%$ of the teams used a variety of methods: networks, principal component analysis, genetic algorithms, a grey model [Julong 1989; Giannelli n.d.], linear regression, ...; and the list goes on.

Most teams avoided the issue of uncertainty, assuming that the data were accurate but that the weights were not and hence needed some analysis. The best teams included an assessment of the sensitivity of their models to changes in their inputs.

Modeling assumptions were very poor across the board. Some teams assumed away the timeline and gender issues, yet these were main parts of the modeling questions:

- Timeline: Two issues that were readily apparent over the 100 years were salaries and schedules. Few teams addressed these issues.
- Gender: We will say that teams that included female sports automatically considered gender. Those that did not had to do more than just lightly discuss it. One team said that the proportion of differences among coaches was significant but that gender was not an issue.

# Sensitivity Analysis and Model Testing

As in previous years, the judging criteria for this problem considered sensitivity analysis a main component of good analysis for "coaching legends." Many papers neglected to consider these issues and scored lower as a result.

Sensitivity analysis was appropriate for all elements of the models. For AHP or TOPSIS, sensitivity analysis would have involved varying the weights (or pairwise rankings) to explore what conditions would cause the alternative ranking to change.

Model testing took several forms. For prediction models, graphical methods for examining residuals of historical data were often used. Statistical tests of significance were used for regressions. Consistency checks, such as $CR < 0.1$, were used for the AHP. The better papers used these methods and others to convince the reader that the models selected were appropriate.
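The "breaking points" that no team found can be located with a simple sweep: vary one criterion weight while renormalizing the rest, and record where the top-ranked alternative changes. The sketch below does this for a linear scoring model; the coach names, normalized scores, and weight values are all hypothetical, invented purely to illustrate the technique.

```python
# Hypothetical normalized scores: (wins, titles, longevity) per coach.
scores = {
    "Coach A": (0.90, 0.60, 0.80),
    "Coach B": (0.70, 0.95, 0.50),
    "Coach C": (0.60, 0.70, 0.90),
}

def top_coach(w_wins, w_titles):
    """Rank by weighted sum; the third weight is 1 - w_wins - w_titles."""
    w_long = 1.0 - w_wins - w_titles
    total = {c: w_wins*s[0] + w_titles*s[1] + w_long*s[2]
             for c, s in scores.items()}
    return max(total, key=total.get)

# Hold the titles weight fixed and sweep the wins weight to locate
# the breaking point at which the number-one coach changes.
w_titles = 0.3
baseline = top_coach(0.0, w_titles)
for step in range(71):
    w_wins = step / 100
    leader = top_coach(w_wins, w_titles)
    if leader != baseline:
        print(f"ranking flips near w_wins = {w_wins:.2f}: "
              f"{baseline} -> {leader}")
        break
```

The same loop, applied to each weight in turn, tells a team how robust its number-one ranking is: a flip far from the assumed weight is reassuring, while a flip nearby is exactly the weakness the judges wanted discussed.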
# Communication

Papers were judged on the quality of the writing. Special attention was paid to the abstract and to the nontechnical article.

The quality of writing, in general, is improving from year to year. This is notable in the papers that come from countries where English is not the primary language spoken. About $70\%$ of the Outstanding papers this year were from teams for which English was a second language, and that was a record.

The strongest abstracts and articles included a definition of what the team meant by "best," the results of the model, and a simple explanation of how the answer was found. The judges continue to be surprised by the number of papers in which the abstract, and even the article for Sports Illustrated, describes only what the team will attempt, without describing what was found.

A nontechnical letter or article does not mean that numbers are not included. Rather, it means that it can be read meaningfully by someone without an education in advanced mathematics. Too many of the articles omitted all details of the solution as well as the solution itself!

Papers that labeled figures and tables with informative captions scored higher than those that did not.

The quality of citations was also a discriminator. Papers that cited their sources and provided complete references formatted according to a recognized standard scored higher than those that did not.

Several of the very best papers were a joy to read. The explanations were clear and complete, and the phrasing was almost lyrical. The judges will continue to value outstanding writing.

# Summary

The Outstanding teams:

- modeled all the aspects of the problem described in the problem statement,
- included the standard contest discussions (assumptions, sensitivity analysis, strengths and weaknesses, etc.),
- had defensible and useful models,
- explained the modeling choices made, and
- submitted well-written papers.
Papers that listed model #1 through model #N, without ever reconciling which model was best and which was used in the final analysis, were generally not Outstanding.

The judges were pleased by the teams' submissions. The topic allowed for a wide range of solutions, and the allowed choice of sports provided a diversity of solutions. The growth in the quality and number of submissions is very encouraging to those who work to promote the practice of good mathematical modeling.

# References

Black, Kelly. 2013. Judges' Commentary: The Ultimate Brownie Pan papers. *The UMAP Journal* 34 (2): 141-149.

Giannelli, Carlo. n.d. More details about Grey Model. http://lia.deis.unibo.it/research/SOMA/SmartBuffer/Client/htmlDocs/detailsMobPred.html.

Julong, Deng. 1989. Introduction to Grey System theory. *Journal of Grey Systems* 1: 1-24. http://www.researchinformation.co.uk/grey/IntroGreySysTheory.pdf.

# About the Author

Dr. William P. Fox is a professor in the Department of Defense Analysis at the Naval Postgraduate School and teaches a three-course sequence in mathematical modeling for decision making. He received his B.S. degree from the United States Military Academy at West Point, New York, his M.S. at the Naval Postgraduate School, and his Ph.D. at Clemson University. Previously he taught at the United States Military Academy and at Francis Marion University, where he was the Chair of Mathematics for eight years. He has many publications and scholarly activities, including books, chapters of books, journal articles, conference presentations, and workshops. He directs several mathematical modeling contests through COMAP: HiMCM and MCM. His interests include applied mathematics, optimization (linear and nonlinear), mathematical modeling, statistical models for medical research, and computer simulations. He is President-Emeritus of the NPS Faculty Council and President of the Military Application Society of INFORMS.
# Judges' Commentary: The Frank Giordano Award for 2014

Marie Vanisko

Dept. of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mvanisko@carroll.edu

# Introduction

For the third year, the MCM is designating a paper with the Frank Giordano Award. This designation goes to a paper that demonstrates a very good example of the modeling process. Having worked on the contest since its inception, Frank Giordano served as Contest Director for 20 years. As Frank says,

It was my pleasure to work with talented and dedicated professionals to provide opportunities for students to realize their mathematical creativity and whet their appetites to learn additional mathematics. The enormous amount of positive feedback I have received from participants and faculty over the years indicates that the contest has made a huge impact on the lives of students and faculty, and also has had an impact on the mathematics curriculum and supporting laboratories worldwide. Thanks to all who have made this a rewarding and pleasant experience!

The Frank Giordano Award for 2014 goes to a team from Huazhong University of Science and Technology, School of Mathematics and Statistics, in Wuhan, Hubei, China. This solution paper was in the top group, receiving the designation of Outstanding, and was characterized by

- a high-quality application of the complete modeling process, with clear justifications and examples of how the models could be applied to the coaching data, including an extension of the first two models to a third that gave better results;
- a careful analysis of the parameters and demonstrated sensitivity analysis;
- originality and creativity in the modeling effort, both to solve the problem as given and to extend the process to selecting the top U.S. presidents; and
- clear and concise writing, making it a pleasure to read.
# The Coach Problem

Sports Illustrated, a magazine for sports enthusiasts, is looking for the "best all-time college coach," male or female, for the previous century. Build a mathematical model to choose the best college coach or coaches (past or present) from among either male or female coaches in such sports as college hockey or field hockey, football, baseball or softball, basketball, or soccer. Does it make a difference which time line horizon that you use in your analysis, that is, does coaching in 1913 differ from coaching in 2013? Clearly articulate your metrics for assessment. Discuss how your model can be applied in general across both genders and all possible sports. Present your model's top 5 coaches in each of 3 different sports.

In addition to the MCM format and requirements, prepare a 1-2-page article for Sports Illustrated that explains your results and includes a nontechnical explanation of your mathematical model that sports fans will understand.

# Solution by the Team

# Executive Summary Sheet and Sports Illustrated Article

The team's summary was well done and gave the reader a good idea of what to expect. It contained the appropriate specifics with regard to the techniques used and the comparison of techniques, and it was both concise and thorough.

Despite a few grammatical errors, the team's article, written in an appropriate nontechnical manner, served as an informative and inviting overview of the issues involved in selecting the top coaches. The decision to highlight John Wooden as a way to clarify the process was excellent.

# Assumptions

The assumptions made were very general and somewhat generic. The paper would have been stronger if the assumptions had applied directly to the models used.

# The Models and Methods

The metrics were clearly articulated, with details on how the values associated with each metric were to be determined.
The team's explanations included examples, so that the reader could see precisely how coaches' scores were determined. Very few other papers were as thorough in their explanations. The methods used were the Analytic Hierarchy Process (AHP) and Fuzzy Synthetic Evaluation (FSE). After applying each method and comparing the results, the team discussed the subjectivity of the AHP method versus the objectivity of the FSE method. To aggregate the results from both methods, they adopted a linear weighted model, the Aggregation Model (AM).

# Testing Their Models

The primary focus in testing their model was men's college basketball. After determining the top coaches under each method (AHP, FSE, and AM), the team also computed "hit scores" for each by comparing their results to published ratings. They also commented on how, in all their models, the top five positions were less controversial than the top 10.

# Extending and Testing Their Models

The team began by applying their model to women's basketball, which has both male and female coaches. After computing the AM scores for these coaches, they determined the top 10 in this field and found that their results largely agreed with published rankings. Thus, they concluded that gender was not an issue in their method for determining top coaches.

The team next considered the time factor. Considering that the NCAA basketball tournament did not begin until 1939, and then grew from 8 to 16 teams in 1951, to 32 teams in 1975, to 48 teams in 1980, and to 64 teams in 1985, the team assigned different weights to each of these periods in their analysis. In applying this factor in each of their three models, they found that the top-10 lists changed their ordering somewhat, but the overall "hit scores" did not change significantly. In their analyses of these results, the team highlighted selected coaches whose positions had changed and explained why that happened. This was a very good example of what distinguished their paper from others.
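The paper itself does not reproduce the team's exact formulas, but the two ideas above can be sketched in a few lines: a linear weighted aggregation of two method scores (in the spirit of their AM) and a "hit score" measuring how much of a published top-N list the model recovers. The coach names, scores, and the equal 0.5/0.5 weights below are illustrative stand-ins, not the team's actual data.

```python
# Hypothetical per-method scores for a handful of coaches.
ahp = {"Wooden": 0.95, "Krzyzewski": 0.90, "Rupp": 0.80, "Smith": 0.78}
fse = {"Wooden": 0.92, "Krzyzewski": 0.88, "Rupp": 0.83, "Smith": 0.75}

def aggregate(w_ahp=0.5, w_fse=0.5):
    """Linear weighted combination of the two method scores (AM-style)."""
    return {c: w_ahp * ahp[c] + w_fse * fse[c] for c in ahp}

def hit_score(computed_top, published_top):
    """Fraction of a published top-N list recovered by the model."""
    return len(set(computed_top) & set(published_top)) / len(published_top)

am = aggregate()
top = sorted(am, key=am.get, reverse=True)

# Hypothetical published top-4 list to compare against.
published = ["Wooden", "Smith", "Krzyzewski", "Knight"]
print(top[:4], round(hit_score(top[:4], published), 2))
```

A higher hit score means closer agreement with the published ranking; comparing hit scores across AHP, FSE, and the aggregate is one plausible way to argue, as the team did, that the combined model outperforms either method alone.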
Finally, the team extended their AM model to football, first listing the metrics that would be different for football versus basketball (for example, bowl games instead of tournaments). Although data for the football coaches were not shown, results for the top five football coaches were given. The team also listed results for the top five hockey coaches, but gave neither modified metrics nor coach data. The paper would have been stronger had it been more thorough in applying the AM model to hockey and had it shown the data for football, hockey, and women's basketball coaches.

# Sensitivity Analysis

In applying sensitivity analysis, the team demonstrated that their AM model performed better than either their AHP or FSE model alone. Their use of graphs and examples lent clarity and credibility to their analysis. This again distinguished their paper from others.

# Extending Their Model Beyond Sports

As an exploration in applying their model more broadly, this team developed metrics to determine the top U.S. presidents. After taking personal qualities, presidential achievements, and leadership qualities into account, they ranked the top 10 presidents. This was what an MCM judge would see as a value-added feature, because it showed that the team was embracing the concept of mathematical modeling, recognizing that the same model can often be applied to very different circumstances.

# Recognizing Limitations of the Model

Recognizing the limitations of a model is an important last step in the completion of the modeling process. The team commented on the subjectivity of their weight assignments.

# References and Bibliography

The list of references was thorough, and it was very good to see specific documentation of where those references were used in the paper.
# Conclusion

The careful exposition in the development and application of the mathematical models, together with the extensions and sensitivity analysis, made this paper one that the judges felt was worthy of the Outstanding designation. The team is to be congratulated on their thoroughness, their clarity, and their use of the mathematics they knew to create and justify their models. Their presentation made this a very enjoyable and understandable read.

# About the Author

Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she taught for more than 30 years. She was also a Visiting Professor at the U.S. Military Academy at West Point and taught for five years at California State University, Stanislaus. She chairs the Board of Directors at the Montana Learning Center on Canyon Ferry Lake and serves on the Engineering Advisory Board at Carroll College. She has been a judge for the MCM for 19 years and for the HiMCM for 10 years.

# Our Story with the MCM

Libin Wen

Jingyuan Wu

Cong Wang

Shanghai Jiao Tong University

Shanghai, China

# Introduction

Just over a year ago, we participated in the 2013 Mathematical Contest in Modeling (MCM) and won an Outstanding Winner award for our work on the Ultimate Brownie Pan Problem. We feel very proud of ourselves because there were only 6 Outstanding winners out of more than 2,000 teams (less than $1\%$) who worked on this problem. It seems like just a few weeks ago that we were talking about square or circular ovens.

![](images/95e67f06985ef97092cc254b07c451d358f56bd05e65a938af1be9aac2e30a64.jpg)
Figure 1. The team's certificate.

The UMAP Journal 35 (2-3) (2014) 201-208. ©Copyright 2014 by COMAP, Inc. All rights reserved.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

All of us were juniors in the Physics Dept. of Shanghai Jiao Tong University at that time. We teamed up for the challenge of the MCM for reasons we discuss later. But none of us had any experience related to mathematical modeling.

What is more embarrassing is that we did not have much time to prepare for the contest after we had decided to enter it. Because the contest fell very close to the Chinese Spring Festival, when we had to go home from the university, we had to communicate with one another in a very inconvenient way, through a Web chat application.

However, after about 20 days of methodical preparation, we finished our work during the competition quite successfully, although some further work was left undone due to the limited time. We can never forget the moment when we heard the exciting news a few months later that we had gotten a top award!

The good news was published on the official Website of our university very quickly. Some of our classmates and teachers congratulated us on our achievement.

Now, we are about to graduate [this essay was written in March 2014]. Jingyuan and Cong are planning to study abroad, and the MCM Outstanding award will help. Libin will start his Ph.D. work at Shanghai Jiao Tong University next year, majoring in physics. We are going to different places, a few months apart; but our MCM story will always unite us.
# Libin Wen's Experience

![](images/c7a6eff2eb4ac8dbc93a8838a67224c4759dbaefaccd0b7b8db8bb7ed677a01f.jpg)
Figure 2. Libin Wen.

I'll begin by telling why I decided to participate in the competition. To be honest, I knew nearly nothing about the MCM until one day when Jingyuan asked me whether I intended to take part in this competition.

I did some research on the MCM and found that it was something that I was actually familiar with. The general procedure of mathematical modeling is to analyze a practical problem, describe it in a mathematical way, and finally solve it. I suddenly realized that I had always been good at solving problems in this way.

For example, I once analyzed the efficiency of human sleep modes because I felt that my daily routine was not as good as I desired. I had heard that some famous people were accustomed to special sleep modes, like Leonardo da Vinci, who, it was said, engaged in polyphasic sleep (sleeping multiple times in a 24-hour period). To approach this problem, I defined a quantity to describe how energetic or how tired a person was. Then I thought about the effects of different sleep modes on this quantity. In fact, what I did was to build a model to solve the problem. I wanted to know the differences between the monophasic sleep mode and the polyphasic sleep mode—and then I designed my own sleep mode. So it was quite practical because I was just designing something for myself, to improve the quality of my life.

The point is that I made a model out of my own need for one. And real needs are just the motivation of mathematical modeling. At that time, I wondered whether my idea about mathematical modeling was right. I knew some people who had been in these kinds of competitions, who thought that mathematical modeling was just a game for competitors who had spent a lot of time on preparation. That kind of attitude definitely hurt my confidence.
But finally, I decided to give it a try since it was my first time and I need never worry if I got just a Successful Participation award. Besides, I would have a lot of time during the winter vacation; doing something academic would make me feel great. Of course, it was somewhat difficult to refuse the invitation of the two girls.

Some interesting things occurred during our preparation and competition. I got excited when I noticed that all of our members' family names started with the letter "W": namely "Wang," "Wen," and "Wu." And so I announced that our team name should be "Group 3W." What a domineering name!

Another amazing story was about my sister and her husband. When I went home a few days before the competition and visited them, they asked me what mathematical modeling actually is. OK, it was indeed difficult to explain clearly to them. So I started searching for a vivid example. At the time, we were together enjoying "hot pot," a famous kind of Chinese food, which uses a pan to heat the food. I took it as an example and explained, "For example, mathematical modeling studies something like the pan's heating. If we change the shape of the pan, the food may be heated more evenly. To find the optimal shape of the pan, we can set up a model to solve this problem. And this is mathematical modeling." Judging by their reaction, I assumed that they understood my explanation. What made me excited was that Problem A of the 2013 MCM was exactly about oven pans! Once I saw the question, I could not wait to let the others know this interesting story. What a genius I was! I could predict the questions.

We chose the Ultimate Brownie Pan Problem because we were familiar with the physical principles of the heating process. What impressed me later was realizing the benefits of teamwork. Before, I was accustomed to thinking alone. And I never thought about how others could help me out when I was solving a real problem like this.
When we discussed together, I found that the others were smarter than I, with very quick minds. Every time I proposed something, they could think very quickly and offer new ideas. That made perfect sense because I was slow-minded: I usually thought about a simple issue again and again. And that was the essence of teamwork: we could make up for one another's weaknesses.

Before the competition, I made a schedule to let every member know what we should do and when. It worked very well, though we got behind schedule late in the competition. We had not been planning to stay up late any night. But on the last night, we found that writing an abstract (summary) was not as easy as we expected. Adding the finishing touches, we spent the whole night on the work, not going to bed until 7 A.M. I got more and more excited as it got closer and closer to the end of the contest. When we had checked the final work again and again and determined that it had been well done, I felt really satisfied. We had successfully finished four days of work!

Around 6 A.M. that morning, my mother was awakened by the noise from my room and found that I had stayed up through the night. She got very worried about my health and made breakfast for me immediately. I went to bed after breakfast that morning. And later I dreamt about the MCM. I dreamt that our work was not valued by the judges, which was totally different from the result that would come out a few months later. Thus, when I heard from Jingyuan that we had gotten the Outstanding Winner designation, I just could not believe it—it was definitely a surprise.

Though I am writing about our MCM story more than a year later, it feels like just a few days ago. Those short four days gave me many, many memories. And what I described above is just part of them. I hope that my experience can give you a little feeling for the MCM. Do not believe those rumors about how difficult it is!
# Jingyuan Wu's Experience

When Cong Wang asked me during the summer holiday in 2012 whether I was interested in participating in the 2013 Mathematical Contest in Modeling (MCM), I hesitated, since I had no prior experience in mathematical modeling.

![](images/d5291ef815d838803dd780127e266d18a00660b0e552299aaf8ce8f7ffa6fd71.jpg)
Figure 3. Jingyuan Wu.

In 2013, I took a course called "Computational Physics." The main aim of this course was to enable students to apply some classic models and algorithms to solve physical problems. It was the first time that I learned about mathematical modeling, and I felt the charm of it.

After one semester's study, I could handle some complex problems, such as analyzing the behavior of a chaotic system and simulating the growth of clusters. As I gained confidence and passion for mathematical modeling, I recalled the conversation with Cong. As a result, Cong and I decided to take part in the 2013 MCM and asked our classmate Libin Wen, who was good at programming, to join us. That was how we, three amateurs, formed a team. With little experience in mathematical modeling, we had never dreamed of winning an Outstanding prize, or even an ordinary prize. What we set out to do was to try our best to solve the problem and enjoy this exciting journey.

Before the contest, we spent much of our spare time doing preparatory work. We searched almost all the mathematical modeling reference books in our library and studied classic models. Since this time was during our winter holiday, we were all back in our hometowns and could not contact one another frequently. But we held meetings online every week and chatted about each member's progress. Through our unremitting efforts and perseverance, we absorbed some classic models, became familiar with programming, and learned how to find information efficiently. A few weeks before the contest, we picked problems from previous contests to train on.
We searched for the background knowledge related to the problems, proposed our own models, and finally gave solutions to the problems. After comparing our answers with those of the Outstanding Winners, I found that we always paid more attention to figuring out sophisticated solutions to the mathematical equations and forgot the importance of combining theory with realistic application; we stayed in a stereotyped thinking pattern and forgot the significance of innovative ideas. Realizing the difference between us and the Outstanding Winners, we gradually learned how to think about a problem.

To take full advantage of our strength in physics, we chose to work on the Ultimate Brownie Pan Problem. Our task was to determine the optimal shape of a baking pan by considering heat distribution and space utilization. We gave the analytic solution to the heat distribution of baking pans with particular shapes and implemented MATLAB scripts to analyze the heating process of baking pans with various shapes. Based on reasonable assumptions, we created a Rounded-Rectangle Model. During the optimization process, what surprised and excited us was a fixed point derived from our mathematical model. After delving deeply into this fantastic point, we concluded that it corresponds to a specific shape that would allow manufacturers to meet different customers' preferences equally.

During the contest, we cooperated pleasantly. We divided our jobs according to our strengths and thought about the problems together, which allowed us to work efficiently and effectively. We slept barely 6 hours a day and stayed up all night to finish the paper on the last day of the contest. Although we were too tired to think at the end, we still tried our best to carry out the task and work until the last second. The four days were short, but they meant a lot to us.
The four-day journey was full of our efforts, our brainstorming, our perseverance, our excitement, and our sweat.

When we heard the news that we had won the Outstanding designation, we were all excited and could not believe our ears. As a team participating in the MCM for the first time, we were very lucky that our paper found favor with the referees. The award was an extra bonus and helped me get a National Scholarship. Besides, I am glad to see that our success has inspired more students in my department to take part in mathematical modeling contests. Some of them often ask me about techniques in mathematical modeling and about the MCM experience.

Finally, thanks a lot to the MCM for the award and this wonderful experience!

# Cong Wang's Experience

The four-day experience of competing in the MCM was significant and memorable for me. During that exhausting but exciting period, our group spent all our time on the contest, from 9 A.M. to nearly 1 A.M. the next morning. Moreover, on the last day, we stayed up the whole night polishing the language of our paper, until we were so sleepy that we could hardly open our eyes.

![](images/c536b942ed4ea2ba8fbee901239685eba69b3c9cc4f7e696d0bfd1fcfa7a539c.jpg)
Figure 4. Cong Wang.

During these days, we did everything as fast as we could to meet the demands of our schedule, which was quite painful. However, when I look back on this period, I consider it valuable and happy. What we gained was much more than what we put in during those days.

The four days of the contest, though tiring and exhausting, left a very deep impression on me and helped me make the decision to change my field. My undergraduate major is physics, though I am more interested in applied science. In the first two years of my study at the university, I tried different fields, such as chemistry and biology. However, I had not found my favorite field until the MCM contest.
When we finally solved the problem, I felt a joy that I had never experienced before, which inspired my interest in modeling and design. I decided to pursue a master's degree in a field related to modeling and design. Furthermore, when I apply for such programs, our MCM award is good evidence of my potential in this field.

I would like to share some tips about what may have helped us in the competition. Thanks to our careful preparation, we did not waste much time. Because we were not in the same cities, we had to try four kinds of communication software before we found one that let us hold our discussions over the Internet. We practiced using LaTeX for writing our paper, so that it would be more beautiful and concise. We worked together to study algorithms.

These preparations were quite essential for us to cooperate more smoothly and save time in the competition.

Apart from the preparation, I consider the procedure that we followed to be very important in the contest. We had to weigh the advantages of taking certain steps and decide their priority. The given problems usually can be solved and extended in several directions. For example, our problem was to design an appropriate shape for a pan, one that could be warmed up most effectively and evenly by the oven. For this problem, we could have discussed many aspects of the pan, such as the material of the pan, the heat distribution of the oven, and so forth. If we had taken all these factors into consideration, the problem would have turned out to be really confusing and complicated. Therefore, what we did first was to simplify the problem. We added several constraints to make the problem easier; it still contained enough factors, but now we could handle it. After we finished the basic and simplest modeling, we could extend the model to a more practical one, one step at a time.
As a consequence, we could make sure that we had the ability to address each problem that we confronted and would not spend too much time on factors that we could not handle. I think that our highly organized procedure not only saved time in calculating and simulating but also reduced the time spent writing the paper. Therefore, we had more time to spend on optimizing our method of modeling.

[EDITOR'S NOTE: The team's entry in the 2013 MCM can be found on the 2013 MCM-ICM CD-ROM, which contains the press releases for the two contests, the results, the problems, unabridged versions of all the Outstanding papers, and judges' commentaries. Information about ordering is at http://www.comap.com/product/cdrom/index.html or at (800) 772-6627.]

# First Experience with Modeling

Matthew Marner

Princep Shah

Dobromir Yordanov

Amanda Beecher (team advisor)

{mmarner,pshah2,dyordan1,abeecher}@ramapo.edu

Ramapo College of New Jersey

Mahwah, NJ 07430

# Inexperienced

The 2014 COMAP competition featured Ramapo College's first team entry in a modeling competition, and it was our first experience with mathematical modeling. Despite the facts that

- no member of the team had taken a mathematical modeling course,
- we had no formal preparation for this contest, and
- we had finalized the team only three days before the start,

we were able to earn a Meritorious designation.

Through this competition, we learned that mathematical modeling is the ability to take the mathematics learned academically and apply it to real-world situations. There are no prerequisites for this way of thinking, but we found that the ability to use data effectively, write well, and be creative was imperative for our success. We had not experienced anything like mathematical modeling before this contest, and it was such a great learning opportunity. It taught us how to process questions and tackle real-life situations.
Since this competition took place, mathematics and other mathematically based sciences have become more relatable and exciting, because now we understand how elements of these courses are simply ways of understanding real life.

The UMAP Journal 35 (2-3) (2014) 209-213. ©Copyright 2014 by COMAP, Inc. All rights reserved.

# Uneasy

At the beginning of the contest, we were uneasy about the nature of the competition and what would be required of us. This feeling of unease was contrasted with our combined passion for mathematics and our confidence in our own abilities. Dobri's progression through advanced mathematics and computer science classes as a dual major lent some confidence to our attempt. He was clearly the most technically experienced and established himself as the leader of the group. Princep explained that his secondary education in Nepal had incorporated a high degree of advanced mathematics, but thus far he had taken only general education courses towards his finance degree at Ramapo. Although it comforted Matt to be in the company of such experience, he felt behind, since he was just beginning Calculus I and Computer Science I in his program in chemistry.

# Melding

Despite the uncertain nature of our preparation, we found that our medley of skills beyond mathematics aided us in translating academic mathematics into a practical solution. We began our work the evening that the problems were announced, trying as a group to select one.

The problems seemed impossible.
As we discussed potential strategies for each problem, our ideas were all over the place and none seemed particularly effective.

Dobri was very confident in his ability to process data for use in algorithms, so we chose the Coach Problem; sports data are easy to find. From the very beginning, we all agreed that we should measure a coach's ability by the improvement of the team under the coach's mentoring and not just by raw performance. This required us to define what we meant by "improving a team" and how to measure "ability."

# Our Approach to the Coach

We looked at different ways to judge performance in team-based games, but very few seemed in line with our idea of improvement. To quantify performance, Dobri found a single-opponent game-scoring algorithm used extensively in chess, called the Elo rating system [Wikipedia 2014]. This method provides a way of assigning every competitor a numeric rating at all points in their history.

To implement the Elo rating system, we had to make modifications so that the input values were scores from the specific sport, and we had to adjust the scheme for the number of points common to that competition. Also, our data were organized by team rather than by coach, so we had to rework the data to reflect the individual coaches and the ratings of their teams. Each of these steps required independent algorithms outside the rating system itself.

We attempted to use the change in Elo score to rate the coaches. However, we found many inconsistencies with this approach, particularly with coaches who changed teams several times in their careers. Thus, our final model analyzed trends to track consecutive streaks of improving or degrading performance as given by the Elo score.

Our results showed that top-rated coaches had consistently improved the Elo scores of their teams throughout their careers, which was in line with our vision of a successful coach.
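For readers unfamiliar with it, the standard chess Elo update that the team started from can be sketched as follows. This is the conventional formula, not the team's sport-specific modification (which is not detailed here); the K-factor of 32 and the sample ratings are customary illustrative values.

```python
# Standard Elo update (the chess formula the team started from).
# K-factor and ratings below are conventional defaults, not values
# from the team's paper.

def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """Return new ratings after one game; score_a is 1 (win), 0.5 (draw), 0 (loss)."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

# An upset: a 1400-rated team beats a 1600-rated team and gains more
# points than it would for beating an equal opponent; total rating
# points are conserved.
new_low, new_high = elo_update(1400, 1600, score_a=1.0)
print(new_low, new_high)
```

The zero-sum update is what makes a coach's sustained rating climb informative: points gained must be taken from opponents, so a long upward streak reflects repeatedly beating expectations.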
Our methodology required a lot of coding to get data into a usable format so that we could create and implement our algorithm, and we had to make modifications to account for inconsistent results.

# Liberal Arts Value

Furthermore, we worked very hard on our paper and executive summary to ensure that they were well written and coherent. Our liberal arts education helped us coherently express our ideas in this paper. With two non-native speakers on the team, it was sometimes difficult to express in English the point we wanted to make. However, despite our suffering in literature and humanities courses, we believe that these courses greatly improved our ability to express our ideas correctly.

# Our Impressions

We feel that this experience was by far the most productive any of us has had over a weekend, and that made us feel awesome. It was definitely a lot of work, but it also was fun collaborating and doing nonstandard, actual problem-solving mathematics and reasoning.

In the end, we were very proud of our results and incredibly pleased with our unexpected recognition. We had put a lot of effort into our models and our paper and believed that we had done great work. However, we never imagined that we would get to be in the top $10\%$ in the world and top $5\%$ in the U.S. It was such a shock and one of the proudest moments for each of us academically. We couldn't wait to get the certificates and start enjoying the press!

It had been difficult to complete this work and write the paper in only four days. It was a very busy and stressful long weekend, so learning that we had done so well made our sacrifices a bit more tolerable. As trailblazers for the Ramapo College mathematical modeling reputation, we can't wait to see what we (and, we hope, other students) will accomplish in the future.

# Reference

Wikipedia. 2014. Elo rating system. http://en.wikipedia.org/wiki/Elo_rating_system.

# About the Authors

Matthew Marner was a sophomore at Ramapo College.
His academic concentration is in chemistry, but he has taken a medley of courses, including mathematics and computer science. Spring 2014 was Matt's first experience with calculus and with computer science. Taking both these courses at once was a difficult transition from the less complex math and science courses that came before; however, the difficulty of the courses paid off in increased analytic and computational skills, which contributed greatly to the success of the team.

Princep Shah will be a sophomore at Ramapo College in Fall 2014. He is from Nepal and is studying finance with a minor in mathematics. He will be serving as a Resident Assistant and also as a peer facilitator. "I have not set any specific goals yet, but I want to see myself working in the field of investment banking after I graduate."

Dobromir Yordanov, an international student from Bulgaria, will be a senior in Mathematics and Computer Science at Ramapo. "Some of the more exciting classes I've taken are Stochastic Calculus for Finance, Abstract Algebra, Financial Modeling, and Topology. Next year, I'm also planning to take Artificial Intelligence. Going to grad school was never really in question; but as a double major, it's really difficult to decide what program to go after. So far, I am leaning towards mathematics, most likely algebra, topology, or number theory. I have done independent research in cryptography and I intend to do bioinformatics research next year. Currently, I'm a software engineering intern at Google, Inc., working on YouTube; I have years of previous experience in the field, with my first full-time programming job at age 16. Outside of school and work, I've been involved with math club and personal programming projects."

Amanda Beecher is an Assistant Professor of Mathematics at Ramapo College. She earned her Ph.D. from the University at Albany, SUNY in commutative algebra.
After finishing her doctoral work, she held a three-year postdoctoral appointment at the United States Military Academy at West Point. Amanda has served as an advisor for the MCM and also as a triage grader, referee, and final judge for the ICM.

![](images/b886e20edf9f9f8b538a06326cdfc429ededb3b8db07d7aca04c7ee74e291a6c.jpg)

Amanda Beecher (team advisor), Dobromir Yordanov, Princep Shah, and Matthew Marner.

# Model Students

Robert Emro

Assistant Director, Marketing and Communications

College of Engineering

Cornell University

Ithaca, NY 14853

rbe7@cornell.edu

Up against upper-class mathematics majors, a team of first-year engineering students didn't dream they would make it past the qualifying round, but they thought it would still be fun to try the Mathematical Contest in Modeling in the spring of 2012. Singaporean Dennis Chua ('14 ChemE) first learned of the contest in Math 2930, Differential Equations for Engineers. Chua knew that Alvin Wijaya ('15 EE) had already completed the class the previous fall and was taking a more advanced math class, so he asked the Indonesian if he wanted to give it a go. "We needed a third person," says Wijaya. "Jessie Lin seemed like the best choice because she had also taken that class the first semester." Jessie Lin ('15 EE) had competed in math contests in her native Shanghai, but none involved modeling and they were only a few hours long. The modeling competition took four days. "I like using math for applied stuff, that's why I became an engineer," says Lin. "Both of us (Alvin) are pretty good at math, so I thought, 'Why not try it?'" For the initial round of the contest, in which they were competing against other Cornell teams, they were asked to devise a model to determine whether a traffic light would improve pedestrian and vehicular traffic flow at the busy intersection of Tower Road and East Avenue, near Uris Hall.
Wijaya handled the research, making a field trip with Dennis to observe another traffic light on campus, near Carpenter Hall. "I trekked down there at night while it was raining to see how a traffic light works," he says. "The problem statement was super vague, so we went the extra mile and did primary research which is really helpful, because I'm pretty sure not one of the other groups thought of doing that."

"I remember us freezing out there," says Chua. "It was fun." For four days, the three spread out on the floor of Alvin's or Dennis's dorm room to work on their model. "It was like lack of sleep and lots of coffee," says Lin.

The UMAP Journal 35 (2-3) (2014) 215-218. ©Copyright 2014 by COMAP, Inc. All rights reserved.

"We had a good time. Because we spent so much time together those four days we got to know each other pretty well and became good friends."

"I basically did like all the math parts. I came up with all the formulas," continues Lin. "We actually had a really good team because we were good at different stuff."

"I was the guy that did most of the coding. I was the kid who knew how to use MATLAB," says Chua. "It was crazy; I was literally writing code for every single minute of the day for four days before the competition. It was a ton of code. My computer couldn't take the programming I was doing. I wanted to run 100,000 iterations but I pressed Enter and it died. I tried just ten and it did the same thing. Finally I tried three and it worked."
The team figured a properly timed light would improve flow at the intersection, but did not recommend it. "We found out that by varying the length of the red and green lights, and synchronizing with the lights at the bridge on North Campus, it would improve it," says Wijaya. "But in the end, we concluded that a traffic light there would be bad because students would cross against the light. We determined pedestrian bridges would be better. During class transition periods it would be so dangerous for people to cross there." + +The three submitted their paper with little hope of winning. In fact, Chua did not even go with Lin and Wijaya to hear the winners announced. "I was a freshman and I thought there was absolutely no way I could have made it to the next round," he says. "We knew we did well, but because we were competing against junior and seniors, people who had done the competition once or twice, we just felt that experience-wise, we were defeated really bad," says Wijaya. "But we felt we had a chance. We went there not expecting much, maybe top five and we were announced as the second-place winner." + +Second place qualified them for a berth in the international competition, in which they would be up against more than 3,600 teams from all over the world. This time they chose Problem A: The Leaves of a Tree, which tasked them with developing mathematical models to estimate the actual weight of the leaves on a tree and to describe and classify them. "They give you one piece of paper with the problem, no information at all," says Chua. "There's no guidelines. There's no data for you to check your models on. It was a really, really a broad thing that required you to spend a whole day just zooming in on the ideas you want to focus on." + +Again, Wijaya did the research, finding values to plug into their model, including tree height, branch angles, and bifurcation rates. 
"We had no clue about trees and leaves and all this bio stuff, so we had to do a ton of research," says Chua. "That really got me looking into bio stuff. I'm now doing all I can to become a biomedical engineer. Hoping to do a biomedical master's when I graduate." He's now assisting biomedical engineering assistant professor Jan Lammerding with muscular dystrophy research. "Jesse was the math whiz who turned out all the formulas," says Chua. "She came up with all these crazy formulas. I don't even know how + +she did it." + +Holed up in a dorm room together for hours on end with little sleep, the students sometimes worried they weren't up to the challenge. "Especially me, because I did the coding, so if I fail, this is not happening," says Chua. "Many times I had to rethink the strategy. Most of the time I was just learning, reading from my textbook. We hit a lot of walls where it was 'Dennis cannot code that.' I'm not a computer science major." + +After four days, the students emerged with a simulation-based approach using probabilistic and dynamic models based on established research. "You put all the parameters into this model, like say it's in the alpine region, at what temperature, what time of year, what height of tree you're looking at, and out comes the weight of all the leaves on the tree," says Chua. "For our other model, the result that comes out is what kind of leaves does this tree typically have. Does it have a sword-shaped thin leaf? Does it have a big palm leaf?" + +Despite all their hard work, the students had small expectations. "I honestly had no confidence that we were going to win. We were against computer science seniors. They have some really solid coding skills," says Chua. "I knew when the results were going to come out and I didn't bother checking, because there's no way you're going to win. You're one team from Cornell, only freshman, there's no way! We didn't find out until two months later when Alvin checked the Website." 
What they found out was that they finished ahead of all other U.S. teams attempting that problem, including MIT, UCLA, and Harvey Mudd College, landing them the Mathematical Association of America Award. Lin couldn't attend, but that summer, Chua and Wijaya presented the team's winning project at the largest annual mathematics conference, MathFest 2012, hosted by the association in Madison, Wis. They were interviewed by renowned mathematician Frank Morgan, the Atwell Professor of Mathematics at Williams College, and subsequently featured in his blog entry about MathFest 2012 in The Huffington Post.

"I would say one of the things that I learned from this is confidence," says Lin. "Even though you are a freshman or sophomore, you can do some pretty amazing stuff."

# [EDITOR'S NOTE:

This article is reprinted with permission from Cornell Engineering Magazine (online edition, Summer 2013) through the courtesy of its author, Robert B. Emro.]

![](images/7910b28097b2dc57d61cea079e426a0eb1b4531750a2674b285d0404b8f2dce4.jpg)
Figure 1. From left to right: MCM founder Ben Fusaro (Florida State University), Alvin Wijaya, Dennis Chua, and Mathematical Association of America President Paul Zorn (St. Olaf College).

# [EDITOR'S AFTERNOTE:

Bingxuan Dennis Chua also led a team of Cornell engineering students to first place in the IBM Watson Two Worlds Case Competition, with a plan to apply supercomputer Watson's capabilities to technical support for consumer electronics.

In addition to other achievements during his three years at Cornell, Dennis was president of a dance group, was part of a medical brigade in Peru, and nursed lions in South Africa.

He was named a 2014 Merrill Presidential Scholar, the highest recognition given by Cornell University to a graduating senior.
In July 2014, Tau Beta Pi, the engineering honor society, named him among five laureates in its recognition of engineering students who have excelled in areas beyond their technical majors.

After graduation, Dennis began a career in investment banking at Goldman Sachs in New York.

# ICM Modeling Forum

# Results of the 2014 Interdisciplinary Contest in Modeling

Chris Arney, ICM Director

Dept. of Mathematical Sciences

U.S. Military Academy

West Point, NY 10996

david.arney@usma.edu

# Introduction

A total of 1,028 teams from six countries spent a weekend working on an applied modeling problem involving the measurement of influence and impact in networks in the 16th Interdisciplinary Contest in Modeling (ICM)®. This year's contest began on Thursday, February 6, and ended on Monday, February 10, 2014. During that time, teams of up to three undergraduate or high school students researched, modeled, analyzed, solved, wrote, and submitted their solutions to an open-ended interdisciplinary modeling problem concerning influence and impact in research networks. After the weekend of challenging and productive work, the solution papers were sent to COMAP for judging. Six of the papers were judged to be Outstanding by the expert panel of judges.

COMAP's Interdisciplinary Contest in Modeling (ICM) involves students working in teams to model and analyze an open interdisciplinary problem. Centering its educational philosophy on mathematical modeling, COMAP supports the use of mathematical tools to explore real-world problems. It serves society by developing students as problem solvers in order to become better informed and prepared as citizens, contributors, consumers, workers, and community leaders. The ICM is an example of COMAP's efforts in working towards these goals.

This year's problem was challenging in its demand for teams to utilize aspects of science, mathematics, and analysis in their modeling and problem solving. The problem required teams to investigate the relationships involved in network models for determining influence in a large co-author network (Paul Erdős's 511 co-authors) and measuring impact within a set of foundational papers in the discipline of network science. This problem required teams to mine a large data set and understand concepts from the informational sciences to build effective models for these complex phenomena.

The problem contained many multifaceted issues to be analyzed and had several challenging requirements for innovative scientific and mathematical modeling and analysis. In addition to network modeling, informational analysis, and data collection, the teams had to explain the nature of influence and impact in an academic social network and show how their models could be used to help make informed decisions. This year's problem continued the ICM theme of network science for a third year.

The problem also had the ever-present ICM requirements to use thorough data analysis, creative modeling, and scientific methodology, along with effective writing and visualization to communicate the teams' results in a 20-page report. All members of the 1,028 competing teams are to be congratulated for their excellent work and dedication to interdisciplinary modeling and problem solving.

Next year's contest will add a second interdisciplinary problem to the available options for MCM/ICM contestants, for a total of four problems instead of three. We will continue the network science theme for one of the problems, and the second problem (Problem D for contestants) will focus on environmental issues. Teams preparing for the 2015 contest should consider reviewing interdisciplinary topics in the areas of network science and social network analysis for the C problem, and human-environment interactions in the areas of environmental science, climatology, food security, and geography for the D problem, and prepare and assemble their teams accordingly.
Finally, we announce the forthcoming publication of the volume The Interdisciplinary Contest in Modeling: Culturing Interdisciplinary Problem Solving, edited by Chris Arney and Paul J. Campbell, which will appear later in 2014. Details are given in an announcement following this article.

# A Brief History of the ICM

As always, a panel of expert judges read the papers, judged their attributes, debated their merits, and decided on the rankings reported in this article. Looking at the range of topics over the 16 years (Table 1), the contest shows its interdisciplinarity with problems involving elements from chemistry, physics, biology, engineering, information science, medicine, business, and network science. The problems also show a balance of public (government) and private (business) issues. Including a second ICM problem for the 2015 contest will give teams more choice and provide variety in the contest problems.

Results and winning papers from the first 15 contests were published in special issues of The UMAP Journal (1999-2013). In addition to this special issue of The UMAP Journal, COMAP offers a supplementary 2014 MCM-ICM CD-ROM containing the press releases for this contest and the MCM, the results, the problems, unabridged versions of the Outstanding papers, and judges' commentaries. Information about ordering is at http://www.comap.com/product/cdrom/index.html or at (800) 772-6627.

Table 1. Participating teams and topics in the first 16 years of the ICM.
| Year | Number of teams | Topic |
|------|-----------------|-------|
| 1999 | 40 | Controlling the spread of ground pollution |
| 2000 | 70 | Controlling elephant populations |
| 2001 | 83 | Controlling zebra mussel populations |
| 2002 | 106 | Preserving the habitat of the scrub lizard |
| 2003 | 146 | Designing an airport screening system |
| 2004 | 143 | Designing information technology security for a campus |
| 2005 | 164 | Harvesting and managing exhaustible resources |
| 2006 | 224 | Modeling HIV/AIDS infections and finances |
| 2007 | 273 | Designing a viable kidney exchange network |
| 2008 | 380 | Measuring utility in health care networks |
| 2009 | 374 | Balancing a water-based ecosystem affected by fish farming |
| 2010 | 356 | Controlling ocean debris |
| 2011 | 735 | Measuring the impact of electric vehicles |
| 2012 | 1,329 | Identifying criminals in a conspiracy network |
| 2013 | 957 | Planet Earth's health |
| 2014 | 1,028 | Using networks to measure influence and impact |
# 2014 ICM Problem Statement: Using Networks to Measure Influence and Impact

One of the techniques to determine influence of academic research is to build and measure properties of citation or co-author networks. Co-authoring a manuscript usually connotes a strong influential connection between researchers.

One of the most famous academic co-authors was the 20th-century mathematician Paul Erdős, who had over 500 co-authors and published over 1,400 technical research papers.

It is ironic (or perhaps not!) that Erdős is also one of the influencers in building the foundation for the emerging interdisciplinary science of networks, particularly through his publication with Alfred Rényi of the paper "On random graphs" [1959].

Erdős's role as a collaborator was so significant in the field of mathematics that mathematicians often measure their closeness to Erdős through analysis of Erdős's amazingly large and robust co-author network (see Grossman [2014]).

The unusual and fascinating story of Paul Erdős as a gifted mathematician, talented problem solver, and master collaborator is provided in many books and at Websites (e.g., O'Connor and Robertson [2000]). Perhaps his itinerant lifestyle, frequently staying with or residing with his collaborators, and giving much of his money to students as prizes for solving problems, enabled his co-authorships to flourish and helped build his astounding network of influence in several areas of mathematics.

To measure such influence as Erdős produced, there are network-based evaluation tools that use co-author and citation data to determine an impact factor of researchers, publications, and journals. Some of these are the Science Citation Index, H-factor, Impact Factor, Eigenfactor, etc. Google Scholar is also a good data tool to use for network influence or impact data collection and analysis. Your team's goal for ICM 2014 is to analyze influence and impact in research networks and other areas of society.
Your tasks to do this include:

1. Build the co-author network of the Erdos1 authors (you can use the file from the Website https://files.oakland.edu/users/grossman/enp/Erdos1.html or the one we include at Erdos1.htm). You should build a co-author network of the approximately 510 researchers from the file Erdos1, who co-authored a paper with Erdős, but do not include Erdős. This will take some skilled data extraction and modeling efforts to obtain the correct set of nodes (the Erdős co-authors) and their links (connections with one another as co-authors). There are over 18,000 lines of raw data in the Erdos1 file, but many of them will not be used, since they are links to people outside the Erdos1 network. If necessary, you can limit the size of the network that you analyze in order to calibrate your influence measurement algorithm. Once built, analyze the properties of this network. (Again, do not include Erdős—he is the most influential and would be connected to all nodes in the network. In this case, it's co-authorship with him that builds the network, but he is not part of the network or the analysis.)

2. Develop influence measure(s) to determine who in this Erdos1 network has significant influence within the network. Consider who has published important works or connects important researchers within Erdos1. Again, assume that Erdős is not there to play these roles.

3. Another type of influence measure might be to compare the significance of a research paper by analyzing the important works that follow from its publication. Choose some set of foundational papers in the emerging field of network science, either from the attached list (NetSciFoundation.pdf) or papers you discover. Use these papers to analyze and develop a model to determine their relative influence. Build the influence (co-author or citation) networks and calculate appropriate measures for your analysis.
Which of the papers in your set do you consider the most influential in network science, and why? Is there a similar way to determine the role or influence measure of an individual network researcher? Consider how you would measure the role, influence, or impact of a specific university, department, or journal in network science. Discuss methodology to develop such measures and the data that would need to be collected.

4. Implement your algorithm on a completely different set of network influence data—for instance, influential songwriters, music bands, performers, movie actors, directors, movies, TV shows, columnists, journalists, newspapers, magazines, novelists, novels, bloggers, tweeters, or any data set you care to analyze. You may wish to restrict the network to a specific genre or geographic location or predetermined size.

5. Finally, discuss the science, understanding, and utility of modeling influence and impact within networks. Could individuals, organizations, nations, and society use influence methodology to improve relationships, conduct business, and make wise decisions? For instance, at the individual level, describe how you could use your measures and algorithms to choose whom to try to co-author with in order to boost your mathematical influence as rapidly as possible. Or how can you use your models and results to help decide on a graduate school or thesis advisor to select for your future academic work?

6. Write a report explaining your modeling methodology, your network-based influence and impact measures, and your progress and results for the previous five tasks. The report must not exceed 20 pages (not including your cover sheet and summary) and should present solid analysis of your network data; strengths, weaknesses, and sensitivity of your methodology; and the power of modeling these phenomena using network science.
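The data-extraction step of Task 1 can be sketched compactly. The Python sketch below is illustrative only: the file layout it assumes (each Erdos1 author flush left with a trailing year/count annotation, that author's own co-authors indented beneath, and `*` marking co-authors who are themselves Erdos1 members) is an assumption about the Erdos1 file, and the three-author sample is invented, not real data.

```python
import re

def clean(raw):
    """Normalize one name: drop any trailing 'year[: count]' annotation
    and the '*' marker (layout assumed, not specified in the problem)."""
    name = re.sub(r"\s+\d{4}.*$", "", raw.strip())
    return name.rstrip("*").strip()

def build_coauthor_graph(text):
    """Return adjacency sets over the Erdos1 members only; links to people
    outside the Erdos1 list are discarded, and Erdős himself never appears
    (he is not listed as a flush-left name)."""
    adjacency, links, current = {}, [], None
    for line in text.splitlines():
        if not line.strip():
            continue
        name = clean(line)
        if line[0] not in " \t":          # flush left: an Erdos1 author
            current = name
            adjacency.setdefault(current, set())
        elif current is not None:         # indented: a co-author of `current`
            links.append((current, name))
    for a, b in links:                    # keep only links inside Erdos1
        if b in adjacency and a != b:
            adjacency[a].add(b)
            adjacency[b].add(a)
    return adjacency

def degree_centrality(adjacency):
    """Number of co-author links of each node (every link has weight 1)."""
    return {x: len(nbrs) for x, nbrs in adjacency.items()}

# Invented three-author sample in the assumed layout:
sample = """\
ABBOTT, HARVEY LESLIE 1974: 2
\tHANSON, DENIS*
\tLIU, ANDREW CHIANG-FUNG*
HANSON, DENIS 1971
\tABBOTT, HARVEY LESLIE*
\tOUTSIDER, SOMEONE
LIU, ANDREW CHIANG-FUNG 1976
\tABBOTT, HARVEY LESLIE*
"""
adj = build_coauthor_graph(sample)
print(sorted(degree_centrality(adj).items()))
```

With the adjacency sets in hand, standard shortest-path routines give the other network properties the task asks for; a library such as networkx can supply betweenness and closeness directly.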
# The Results

The 1,028 solution papers were coded at COMAP headquarters so that names and affiliations of the authors were unknown to the judges. Each paper was then read preliminarily by triage judges at the U.S. Military Academy at West Point, NY. At the triage stage, the summary, the model description, and overall organization are the primary elements in judging a paper. Final judging by a team of modelers, analysts, and subject-matter experts took place in late March. The judges classified the 1,028 submitted papers as follows:
| Problem | Outstanding | Finalist | Meritorious | Honorable Mention | Successful Participant | Total |
|---------|-------------|----------|-------------|-------------------|------------------------|-------|
| Influence/Impact | 6 | 5 | 131 | 367 | 519 | 1,028 |
Outstanding Teams

| Institution and Advisor | Team Members |
|-------------------------|--------------|
| "Who Are the 20%?"; Southeast University, Nanjing, China; advisor Dan He | Chen Wang, Mi Gong, Zhen Li |
| "The Research of Influence Based on the Characteristic of a Network"; National University of Defense Technology, Changsha, China; advisor Dan Wang | Sheng Zhang, Ran Cheng, Danling Zhao |
| "Influence Measures in Networks"; Central University of Finance and Economics, Beijing, China; advisor Xiaoming Fan | Xicheng Miao, Lingjing Gu, Yue Xu |
| "Methods of Measuring Influence Using a Network Model"; Xidian University, Xi'an, China; advisor Shuisheng Zhou | Sijia Jiang, Yuke Zhu, Ruijie He |
| "Who Is the Hidden Champion in a Network?"; Tsinghua University, Beijing, China; advisor Liping Zhang | Yanjun Han, Yingning Sun, Zhonghong Kuang |
| "A Three-Dimensional Network Impact Analysis Model"; Tsinghua University, Beijing, China; advisor Jun Ye | Jiawen Gu, Lu Chen, Yuanye Wang |
+ +# Awards and Contributions + +Each participating ICM advisor and team member received a certificate signed by the Contest Director. Additional awards were presented to the team from Southeast University, Nanjing, China, by the Institute for Operations Research and the Management Sciences (INFORMS). + +# Judging + +Contest Directors + +Chris Arney, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Joseph Myers, Mathematical Sciences Division, Army Research Office, Research Triangle Park, NC + +Associate Directors + +Tina Hartley, Dept. of Mathematical Sciences, U.S. Military Academy, West Point, NY + +Rodney Sturdivant, Dept. of Statistics, Ohio State University, Columbus, OH + +Judges + +Kristin Arney, (Ph.D. student), Dept. of Industrial and Systems Engineering, University of Washington, Seattle, WA + +Amanda Beecher, Dept of Mathematics, Ramapo College, Mahwah, NJ + +Kathryn Coronges, Network Science Center, U.S. Military Academy, West Point, NY + +Rachelle DeCoste, Dept. of Mathematics, Wheaton College, Norton, MA + +Amy Krakowska, Dept. of Geography and Environmental Sciences, U.S. Military Academy, West Point, NY + +Jessica Libertini, Dept. of Mathematics, University of Rhode Island, Kingston, RI + +Ziyang Mao, (Lecturer), Dept. of Mathematics and System Science, College of Science, National University of Defense Technology, Changsha, Hunan, P.R. China + +Kathleen Snook, COMAP Consultant, Bedford, MA + +Robert Ulman, Network Sciences Division, Army Research Office, Research Triangle Park, NC + +Jie Wang, Computer Science Dept., University of Massachusetts, Lowell, Lowell, MA + +Triage Judges + +Eleanor Abernethy, Chris Arney, Kevin Blaine, Peter Charbonneau, Jong Chung, Gabe Costa, Michael Findlay, Hilary Fletcher, Paul Goethals, + +Tina Hartley, John Jackson, Joseph Lavalle-Rivera, Timothy Povich, Jarrod Shingleton, James Starling, and Shaw Yoshitani + +—all of Dept. of Mathematical Sciences, U.S. 
Military Academy, West Point, NY

Amanda Beecher, Dept. of Mathematics, Ramapo College of New Jersey, Mahwah, NJ

Kathryn Coronges, Dept. of Behavioral Sciences, U.S. Military Academy, West Point, NY

Kevin Cummiskey, Rob Nowicki, and Chris Weld, U.S. Army

Ralucca Gera and Jonathon Roginski, Naval Postgraduate School, Monterey, CA

Michelle Craddock Guinn, Dept. of Mathematics, Belmont University, Nashville, TN

Sheila Miller, Dept. of Mathematics, Bard College, Annandale-on-Hudson, NY

Elizabeth Russell, National Security Agency, MD

Michael Smith, Missile Defense Agency, Huntsville, AL

Csilla Szabo, Dept. of Mathematics, Rensselaer Polytechnic Institute, Troy, NY

Johann Thiel, Dept. of Mathematics, New York City College of Technology, Brooklyn, NY

Robert Wooster, Dept. of Mathematics, College of Wooster, Wooster, OH.

# References

Erdős, P., and A. Rényi. 1959. On random graphs, I. Publicationes Mathematicae 6: 290-297.

Grossman, Jerry. 2014. The Erdős Number Project. http://www.oakland.edu/enp/ .

O'Connor, J.J., and E.F. Robertson. 2000. Paul Erdős. http://www-history.mcs.st-and.ac.uk/Biographies/Erdos.html .

# Acknowledgments

We thank:

- the Institute for Operations Research and the Management Sciences (INFORMS) for its support in judging and providing prizes for a winning team, and
- all the ICM judges for their valuable and unflagging efforts.

# Cautions

To the reader of research journals:

Usually a published paper has been presented to an audience, shown to colleagues, rewritten, checked by referees, revised, and edited by a journal editor. Each of the team papers here is the result of undergraduates working on a problem over a weekend. Editing (and usually substantial cutting) has taken place; minor errors have been corrected, wording has been altered for clarity or economy, and style has been adjusted to that of The UMAP Journal. The student authors have proofed the results.
Please peruse these students' efforts in that context.

To the potential ICM advisor:

It might be overpowering to encounter such output from a weekend of work by a small team of undergraduates, but these solution papers are highly atypical. A team that prepares and participates will have an enriching learning experience, independent of what any other team does.

# Editor's Note

The complete roster of participating teams and results is too long to reproduce in the Journal. It can be found at the COMAP Website:

http://www.comap.com/undergraduate/contest/mcm/contest/2014/results/2014_ICM_Results.pdf

# About the Author

Chris Arney graduated from the U.S. Military Academy and served as an intelligence officer in the U.S. Army. His academic studies resumed at Rensselaer Polytechnic Institute with an M.S. (computer science) and a Ph.D. (mathematics). He spent most of his 30-year military career as a mathematics professor at West Point, before becoming Dean of the School of Mathematics and Sciences and Interim Vice President for Academic Affairs at the College of Saint Rose in Albany, NY. Chris then moved to Research Triangle Park, NC, where he served in various positions in the Army Research Office. His technical interests include mathematical modeling, cooperative systems, pursuit-evasion modeling, robotics, artificial intelligence, military operations modeling, and network science; his teaching interests include using technology and interdisciplinary problems to improve undergraduate teaching and curricula. He is the founding director of the ICM. In August 2009, he rejoined the faculty at West Point as the Network Science Chair and Professor of Mathematics.

![](images/1c5ebb64c3505f68be8eb07969e72f957e22cf7bb933ea33e66859c96751a1f5.jpg)
+ +# Announcement + +Later in 2014 COMAP will publish a book devoted to the ICM and to interdisciplinary modeling: + +The Interdisciplinary Contest in Modeling: Culturing Interdisciplinary Problem Solving, + +edited by Chris Arney and Paul J. Campbell. + +This volume contains + +- the history of the ICM contest, +- statements of the 16 problems, +- listings and summaries of outstanding teams, +- demographics of contestants and their schools, and +- reflections and helpful advice articles by participants, advisors, judges, and directors. + +Chapters describe how to prepare teams and how to develop modeling curricula, along with discussions on the current interdisciplinary academic environment and related literature. + +The volume provides an insightful look at trends in educating future interdisciplinary modelers and problem solvers. + +The book will be available as a CD-ROM: + +http://216.250.163.249//product/?idx=1441 + +ISBN 978-1-933223-53-7 + +and as a printed book: + +http://216.250.163.249//product/?idx=1440 + +ISBN 978-1-933223-52-9 + +The table of contents is available at the Web pages indicated. + +# Who Are the $20\%$ ? + +Chen Wang + +Mi Gong + +Zhen Li + +Southeast University + +Nanjing, China + +Advisor: Dan He + +# Abstract + +The famous 80-20 rule states that for many events, $80\%$ of influence is caused by $20\%$ of those involved. This principle also applies in network science: Only a few nodes have a significant influence and impact on the whole network. We employ a Relation Distance Model and an Authority-Popularity Evaluation Model to measure the $20\%$ and analyze its influence. + +For Requirements 1 and 2, we construct the undirected co-author network based on the 511th-order relationship matrix. We propose a Relation Distance Model based on the SNA (social network analysis) technique. It combines three centrality indexes in a vector to calculate the "distance" from the most influential node. 
Another measure (eigenvector centrality), which takes both degree and the influence of co-authors into consideration, outputs a different rank. Validation of the model is discussed by comparing the two rankings of the top 15 authors in the Erdos1 network: We find ALON, NOGA M., is the most influential person in the network. We show that the degree distribution of the Erdos1 network approximates a power-law distribution, which indicates that it is a scale-free network. + +For Requirement 3, we establish an Authority-Popularity Evaluation Model to analyze the depth and the width of the influence of nodes. We calculate an Authority Index by our Modified PageRank Algorithm to measure the depth of impact. We define a Popularity Index as citations per year, to reflect the width of influence. We implement the algorithm on a citation-directed network of 24 papers with weighted nodes. The ranking of the papers is obtained by combining the Authority and Popularity Indexes. + +For Requirement 4, we construct for 15 actors a bidirectional network with movie co-stars as links. + +For Requirement 5, we discuss two characteristics of a scale-free network: growth and preferential attachment. The philosophy of the dynamics of a scale-free network is revealed to be the "Matthew effect." We propose a + +method to boost influence: Find the shortest links to the most influential author, cooperate step-by-step with the $80\%$ with low closeness-centrality, and finally co-author with the key figure in the field. + +We conduct a sensitivity analysis to study the robustness of our algorithm, and the results show a good stability. We further discuss strengths and weaknesses of our models. + +# Basic Assumptions + +Assumption 1. The strength of co-authorship between two arbitrary Erdos1 authors is the same. + +The accurate strength level of co-authorship between two arbitrary Erdos1 authors is hard to measure. 
For the sake of simplification, if two authors co-author a paper, the co-authorship index is 1; if not, it is 0.

Assumption 2. The significance of a research paper is determined by both its citations and its publication date; also, the influence of the journal should be considered.

The more often a paper is cited, or the earlier a paper is published, the more influential it is. Also, a highly influential journal contributes more to the influence of a paper.

Assumption 3. We assume that the quality of a movie is determined by its IMDb rating [Internet Movie Database 2014]. The popularity of a movie star is measured by the number of Google search results.

Definitions of symbols employed in this paper are listed in Table 1.

# Models for Requirements 1 and 2

# Data Preprocessing

Before presenting our models, we describe the preprocessing work that we did with the data.

- Step 1. We extracted the 511 Erdos1 authors from over 18,000 lines of raw data in the Erdos1 file by eliminating the names without a date; each Erdos1 author's name is followed by the date of the first joint paper with Erdős and possibly the number of joint publications (if more than one).
- Step 2. To obtain the relationships among the 511 Erdos1 authors, we removed the Erdos2 names from the Erdos1 list, using the function COUNTIF() in Microsoft Excel 2010.

Table 1. Symbols used.
| Variable | Description |
|----------|-------------|
| **Relation Distance Model** | |
| $x$ | Index of a member node |
| $a_{ix}$ | Relation strength |
| $d_{ix}$ | Relation distance |
| $C_d(x)$ | Degree centrality of node $x$ |
| $C_b(x)$ | Betweenness centrality of node $x$ |
| $C_c(x)$ | Closeness centrality of node $x$ |
| $g_{ij}(x)$ | Shortest path between $i$ and $j$ that passes through the node $x$ |
| $l_{ix}$ | Length of the shortest path connecting node $i$ and node $x$ |
| $n$ | Total number of nodes in a network |
| $\overrightarrow{A_x}$ | Vector containing the three centrality measures |
| $\overrightarrow{A^C}$ | The ideal vector of the node that has the most significant influence within the network |
| $D_C(x)$ | Euclidean distance defined to measure the influence and impact of node $x$ |
| **Eigenvector Centrality Model** | |
| $E_x$ | Eigenvector centrality value of node $x$ |
| $B=(b_{ij})$ | Adjacency matrix of a network |
| $c$ | Proportional constant |
| **Revised PageRank Algorithm** | |
| $PR_x(0)$ | Initial PageRank value of node $x$ |
| $s$ | Scale constant for the Revised Algorithm |
| $G_{ij}$ | Relationship matrix |
| $D$ | Coefficient matrix |
- Step 3. We constructed the 511th-order co-authorship matrix by executing a MATLAB program. For the sake of description, we give each Erdos1 author an ID number by the rule:

'1' stands for ABBOTT, HARVEY LESLIE
'2' stands for ACZEL, JANOS D.
...
'511' stands for ZIV, ABRAHAM

Figure 1 shows part of the resulting graph.

# Relation Distance Model based on SNA

# Overview

To build the Erdos1 network and analyze its properties, we employ the social network analysis (SNA) technique.

Social network analysis refers to methods to analyze social network structures made up of individuals ("nodes") tied (connected) by one or more specific types of interdependency. In our case, the Erdos1 authors are viewed as nodes and co-authorship as links among them.

![](images/931da607f04cc99307753f7ea715e0435dcaf1fcbd39ea8b18a8186e96dc6e66.jpg)
Figure 1. The co-author network of the Erdos1 file. For the sake of visibility, we illustrate just the top 50 Erdos1 authors.

# Methodology

- Step 1. We calculate the centrality measures that form the vector $\overrightarrow{A_x}$ for each node $x$ in the network. There are three popular centrality measures [Freeman 1979]:

- degree centrality $C_d(x)$,
- betweenness centrality $C_b(x)$, and
- closeness centrality $C_c(x)$.

These measures can be used to identify "masters" who have significant influence or impact in a network. They are defined as follows:

- Degree centrality is defined as

$$
C_d(x) = \sum_{i=1}^{n} a_{ix},
$$

where $n$ is the total number of nodes in a network and $a_{ix}$ is a variable indicating the weighted number of co-authorships between nodes $x$ and $i$. According to Assumption 1, in our Erdos1 network, $a_{ix} = 1$ or $0$ for all $i$.

- Betweenness centrality is defined as

$$
C_b(x) = \sum_{i}^{n}\sum_{j}^{n} g_{ij}(x),
$$

where $g_{ij}(x)$ indicates whether the shortest path between two other nodes $i$ and $j$ passes through the node $x$.
- Closeness centrality is defined as $C_{c}(x) = \sum_{i=1}^{n} l_{ix}$,

where $l_{ix}$ is the length of the shortest path connecting nodes $i$ and $x$. The shortest paths can be calculated with the Floyd-Warshall algorithm.

The centralities above describe different characteristics of nodes in a network:

- Degree centrality counts a node's connections and thus reflects its connectivity in the network. Nodes with high connectivity can be viewed as more influential.
- Betweenness centrality counts the shortest paths that pass through a node, revealing how much other nodes depend on it. A large betweenness centrality value indicates high importance in the network.
- Closeness centrality measures how far away one node is from the other nodes. Small closeness of a node reflects high importance.

We formulate a measure vector that contains all three measures:

$$
\overrightarrow {A _ {x}} = \left(\frac {C _ {d} (x)}{\max C _ {d} (x)}, \frac {C _ {b} (x)}{\max C _ {b} (x)}, \frac {C _ {c} (x)}{\max C _ {c} (x)}\right) = (A _ {x 1}, A _ {x 2}, A _ {x 3}).
$$

Each of the three elements is divided by its maximum value, so that all are normalized. According to the definitions of the three centralities, $\overrightarrow{A_x}$ achieves its optimal value when the degree component $(A_{x1})$ and the betweenness component $(A_{x2})$ reach their largest value, 1, and the closeness component $(A_{x3})$ reaches its smallest value, 0.

Therefore, an author who has the most significant influence within the network will ideally have measure vector

$$
\overrightarrow {A ^ {C}} = (A _ {1} ^ {C}, A _ {2} ^ {C}, A _ {3} ^ {C}) = (1, 1, 0).
$$

- Step 2. Calculate the "distances" of member nodes from the ideal.

We denote by $D_C(x)$ the Euclidean distance from the measure vector $\overrightarrow{A_x}$ of node $x$ to the ideal vector $\overrightarrow{A^C}$. Based on the idea of the ranking method from the TOPSIS algorithm [Mahmoodzadeh et al.
2007], we call this distance the "Influence and Impact Distance," defined as:

$$
D _ {C} (x) = \sqrt {(A _ {x 1} - A _ {1} ^ {C}) ^ {2} + (A _ {x 2} - A _ {2} ^ {C}) ^ {2} + (A _ {x 3} - A _ {3} ^ {C}) ^ {2}}.
$$

- Step 3. Generate the influence priority list.

This distance is the key to determining who in the Erdos1 network has significant influence: A greater distance indicates a lower likelihood of significant influence. We arrange the Erdos1 authors in a priority list according to Influence and Impact, i.e., the value of $D_C(x)$. Nodes with smaller $D_C(x)$ rank higher in the priority list, since $D_C(x)$ is the distance from the node to an ideal "most significant" node.

# Results and Analysis

In the three-dimensional graph of Figure 2, the 511 points are the measure vectors of the 511 Erdos1 authors.

![](images/13d9119c76d89bf88655bc35dde71339d3aca4b4390d2886ffd016298a453438.jpg)
Figure 2. Measure vectors of the 511 Erdos1 authors. The big red point (1, 1, 0) at center bottom stands for the ideal vector of being the most influential.

The top 15 authors in the Influence and Impact priority list are shown in Table 2. For the sake of visibility, we choose the top 50 Erdos1 authors in the impact priority list to draw the network in UCINET, as we did in Figure 1.

We can see from Table 2 that HARARY, FRANK* has the most significant influence within the co-author network.

Table 2. Top 15 most influential authors in the Erdos1 network.
| Ranking | $D_C(x)$ | ID | Name of author |
|---|---|---|---|
| 1 | 0.18 | 187 | HARARY, FRANK* |
| 2 | 0.29 | 438 | SOS, VERA TURAN |
| 3 | 0.30 | 10 | ALON, NOGA M. |
| 4 | 0.34 | 165 | GRAHAM, RONALD LEWIS |
| 5 | 0.36 | 148 | FUREDI, ZOLTAN |
| 6 | 0.36 | 44 | BOLLOBAS, BELA |
| 7 | 0.48 | 479 | TUZA, ZSOLT |
| 8 | 0.50 | 355 | POMERANCE, CARL BERNARD |
| 9 | 0.53 | 449 | STRAUS, ERNST GABOR* |
| 10 | 0.54 | 341 | PACH, JANOS |
| 11 | 0.58 | 180 | HAJNAL, ANDRAS |
| 12 | 0.65 | 378 | RODL, VOJTECH |
| 13 | 0.70 | 440 | SPENCER, JOEL HAROLD |
| 14 | 0.74 | 249 | KLEITMAN, DANIEL J. |
| 15 | 0.77 | 399 | SARKOZY, ANDRAS |
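As a concrete sketch of the Relation Distance Model, the snippet below computes the three centralities on a small toy co-author graph, normalizes each by its maximum, and ranks nodes by Euclidean distance to the ideal vector (1, 1, 0). The 6-node adjacency matrix is invented for illustration; it is not the Erdos1 data.

```python
from collections import deque
from itertools import combinations
import math

def shortest_paths(adj):
    """All-pairs shortest-path lengths via BFS (unweighted, undirected graph)."""
    n = len(adj)
    dist = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[s][v] == math.inf:
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return dist

def rank_by_relation_distance(adj):
    """Rank nodes by distance of (A_x1, A_x2, A_x3) to the ideal vector (1, 1, 0)."""
    n = len(adj)
    dist = shortest_paths(adj)
    cd = [sum(row) for row in adj]                        # degree centrality C_d(x)
    cb = [sum(1 for i, j in combinations(range(n), 2)     # betweenness C_b(x):
              if i != x != j                              # pairs whose shortest path
              and dist[i][x] + dist[x][j] == dist[i][j])  # runs through node x
          for x in range(n)]
    cc = [sum(dist[x][i] for i in range(n) if i != x)     # closeness C_c(x)
          for x in range(n)]
    dmax, bmax, cmax = max(cd), max(cb) or 1, max(cc)
    dc = [math.sqrt((cd[x] / dmax - 1) ** 2
                    + (cb[x] / bmax - 1) ** 2
                    + (cc[x] / cmax) ** 2) for x in range(n)]
    return sorted(range(n), key=lambda x: dc[x])          # smallest D_C(x) first

# Toy graph: node 0 bridges triangles (0,1,2) and (0,3,4); node 5 hangs off node 3.
adj = [[0, 1, 1, 1, 1, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 1, 1],
       [1, 0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0, 0]]
order = rank_by_relation_distance(adj)   # hub 0 ranks first, bridge 3 second
```

On the full Erdos1 data, the same pipeline would run on the order-511 co-authorship matrix built in Step 3.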
# Eigenvector Centrality

For the second question in Requirement 2, we consider who has published important works or connects important researchers within Erdos1.

The importance of a node is determined by both the number of its neighbor nodes (its degree) and the importance of those neighbors. In graph theory and network analysis, the centrality of a vertex measures its relative importance within a graph.

The definition of eigenvector centrality is

$$
E _ {i} = c \sum_ {j = 1} ^ {n} b _ {i j} E _ {j},
$$

where $c$ is a proportionality constant and $B = (b_{ij})$ is the adjacency matrix of the network.

Calculating the eigenvector centrality values of the 511 nodes in UCINET, we get the top 15 nodes shown in Table 3.

We can see from Table 3 that ALON, NOGA M. is the person who connects the most important researchers within Erdos1.

# Validation of the Model

Comparing the two ranking methods, the majority of the authors in the first table are also in the second table. However, some authors in the first table cannot be found in the second list. Why? We focus our discussion on HARARY, FRANK*.

Table 3. Top 15 connecting important researchers within Erdos1.
| Ranking | E-value | ID | Name of author |
|---|---|---|---|
| 1 | 0.26 | 10 | ALON, NOGA M. |
| 2 | 0.23 | 378 | RODL, VOJTECH |
| 3 | 0.21 | 44 | BOLLOBAS, BELA |
| 4 | 0.20 | 165 | GRAHAM, RONALD LEWIS |
| 5 | 0.20 | 148 | FUREDI, ZOLTAN |
| 6 | 0.19 | 479 | TUZA, ZSOLT |
| 7 | 0.18 | 440 | SPENCER, JOEL HAROLD |
| 8 | 0.18 | 177 | GYARFAS, ANDRAS |
| 9 | 0.17 | 462 | SZEMEREDI, ENDRE |
| 10 | 0.16 | 128 | FAUDREE, RALPH JASPER, JR. |
| 11 | 0.16 | 287 | LOVASZ, LASZLO |
| 12 | 0.15 | 78 | CHUNG, FAN RONG KING (GRAHAM) |
| 13 | 0.15 | 341 | PACH, JANOS |
| 14 | 0.15 | 261 | KOSTOCHKA, ALEXANDR V. |
| 15 | 0.15 | 326 | NESETRIL, JAROSLAV |
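The eigenvector-centrality definition above can be computed by power iteration: repeatedly multiply the score vector by the adjacency matrix and renormalize. The toy symmetric adjacency matrix below is our stand-in for the Erdos1 data, not the actual network.

```python
def eigenvector_centrality(b, iters=200):
    """Power iteration for E = c * B * E: apply B repeatedly and renormalize."""
    n = len(b)
    e = [1.0 / n] * n
    for _ in range(iters):
        nxt = [sum(b[i][j] * e[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in nxt) ** 0.5
        e = [v / norm for v in nxt]
    return e

# Toy symmetric adjacency matrix: node 0 joins triangles (0,1,2) and (0,3,4);
# node 5 is a pendant attached to node 3.
adj = [[0, 1, 1, 1, 1, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0],
       [1, 0, 0, 0, 1, 1],
       [1, 0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0, 0]]
scores = eigenvector_centrality(adj)   # hub node 0 gets the largest score
```

Unlike plain degree centrality, the pendant node 5 scores low not only because it has one link, but because that link leads to a node of middling importance.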
The degree of HARARY, FRANK* is the highest, 44. When we study the relatively most influential authors in the network, we discover that only 30 authors are "masters" within the network. That is to say, most of the co-authors of HARARY, FRANK* are not highly influential authors. So, when we consider both the degree and the influence of co-authors, an author whose degree ranks high may not be a "master." Thus, a weakness of the Relation Distance Model is that it is susceptible to a large degree value.

# Properties of the Erdos1 Network

We have already found each node's degree centrality, so we can obtain the degree distribution of the network. Combining the maximum-likelihood fitting methods and goodness-of-fit tests proposed by Clauset et al. [2007], we discover that the distribution of degree centrality $k$ in the Erdos1 network approximately follows a power law with exponent 1.6 (Figure 3):

$$
P (K > k) \sim k ^ {- 1. 6}.
$$

According to an idea of Barabási and Albert [1999], we can consider the Erdős1 co-authoring network a scale-free network. In a scale-free network, most nodes have a small degree, only very few nodes have a large degree, and the degree distribution is approximately a power law.

Some other properties of the network are:

- The overall graph clustering coefficient is 0.34.
- The average distance (among reachable pairs) is 3.83.

![](images/82d4bc0669cd10ce57ebf9e38f97d8fd2424742deabd6fc2fe1b41b665d16302.jpg)
Figure 3. Degree distribution of the Erdos1 network and the approximating power-law distribution.

# Authority-Popularity Evaluation Model

# The Citation Network for Requirement 3

For Requirement 3, we construct a directed network connection graph with weights.
The 16 foundational papers listed in the file supplied with the contest problem (NetSciFoundation.pdf) and 8 additional works that we discovered are taken as the nodes of the Citation network, and the citation relationships as the links. The Citation network is shown in Figure 4. The 8 additional works are listed in the Appendix.

- Explanation 1: If paper $A$ is cited by paper $B$, then the link between them is directed from $B$ to $A$ (for implementing the PageRank Algorithm).
- Explanation 2: According to Assumption 2, we define the weight of a node as the impact factor of the journal [Impact Factor Search 2014] in which the paper was published.
- Explanation 3: For ease of description, we give each paper an ID: 1 for the first paper in NetSciFoundation.pdf, 2 for the second, 17 for the first paper in the additional list, etc.

![](images/57ac18b9adf9c92e671aed703621945e5b73df74d84aa53557d2603070f7dde7.jpg)
Figure 4. The directed citation network with weights added to nodes.

# Authority-Popularity Evaluation Model

The aim of Requirement 3 is to determine relative influence. For this task, we develop an Authority-Popularity Evaluation Model, which combines a revised PageRank Algorithm with a normalized influence factor. In our model, the PageRank value and the influence factor reflect the depth and the width of the impact, respectively.

# The Modified PageRank Algorithm

The basic idea of the PageRank Algorithm is that the importance of a node is determined by the quantity and quality of the other nodes pointing to it. But when there is a dangling node (a node whose out-degree is 0), random surfing will fail, since the surfer will be "trapped" at the dangling node forever. There are three dangling nodes in the Citation network, so we propose a Modified PageRank Algorithm to measure influence. The Modified PageRank Algorithm includes two steps: Initialization and Iteration.
# - Step 1: Initialization

We give an initial PageRank value (PR value) $PR_{x}(0)$, for $x = 1, \ldots, n$, to all nodes in the network, normalized so that

$$
\sum_ {x = 1} ^ {n} P R _ {x} (0) = 1.
$$

The initial PR value of each paper is defined as the normalized impact factor of the journal in which the paper is published. The absolute impact factor values (collected from the Web [Impact Factor Search 2014]) for each paper are listed in Table 4. For instance, the weight of paper 4, "Emergence of scaling in random networks," published in Science, is equal to the impact factor of Science, viz., 31.027.

Table 4. Weights of Works.
| ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| Weight | 0.32 | 44.98 | 0.54 | 31.03 | 0.42 | 3.38 | 4.38 | 38.60 |
| **ID** | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| Weight | 2.31 | 9.74 | 5.95 | 3.54 | 31.03 | 38.60 | 5.02 | 3.38 |
| **ID** | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 |
| Weight | 2.31 | 39.27 | 1.83 | 2.54 | 40.66 | 9.74 | 9.74 | 7.94 |
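The initialization amounts to dividing each impact factor by their total, so that the initial PR values sum to 1. A minimal sketch using the values transcribed from Table 4:

```python
# Impact factors transcribed from Table 4, keyed by paper ID.
impact = {1: 0.32, 2: 44.98, 3: 0.54, 4: 31.03, 5: 0.42, 6: 3.38,
          7: 4.38, 8: 38.60, 9: 2.31, 10: 9.74, 11: 5.95, 12: 3.54,
          13: 31.03, 14: 38.60, 15: 5.02, 16: 3.38, 17: 2.31, 18: 39.27,
          19: 1.83, 20: 2.54, 21: 40.66, 22: 9.74, 23: 9.74, 24: 7.94}

total = sum(impact.values())
pr0 = {pid: w / total for pid, w in impact.items()}   # initial PR values, sum = 1
```

Papers published in the same journal (e.g., IDs 8 and 14, both in Science) start with identical PR values; the iteration step then differentiates them by their citation structure.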
An order-24 relationship matrix $G$ is constructed by the rule that if paper $j$ is cited by paper $i$, then $G_{ij} = 1$; otherwise $G_{ij} = 0$. The coefficient matrix $D$ (also of order 24) is the diagonal matrix

$$
D = \left( \begin{array}{c c c c c} \frac {1}{\sum (1)} & & & & \\ & \frac {1}{\sum (2)} & & & \\ & & \ddots & & \\ & & & \frac {1}{\sum (2 3)} & \\ & & & & \frac {1}{\sum (2 4)} \end{array} \right),
$$

where $\sum (i) = \sum_{j = 1}^{24}G_{ji}$.

# - Step 2: Iteration

![](images/014103ebc2d6d4b1538f8950622d8c2db336ce9c4492aec0d36f071a6243e3e0.jpg)
Figure 5. The flowchart of the modified PageRank iteration process.

At the $k$th iteration, we update the PageRank values by

$$
P R _ {i} (k) = q \sum_ {j} \frac {P R _ {j} (k - 1)}{L (j)} + \frac {1 - q}{n}, \qquad i = 1, 2, \ldots , n,
$$

where $j$ ranges over all nodes that point to node $i$, $L(j)$ is the number of outbound links of node $j$, and $q$ is a constant with default value 0.85. We iterate the equation above until the change in PageRank in a single step is small enough; we set this threshold at 0.0001. We then obtain the stable PageRank values $PR(k)$ shown in Table 5.

Table 5. Ranks of works according to the stable PageRank value.
| Ranking | $PR_i(k)$ | ID | Ranking | $PR_i(k)$ | ID |
|---|---|---|---|---|---|
| 1 | 0.01575 | 18 | 13 | 0.00094 | 19 |
| 2 | 0.01217 | 14 | 14 | 0.00060 | 11 |
| 3 | 0.00937 | 4 | 15 | 0.00060 | 23 |
| 4 | 0.00784 | 2 | 16 | 0.00051 | 24 |
| 5 | 0.00493 | 1 | 17 | 0.00051 | 20 |
| 6 | 0.00278 | 17 | 18 | 0.00051 | 13 |
| 7 | 0.00183 | 21 | 19 | 0.00047 | 22 |
| 8 | 0.00150 | 3 | 20 | 0.00047 | 16 |
| 9 | 0.00142 | 10 | 21 | 0.00047 | 15 |
| 10 | 0.00142 | 9 | 22 | 0.00047 | 12 |
| 11 | 0.00142 | 8 | 23 | 0.00047 | 7 |
| 12 | 0.00111 | 6 | 24 | 0.00047 | 5 |
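The iteration can be sketched as below on a toy 4-paper citation network. Spreading a dangling node's PR mass uniformly over all nodes is one standard fix and our reading of the modification; the paper describes its handling of dangling nodes only schematically.

```python
def pagerank(links, n, q=0.85, tol=1e-4):
    """Modified PageRank. links[j] lists the papers that paper j cites."""
    pr = [1.0 / n] * n
    while True:
        nxt = [(1 - q) / n] * n
        for j in range(n):
            out = links.get(j, [])
            if out:                          # distribute PR(j) over j's citations
                share = q * pr[j] / len(out)
                for i in out:
                    nxt[i] += share
            else:                            # dangling node: spread mass uniformly
                for i in range(n):
                    nxt[i] += q * pr[j] / n
        if max(abs(a - b) for a, b in zip(nxt, pr)) < tol:
            return nxt
        pr = nxt

# Toy citation network: papers 1, 2, 3 all cite paper 0; paper 0 cites nothing,
# so it is a dangling node, yet the iteration still converges.
pr = pagerank({1: [0], 2: [0], 3: [0, 1]}, 4)
```

The total PR mass stays 1 at every step, and the most-cited paper (node 0) ends with the highest value, mirroring how work No. 18 dominates Table 5.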
The result shows that work No. 18, *Random Graphs* by B. Bollobás, has the highest PageRank, so it is the most authoritative (highest Authority value) work among the 24. Authority reflects the depth of influence.

To find the most influential paper, we also need the width of influence: popularity. We use citations per year to measure popularity, as shown in Table 6.

Table 6. Citations per year of each paper.
| ID | Citations/year | ID | Citations/year |
|---|---|---|---|
| 14 | 1355.5 | 8 | 89.0 |
| 4 | 1256.2 | 1 | 82.4 |
| 2 | 1104.2 | 3 | 72.4 |
| 11 | 965.1 | 13 | 69.6 |
| 18 | 458.2 | 6 | 54.5 |
| 21 | 281.8 | 20 | 34.6 |
| 10 | 211.4 | 16 | 27.7 |
| 17 | 178.2 | 24 | 19.5 |
| 23 | 170.6 | 19 | 16.9 |
| 22 | 142.3 | 15 | 11.0 |
| 9 | 117.2 | 7 | 0.5 |
| 12 | 97.1 | 5 | 0.3 |
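With the values of Tables 5 and 6, the upper-right node of the Authority-Popularity Diagram can be located programmatically. The max-min scoring rule below is our illustrative way of encoding "both most authoritative and most popular," not a rule taken from the paper.

```python
# Stable PageRank values (Table 5) and citations per year (Table 6), by paper ID.
pr = {18: 0.01575, 14: 0.01217, 4: 0.00937, 2: 0.00784, 1: 0.00493,
      17: 0.00278, 21: 0.00183, 3: 0.00150, 10: 0.00142, 9: 0.00142,
      8: 0.00142, 6: 0.00111, 19: 0.00094, 11: 0.00060, 23: 0.00060,
      24: 0.00051, 20: 0.00051, 13: 0.00051, 22: 0.00047, 16: 0.00047,
      15: 0.00047, 12: 0.00047, 7: 0.00047, 5: 0.00047}
cpy = {14: 1355.5, 4: 1256.2, 2: 1104.2, 11: 965.1, 18: 458.2, 21: 281.8,
       10: 211.4, 17: 178.2, 23: 170.6, 22: 142.3, 9: 117.2, 12: 97.1,
       8: 89.0, 1: 82.4, 3: 72.4, 13: 69.6, 6: 54.5, 20: 34.6, 16: 27.7,
       24: 19.5, 19: 16.9, 15: 11.0, 7: 0.5, 5: 0.3}

def most_influential(pr, cpy):
    """Upper-right node of the diagram: maximize the weaker normalized score."""
    pmax, cmax = max(pr.values()), max(cpy.values())
    return max(pr, key=lambda i: min(pr[i] / pmax, cpy[i] / cmax))
```

Paper 18 leads on authority alone and paper 14 on popularity alone; requiring both to be high singles out paper 14, matching the conclusion drawn from Figure 6.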
To judge which is the most influential paper, we plot the 24 nodes (Figure 6). For the sake of visibility, we use a log-log scale, graphing $\log_{10}(10^6 \times PR_i(k))$ vs. $\log_{10}(10 \times \text{citations/year})$.

![](images/7e6fb0b5076d5ba9b1e7b010f100c6af646dfc7047f295aa09324923e7b52346.jpg)
Figure 6. Authority-Popularity Diagram for the citation network.

In our Authority-Popularity Evaluation Model, we divide Figure 6 into four regions:

- high authority, more popular;
- high authority, less popular;
- low authority, more popular; and
- low authority, less popular.

The node in the upper right corner is the most influential paper, since it is both the most authoritative and the most popular one. It is No. 14: "Collective dynamics of 'small-world' networks," by D. Watts and S. Strogatz.

# The Co-star Network for Requirement 4

Similar to the Erdős number in mathematics, the Bacon number in the film industry became popular in the late 1990s. The "Game of Kevin Bacon" measures the shortest path connecting an arbitrary actor/actress to Kevin Bacon. Inspired by the game, we collect a set of 15 famous Hollywood movie stars as nodes, with movies linking them to their co-stars, to construct a co-star network (Figure 7). This is also a directed network connection graph with weights, but the difference is that the weight value is on the link instead of on the node.

- Explanation 1: If $A$ is the leading actor and is supported by actor $B$ in a movie, then the link between them is directed from $B$ to $A$.
- Explanation 2: According to Assumption 3, we define the weight of the link between two stars as the IMDb rating [Internet Movie Database 2014] of the movie in which they co-starred.

![](images/8ba8af84afdcfc91a605650b59b829c47cf6d82bb82c79996364af64174f11ac.jpg)
Figure 7. The directed co-star network with weights added to links.
Since in the co-star network the weight is on the link instead of on the node, the PageRank algorithm has to be modified. In the original PageRank algorithm, at every step of the iteration, the PageRank value of node $i$ is distributed uniformly among the nodes that $i$ points to. In our improved PageRank algorithm, how much PageRank value node $i$ distributes to each node it points to is determined by the weight (the IMDb rating of the movie) on the link. This is reasonable because an excellent movie can make movie stars more influential. For example, suppose that the weight of the link from $i$ to $k$ is 3 and the weight of the link from $i$ to $n$ is 4. Then the PageRank value that $i$ distributes to $k$ is $\frac{3}{3 + 4} \times 1$ and the PageRank value that $i$ distributes to $n$ is $\frac{4}{3 + 4} \times 1$, as shown in Figure 8.

So the iteration for this algorithm is different: The relationship matrix $G$ used in the iteration for the citation network is revised to a matrix $G'$ for the co-star network. We write column $j$ of $G$ as $G_j$, and define $G'_j$ similarly. Each nonzero entry $G_{ij}$ corresponds to a movie, and the entries of $G'_j$ should be proportional to the IMDb ratings of those movies. For example, assume that the movie behind $G_{2j}$ has IMDb rating 3 and the movie behind $G_{4j}$ has IMDb rating 4, and let

$$
G _ {j} = \left[ \begin{array}{c c c c c} 0 & 1 & 0 & 1 & 0 \end{array} \right] ^ {T}.
$$

Then we get

$$
G _ {2 j} ^ {\prime} = \frac {3}{3 + 4} (1 + 1) = \frac {6}{7}, \quad G _ {4 j} ^ {\prime} = \frac {4}{3 + 4} (1 + 1) = \frac {8}{7}, \quad \text {and} \quad G _ {j} ^ {\prime} = \left[ \begin{array}{c c c c c} 0 & \frac {6}{7} & 0 & \frac {8}{7} & 0 \end{array} \right] ^ {T}.
$$

![](images/be6395c8ccc0388cd2d196a90f203de6e0e460d36ee97b5c70483a75bcd75816.jpg)
Figure 8. The diagram for the modified PageRank algorithm.

The other steps are the same as before. We also implement the improved PageRank algorithm in Matlab, with the results shown in Table 7.

Table 7.
PR value and number of Google search results for each star. + +
| Name | PR value | Google results (millions) |
|---|---|---|
| Angelina Jolie | 0.141 | 242 |
| Tom Cruise | 0.117 | 238 |
| Brad Pitt | 0.097 | 198 |
| Morgan Freeman | 0.092 | 61 |
| Matt Damon | 0.087 | 85 |
| Tom Hanks | 0.076 | 122 |
| Nicole Kidman | 0.074 | 87 |
| Leonardo DiCaprio | 0.070 | 149 |
| George Clooney | 0.062 | 58 |
| Nicolas Cage | 0.044 | 35 |
| Sandra Bullock | 0.044 | 50 |
| Joseph Gordon-Levitt | 0.032 | 20 |
| Anne Hathaway | 0.032 | 86 |
| Kirsten Dunst | 0.016 | 16 |
| Tobey Maguire | 0.015 | 7 |
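The column-reweighting step behind Table 7 (redistributing a node's outgoing PR mass in proportion to IMDb ratings, as in the $G$ to $G'$ example above) can be sketched as:

```python
from fractions import Fraction

def reweight_column(col, ratings):
    """Split a column's total PR mass in proportion to the link weights.

    col     -- 0/1 column G_j of the relationship matrix
    ratings -- IMDb rating of the movie behind each nonzero row, keyed by row index
    """
    mass = sum(col)                                   # total mass leaving node j
    wsum = sum(ratings[i] for i, v in enumerate(col) if v)
    return [Fraction(ratings[i], wsum) * mass if v else Fraction(0)
            for i, v in enumerate(col)]

# The worked example from the text: ratings 3 and 4 on the two nonzero rows.
col = [0, 1, 0, 1, 0]
newcol = reweight_column(col, {1: 3, 3: 4})           # [0, 6/7, 0, 8/7, 0]
```

Exact fractions make it easy to check that the column's total mass is preserved: $6/7 + 8/7 = 2$, the same as the two unit entries it replaces.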
As we did earlier for the citation network, we draw a figure for the 15 nodes in the co-star network. In Figure 9, we graph stable PageRank value vs. $\log_{10}$ (number of Google search results). It is clear that Angelina Jolie is the most influential movie star in this network.

![](images/8a029abd56ba5fbb73cc85bcaf958c63f426e9bbd374bc9a0d61e3f6559389b8.jpg)
Figure 9. Authority-Popularity Diagram for the co-star network.

# Sensitivity Analysis

To analyze the robustness of our model, we perform a sensitivity analysis. The PageRank algorithm uses the equation

$$
P R _ {i} (k) = q \sum_ {j} \frac {P R _ {j} (k - 1)}{L (j)} + \frac {1 - q}{n}, \qquad i = 1, 2, \ldots , n,
$$

where $q$ is a constant, generally called the damping coefficient. The meaning of $q$ is: With probability $(1 - q)$, the random surfer jumps to a node chosen uniformly at random, so every node receives a baseline PageRank share of $1/n$. If the coefficient $q$ were not used, the "strong" nodes would become so strong that other nodes would find it hard to "survive."

For the co-star network, too, this makes sense. The PageRank value of a movie star reflects influence, which can also be seen as the likelihood that a movie wants him or her to star. But there are always some movies that do not need the most famous actor, and for these all movie stars have an opportunity to be the star. This situation is reflected by the coefficient $q$. So the value of $q$ can influence the PageRank of the network. We set the default to be $q = 0.85$, as is standard. Varying $q$, we get a corresponding PageRank and can compare it with the standard one (Figure 10).

The results show that within a certain range, the PageRank changes slowly with a change in $q$. So our model has high stability.

![](images/bbd72733900c4b7e7ca375658f2c669af8e9c0046066df8ec212f7ba65cde922.jpg)
Figure 10. Sensitivity to damping coefficient $q$.
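The sweep over the damping coefficient can be reproduced on a toy network: compute PageRank for several values of $q$ and check that the ranking does not change. The 4-node network below is invented for illustration.

```python
def pagerank(links, n, q, tol=1e-8):
    """Plain PageRank on a network with no dangling nodes."""
    pr = [1.0 / n] * n
    while True:
        nxt = [(1 - q) / n] * n
        for j, out in links.items():
            share = q * pr[j] / len(out)
            for i in out:
                nxt[i] += share
        if max(abs(a - b) for a, b in zip(nxt, pr)) < tol:
            return nxt
        pr = nxt

# Toy co-star network: 0 and 1 support each other; 2 and 3 mostly support 0.
links = {0: [1], 1: [0], 2: [0, 1], 3: [0, 2]}
orders = []
for q in (0.70, 0.75, 0.80, 0.85, 0.90, 0.95):
    pr = pagerank(links, 4, q)
    orders.append(sorted(range(4), key=pr.__getitem__, reverse=True))
# A stable model gives the same ranking across the whole q sweep.
```

The individual PR values do shift with $q$, but the ranking (and hence the conclusions drawn from it) stays fixed, which is the sense of "high stability" claimed above.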
# Strengths and Weaknesses

# Strengths

- Comprehensiveness: From the perspectives of both depth and width, we determine importance/influence based on a variety of indexes.
- Adaptability and Practicability: The model we build has good portability; it is suitable for most network analyses.
- Simplicity and Accuracy: The model's programs are easy to understand, and the calculations are precise.
- Flexibility: Whether weights are added to the nodes or to the links, and whether or not there are dangling nodes, our model can handle the network.

# Weaknesses

- Data Limitations: If the data on a network are limited, the error may be large.

# The Philosophy of a Scale-Free Network

Barabási and Albert [1999] pointed out that an ER (Erdős-Rényi) random graph and a WS (Watts-Strogatz) small-world model neglect two important characteristics of an actual network:

- Growth: An actual network typically grows; for instance, many papers are published every month. However, in an ER random graph and in a WS small-world model, the number of nodes is fixed.
- Preferential attachment: New nodes tend to connect with nodes of high connectivity. This phenomenon is also known as "the rich get richer" or the "Matthew effect" [Wikipedia 2014a]. New papers tend to cite important and influential papers that have been widely quoted; likewise, actors try their best to appear alongside stars.

The dynamics of a scale-free network are the basis for the famous Pareto principle [Wikipedia 2014b] (also known as the 80-20 rule), which states that for many events, roughly $80\%$ of the effects come from $20\%$ of the causes.

# Shortcut to Boost Influence

In the real world, "important nodes" are hard for ordinary ones to approach: An ordinary researcher will find it hard to co-author with a leading figure, since the important ones tend to co-author with other influential people.
So, how can we boost our influence as quickly as possible? The solution that we propose from our model is illustrated in Figure 11:

- Determine the "20%" in the network according to our Relation Distance Model or Authority-Popularity Evaluation Model.
- Calculate the closeness centrality of the "80%" to the "20%."
- Cooperate step by step with those of the "80%" (relatively more approachable) who have low closeness centrality.
- Finally, co-author with the key figure in the field!

![](images/52971a7b82dd03ad9beced78f7fbaf73c9a66fee6fcb8b6072612f07f4f7ae1e.jpg)
Figure 11. The shortcut to boost influence.

# Appendix: The 8 Additional Papers

No. 17: Newman, M.E.J., S.H. Strogatz, and D.J. Watts. 2001. Random graphs with arbitrary degree distributions and their applications. *Physical Review E* 64 (2): 026118-026134.
No. 18: Bollobás, B. 2001. *Random Graphs*. 2nd ed. New York: Academic Press.
No. 19: Holland, P.W., and S. Leinhardt. 1981. An exponential family of probability distributions for directed graphs. *Journal of the American Statistical Association* 76: 33-65.
No. 20: Snijders, T.A.B. 2002. Markov chain Monte Carlo estimation of exponential random graph models. *Journal of Social Structure* 3 (2).
No. 21: Watts, D.J. 1999. *Small Worlds*. Princeton, NJ: Princeton University Press.
No. 22: Barrat, A., M. Barthelemy, R. Pastor-Satorras, and A. Vespignani. 2004. The architecture of complex weighted networks. *Proceedings of the National Academy of Sciences* 101 (11): 3747-3752.
No. 23: Amaral, Luís A. Nunes, Antonio Scala, Marc Barthelemy, and H. Eugene Stanley. 2000. Classes of small-world networks. *Proceedings of the National Academy of Sciences* 97 (21): 11149-11152.
No. 24: Barthelemy, M., and Luís A. Nunes Amaral. 1999. Small-world networks: Evidence for a crossover picture. *Physical Review Letters* 82: 3180.

# References

Barabási, Albert-László, and Réka Albert. 1999. Emergence of scaling in random networks. *Science* 286 (5439): 509-512.
Clauset, Aaron, Cosma Rohilla Shalizi, and M.E.J. Newman. 2007. Power-law distributions in empirical data. arXiv:0706.1062v1.
Freeman, L.C. 1979. Centrality in social networks: Conceptual clarification. *Social Networks* 1 (3): 215-239.
Impact Factor Search. 2014. http://www.impactfactorsearch.com .
Internet Movie Database (IMDb). 2014. http://www.imdb.com/ .
Mahmoodzadeh, S., J. Shahrabi, M. Pariazar, and M.S. Zaeri. 2007. Project selection by using fuzzy AHP and TOPSIS technique. *World Academy of Science, Engineering and Technology* 30: 333-338.
Reynolds, Patrick. 2013. The Oracle of Bacon. http://oracleofbacon.org/ .
Wikipedia. 2014a. Matthew effect. http://en.wikipedia.org/wiki/Matthew_effect .
Wikipedia. 2014b. Pareto principle. http://en.wikipedia.org/wiki/Pareto_principle .

![](images/7b43b81ee3cfa5ecbefdf1f97f930893e3fa97502e7113282beef10d2c9a0d00.jpg)

Team members Zhen Li, Mi Gong, and Chen Wang (the sign behind them says "Department of Mathematics, Southeast University").

# Judges' Commentary: Measuring Network Influence and Impact

Chris Arney
Dept. of Mathematical Sciences
U.S. Military Academy
West Point, NY 10996
david.arney@usma.edu

Kathryn Coronges
Department of Behavioral Sciences and Leadership
U.S. Military Academy
West Point, NY

Tina Hartley
Dept. of Mathematical Sciences
U.S. Military Academy
West Point, NY

Jessica Libertini
Dept. of Mathematics
Virginia Military Institute
Lexington, VA

# Introduction

The topic area for this year's Interdisciplinary Contest in Modeling (ICM) was network science. Network science will continue to be the topic area for one of next year's ICM problems. However, there will also be a second ICM problem, involving human-environment interactions in areas of environmental science including climatology, food security, and geography.
Teams that want to organize early for next year's contest can prepare by studying network modeling or environmental science and assembling a team with one of those subjects in mind.

The ICM continues to be an opportunity for teams to tackle challenging real-world problems that require a wide breadth of understanding in multiple academic subjects and skill in modeling interdisciplinary phenomena. These kinds of interdisciplinary study and modeling are included in the definition of network science. The complexity of ICM problems, along with the short duration of the contest, requires effective communication and coordination among team members. One of the most challenging issues for ICM teams is to organize and collaborate effectively to use each team member's skills and talents to tackle the diverse nature of ICM problems. Teams that resolve this organizational challenge and cooperate well often submit 20-page solutions that rise to the higher levels of ICM awards.

# The Problem Statement

One technique for determining the influence of academic research is to build and measure properties of citation or co-author networks. Co-authoring a manuscript usually connotes a strong influential connection between researchers.

One of the most famous academic co-authors was the 20th-century mathematician Paul Erdős, who had over 500 co-authors and published over 1,400 technical research papers.

It is ironic (or perhaps not!) that Erdős is also one of the influencers in building the foundation for the emerging interdisciplinary science of networks, particularly through his publication with Alfred Rényi of the paper "On random graphs" [1959].

Erdős's role as a collaborator was so significant in the field of mathematics that mathematicians often measure their closeness to Erdős through analysis of Erdős's amazingly large and robust co-author network (see Grossman [2014]).
The unusual and fascinating story of Paul Erdős as a gifted mathematician, talented problem solver, and master collaborator is told in many books and online Websites (e.g., O'Connor and Robertson [2000]). Perhaps his itinerant lifestyle, frequently staying with his collaborators, and his habit of giving much of his money to students as prizes for solving problems, enabled his co-authorships to flourish and helped build his astounding network of influence in several areas of mathematics.

To measure influence such as Erdős produced, there are network-based evaluation tools that use co-author and citation data to determine the impact of researchers, publications, and journals. Some of these are the Science Citation Index, H-factor, Impact factor, Eigenfactor, etc. Google Scholar is also a good data tool for collecting and analyzing network influence or impact data. Your team's goal for ICM 2014 is to analyze influence and impact in research networks and other areas of society.

We summarize the tasks for the teams in this year's ICM problem:

1. Build the co-author network of the 511 Erdős1 co-authors. This will take some skilled data extraction and modeling efforts to obtain nodes (the Erdős co-authors) and their links (connections with one another as co-authors). There are over 18,000 lines of raw data in the Erdős1 file, but many of them will not be used, since they are links to people outside the Erdős1 network. Analyze the properties of this network.
2. Develop influence measure(s) to determine who in this Erdos1 network has significant influence within the network. Consider who has published important works or connects important researchers within Erdos1.
3. Another type of influence measure might be to compare the significance of a research paper by analyzing the important works that follow from its publication. Choose a set of foundational papers in the emerging field of network science (a possible set was provided).
Use these papers to develop a model to determine their relative influence. Which paper of the set do you consider the most influential in network science, and why?
4. Implement your algorithm on a completely different set of network influence data.
5. Discuss the science and utility of modeling influence and impact within networks. Could individuals, organizations, nations, and society use influence methodology to improve relationships, conduct business, and make wise decisions?

# A Short Historical Reflection on Paul Erdős and the Erdős Co-author Network

Paul Erdős's creative research advanced graph theory, combinatorics, discrete mathematics, and number theory and laid foundations for the applied subjects of computer and network science. He excelled at modeling number systems and graphical structures and at determining their properties. He worked on many of the most important problems in these fields and, through tireless effort and amazing skill, became the most prolific and eccentric mathematician of modern times. He published over 1,500 scholarly papers. Paul Hoffman, who wrote Erdős's biography *The Man Who Loved Only Numbers*, wrote, "Erdős's style was one of intense curiosity, a style he brought to everything he confronted" [2012, 21]. The ICM hopes to develop that trait of curiosity in its contestants.

Because of Erdős's extensive collaborations, mathematicians began tracking and counting his collaborators. A special network numbering system was devised such that a person who collaborated by publishing a paper with Erdős was given an Erdős number of 1. Collaboration (publishing) with an Erdős 1 author gave a mathematician an Erdős number of 2, and so on. If there is no chain of co-authorships connecting someone with Erdős, the person's Erdős number is infinite. The result of this effort, using the Erdős Number Project site (http://www.
oakland.edu/enp/) [Grossman 2014] and data through the MathSciNet service of the American Mathematical Society's Mathematical Reviews, is an elaborate collaboration graph of the mathematics research community that captures the connections of over 400,000 authors. Elaborate records and a myriad of statistics are kept on Erdős's collaborations and connections. Today the Erdős Number Project Website provides all sorts of trivia, such as the data in Table 1 on co-author connections to Erdős [Grossman 2014]. The table shows the number of people with Erdős number 1, 2, 3, ..., according to the electronic data from MathSciNet (slightly different from other data sources).

Table 1. Numbers of mathematicians with particular Erdős numbers.
| Erdős number | Number of mathematicians |
|---|---|
| 0 | 1 |
| 1 | 504 |
| 2 | 6,593 |
| 3 | 33,605 |
| 4 | 83,642 |
| 5 | 87,760 |
| 6 | 40,014 |
| 7 | 11,591 |
| 8 | 3,146 |
| 9 | 819 |
| 10 | 244 |
| 11 | 68 |
| 12 | 23 |
| 13 | 5 |
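The summary statistics quoted for this distribution (median 5, mean 4.65, standard deviation 1.21) can be recomputed directly from the counts in Table 1:

```python
import math

# Number of mathematicians with each finite Erdős number (Table 1).
counts = {0: 1, 1: 504, 2: 6593, 3: 33605, 4: 83642, 5: 87760, 6: 40014,
          7: 11591, 8: 3146, 9: 819, 10: 244, 11: 68, 12: 23, 13: 5}

total = sum(counts.values())
mean = sum(k * n for k, n in counts.items()) / total
std = math.sqrt(sum(n * (k - mean) ** 2 for k, n in counts.items()) / total)

# Median: smallest Erdős number whose cumulative count reaches half the people.
cum, median = 0, None
for k in sorted(counts):
    cum += counts[k]
    if cum >= total / 2:
        median = k
        break
```

The counts sum to 268,015 people with a finite Erdős number, which is the "268,000 people" figure cited in the text.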
Thus, at the moment this table was tabulated, the median Erdős number was 5, the mean 4.65, and the standard deviation 1.21. In addition to these 268,000 people with a finite Erdős number, there are about 50,000 published mathematicians who have collaborated with others but have an infinite Erdős number, and 84,000 who have never published joint work (and therefore also have an infinite Erdős number).

Erdős lived from 1913 to 1996 and spent six decades living out of two tattered suitcases. He would show up on the doorsteps of his colleagues prepared to do work, and they would accommodate him. After working through a problem or two and writing a paper, he would move on to the next research station, hopefully to confront the next problem and find its solution.

Erdős received many awards, which allowed him the freedom to travel and the money to pay student solvers of his legendary challenge problems. His challenge problems were often easy to state but difficult to solve. These numerous cash giveaways to student problem-solvers made Erdős's campus visits special events. Another quotation about Erdős from Hoffman's book reinforces an interdisciplinary perspective: "For Erdős, mathematics was a glorious combination of sciences and art" [1998, 27].

# Judges' Criteria

The panel of judges was impressed by the modeling of many teams. Many papers were rich in network modeling methodology and modeling creativity. To ensure that the individual judges assessed submissions on the same criteria, a judging rubric and guide was developed. The general framework used to evaluate submissions is described below. The main thrust of ICM problem-grading is finding and evaluating modeling that includes good science and leads to measurable outcomes and a viable solution.

# Executive Summary

It was important in the summary that students succinctly and clearly explained the highlights of their submissions.
The executive summary should contain brief descriptions of both the problem and the bottom-line results. One mark of better papers was a summary with a well-connected and concise description of the methodology, results, and recommendations.

# Modeling

Well-defined measures of influence and impact were needed to build a viable model. Many teams started with standard network centrality measures and modified them to produce influence or impact effects. Other teams derived measures from clustering, community building, and network dynamics. In this problem, teams needed to develop viable influence measure(s) to determine who has significant influence within the network. For some teams, influence was a scalar value; others established a multidimensional vector with several components.

Many teams used network analysis software packages such as Gephi, ORA, Pajek, and UCINET for both calculations and visualizations. In many cases, the resulting mathematical analysis included statistical measures. Some teams used the explicit structures of networks or graphs to determine classic nodal measures and properties. In such cases, critical assumptions such as the directionality of influence and the weights of connections within the network led to viable network models.

Better papers discussed the differences between co-author and citation networks by explaining that a co-author network is undirected while a citation network is directed. Similarly, the Erdős co-author network is now static (nearly 20 years after his death); but the citation network is dynamic, with new citations to papers occurring frequently.

No matter the modeling framework, the assumptions needed for these models and the careful and appropriate development of these models were important in evaluating the quality of the solutions. The better submissions explicitly discussed why key assumptions were made and how these assumptions affected model development. 
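To illustrate the kind of standard centrality measures many teams started from, here is a generic sketch on a toy undirected co-author graph (the graph and names are hypothetical, not any team's model):

```python
from collections import deque

# Toy undirected co-author graph as adjacency sets; names are illustrative only.
G = {"Erdos": {"A", "B", "C"}, "A": {"Erdos", "B"},
     "B": {"Erdos", "A", "D"}, "C": {"Erdos"}, "D": {"B"}}

def degree_centrality(G):
    # Fraction of the other nodes each node is directly linked to.
    n = len(G) - 1
    return {v: len(nbrs) / n for v, nbrs in G.items()}

def closeness_centrality(G, v):
    # BFS distances from v; closeness = (n - 1) / (sum of distances to v).
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in G[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(G) - 1) / sum(dist.values())

print(degree_centrality(G)["Erdos"])                 # 0.75
print(closeness_centrality(G, "Erdos"))              # 0.8
```

Modifying such baseline measures (reweighting links, restricting to communities, adding time dependence) is the kind of step the better papers then explained explicitly.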
Stronger submissions presented a balanced mix of mathematics and prose rather than a series of equations and parameter values without explanation. One major discriminator was the use or misuse of arbitrary parameters without any explanation or analysis. Establishing and explaining parameter values in models is at least as significant as making and validating assumptions.

Perhaps the most challenging aspect of this problem was choosing a topic for the test application and collecting its dataset. Collecting good data to which their influence measure could appropriately be applied was a challenge. The judges recognized this challenge and rewarded papers with strong datasets that produced viable network models.

# Science

The ICM modelers discussed the science of influence at many levels. Some teams did effective background research and analysis of this aspect of the problem, included elements of their scientific analysis, and described how their model fit into the science of influence. In this case, powerful scientific analysis was performed best by making strong, insightful connections between the precise mathematical measures that teams created and the abstract notions of influence that they drew from social theory.

No matter what level of modeling was performed by the teams, the interdisciplinary nature of this problem was revealed in the science requirements and the background investigation performed by the teams. The ICM students were exposed to the nature of influence in information theory in performing their background research, and the team reports required proper documentation of the team's research sources.

# Data/Validity/Sensitivity

Filtering through the 18,000 lines of data to extract the 511 co-author nodes and nearly 1,700 links was a challenge for some teams, as was collecting data for their own test application. Sensitivity analyses to determine the effects of assumptions and of data validity were valuable for some of the teams. 
Sensitivity analysis is especially important for highly structured and powerful data-rich models such as networks. Some network structures are highly robust and flexible, while others are more fragile and highly sensitive to data errors or changes. While this sensitivity analysis is a challenging element of network modeling, it was important to address this modeling issue in the report. Teams that did this well quickly rose to the top of judges' evaluations.

# Strengths/Weaknesses

Discussion of the strengths and weaknesses of the models is where students demonstrate their deeper understanding of what they have created. The utility of a model fades quickly if team members do not understand the limitations or constraints of their assumptions or the implications of their methodology. Networks are non-reductive, complex structures and, therefore, their strengths and weaknesses are often hidden from direct view or full control of the modeler. Some of the better reports presented these elements despite these challenges.

# Communication/Visuals/Charts

To clearly explain solutions, teams used multiple modes of expression including diagrams and graphs, and—for this contest—clearly written English. A report that could not be understood did not progress to the final rounds of judging. Judges were often well informed through the amazing array of powerful charts and graphs that explained both models and results. The graphics shown in Figures 1-3 provide a glimpse of the richness of this kind of presentation.

![](images/136f58b590d369c2da05fa9ff0fb33f233707398c97d8dd22fe0287b1a531722.jpg)
Figure 1. Many teams provided informative network graphs to show the entire co-authors' network model. The graphic on the left is from Team 25425 (Beijing University of Posts and Telecommunications). The graphic on the right is from Team 30407 (Hong Kong Polytechnic University).

![](images/fcdcfd0a69bf0d9a2292fcd3eb1c561c972abbb12f6d9a496056a0ab27771736.jpg)
Figure 2. 
Some reports contained elaborate co-citation network diagrams, like this one from Team 31227 (Humboldt State University). + +![](images/5e4b1f4b7980362411ebfcd73aeba713321a32b4623f9e047f465337310d86c1.jpg) +Figure 3. Other graphics zoomed in on significant parts of the co-author network to show details of the most influential co-authors, like this one from the report by Team 26715 (Peking University). + +# Discussion of the Outstanding Papers + +Despite the common dataset and tasks, many different approaches were used by ICM teams to model various aspects of the problem. As a result, the submissions were varied and interesting. Overall, the basic modeling was often sound, creative, and powerful. Those papers that did not reach final judging generally suffered from various shortcomings. Some lacked clear explanation of the structure of their model. They provided some details but not a complete description of their model and its purpose. Others failed to connect their mathematical models to the aspects and basic elements of the science of influence. In general, incomplete or awkward communication was the most significant discriminator in determining which papers reached the final judging stage. + +Although the six Outstanding papers used different methodologies, they all addressed the problem in a comprehensive way. These Outstanding papers were generally well written and presented clear explanations of their modeling procedures. In several of the Outstanding papers, a unique or innovative approach distinguished them from the rest of the finalists. Others were noteworthy for either the thoroughness of their modeling or the significance of their results. Summaries of the six Outstanding papers follow. 
# Central University of Finance and Economics, China: "Influence Measures in Networks"

The team from Central University of Finance and Economics gave their report the title "Influence Measures in Networks" to reflect their focused and quite thorough investigation of network proximity as a proxy for influence among network members. They rightly point to the limitations of some of the traditional network metrics, namely centrality measures, in their inability to efficiently handle link weights and to account for the whole network structure. This group instead combined the Shapley approach, a concept developed in cooperative game theory, with a cohesion measure (KPP-POS, developed by Stephen Borgatti) to capture influence effects. The limitation of this measure is that it can be applied only to undirected networks.

The team's approach to directed networks was somewhat less novel, making modest modifications to the frequently used PageRank measure.

The team's combination of conventional social network analytic methods with the less obvious, game-theoretic Shapley approach showed breadth and depth in their handling of this problem. Challenged with developing a more appropriate and meaningful influence measure, they modified Borgatti's cohesion measure to enable inclusion of weighted edges and, more notably, used the Shapley method to account for the rank order of nodes.

As a nice introduction to their analysis, the team provided descriptive information about the networks, such as the degree distribution, path length, and clustering. They also showed that their new metric of influence gives different results than conventional betweenness centrality values. They chose to validate their Shapley-cohesion measure on a weighted network that they built by selecting actors who have collaborated with the popular British actor Jude Law. Unfortunately, on that dataset, their measure did not perform differently from betweenness centrality. 
The Jude Law network may have been too small to enable detection of differences among these metrics.

The team's approach to directed networks was one used by many ICM teams. However, this team successfully modified the network data to overcome some of the limitations of the PageRank method. Specifically,

- they incorporated additional papers into the citation network to augment the data specified in the problem; and
- they weighted the papers as a function of the number of times they were cited by these endogenous nodes.

The schematic of their model is provided in Figure 4.

![](images/63b4297e1c23bbb03cff9dec7d2559f4658f4e691e509f4a65544da4b54786c0.jpg)
Figure 4. Network model supplemented with additional co-citation papers.

With their modified and optimized PageRank measure, the team found influence rankings among publications that differed from the standard PageRank measure. However, they did not investigate whether the papers that jumped in rank were any more meaningful than those in the standard ranking. In addition, the team did not deal with the dimension of time, ignoring the fact that articles could not be cited by articles that appeared earlier in time, or that articles that were cited more recently or over longer periods of time were probably more influential.

Judges were impressed with the team's ability to combine conventional network metrics with game-theoretic approaches. Further, the team showed thoughtfulness about applying network science concepts to a problem of social influence. This paper was well written and contained graphic results such as Figure 5, which compares the nodal degree distributions of the Erdos1 network and a similar-sized random network.

![](images/43bfb2cca72a05fdee37de62a33abdbc867a9ef87fd157c1c23d8b9617cbddc4.jpg)
Figure 5. Nodal degree distribution of the Erdős co-author network and a similar-sized random graph. 
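For readers unfamiliar with the baseline that teams modified, standard PageRank can be sketched as a simple power iteration; this is a generic illustration on a hypothetical four-paper citation graph, not any team's modified measure:

```python
# links[p] = papers that p cites; a hypothetical four-paper citation graph.
links = {"P1": ["P2", "P3"], "P2": ["P3"], "P3": [], "P4": ["P3"]}
papers = list(links)
N = len(papers)
d = 0.85                         # the standard damping factor
pr = {p: 1.0 / N for p in papers}

for _ in range(100):             # power iteration toward the fixed point
    # Papers with no outgoing citations ("dangling" nodes) spread their
    # mass uniformly, so the total score stays equal to 1.
    dangling = sum(pr[p] for p in papers if not links[p])
    new = {}
    for p in papers:
        incoming = sum(pr[q] / len(links[q]) for q in papers if p in links[q])
        new[p] = (1 - d) / N + d * (incoming + dangling / N)
    pr = new

best = max(pr, key=pr.get)
print(best)                      # P3, the only paper cited three times
```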
# National University of Defense Technology, China: "The Research of Influence Based on the Characteristic of a Network"

The team from National University of Defense Technology took two very different approaches in analyzing the graphs, depending on whether the graph was directed. For the undirected co-author network, this team explained and computed many of the established network metrics for each node, including degree, betweenness, closeness, and eigenvector centralities. Rather than use all these metrics, the team provided a clear argument for first identifying those with the most authority (those who had published most often with Erdős), and then ranking these authoritative co-authors based only on eigenvector centrality.

To qualitatively validate the results produced by this algorithm, the team did online searches to learn about the careers of each of the top five mathematicians on their results list. For the citation network, this team gave a visual representation of the network, laid out on a temporal axis. The inclusion of time in the network visualization was an excellent example of how a relatively simple choice can make the results much easier to interpret.

For the analysis, this team's approach to the citation network relied on the fact that the resulting network is a directed acyclic graph. This team then attempted to identify the most influential paper by examining four different centrality measures, only to discover that their results were inconsistent.

From there, they determined that a topological sorting algorithm would substantially decrease runtime compared to a matrix-multiplication method such as PageRank.

Leveraging the topological structure of the graph, the team defined and computed a contribution coefficient that took both the number of citations and the timing of those citations into consideration. 
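The topological-sort idea can be sketched as follows; the one-pass score propagation below uses a purely hypothetical contribution rule (the team's actual coefficient also weighted citation timing):

```python
from collections import deque

# cites[p] = papers that p cites; a small hypothetical acyclic citation graph.
cites = {"A": [], "B": ["A"], "C": ["A", "B"], "D": ["B"]}
cited_by = {p: [] for p in cites}
for p, refs in cites.items():
    for r in refs:
        cited_by[r].append(p)

# Kahn's algorithm: a paper is emitted once every paper citing it is emitted,
# so `order` runs from the newest papers toward the most-cited roots.
indeg = {p: len(cited_by[p]) for p in cites}
order, q = [], deque(p for p, d in indeg.items() if d == 0)
while q:
    p = q.popleft()
    order.append(p)
    for r in cites[p]:
        indeg[r] -= 1
        if indeg[r] == 0:
            q.append(r)

# Hypothetical contribution rule: each paper passes half of its accumulated
# score to every paper it cites; one pass suffices thanks to the ordering.
score = {p: 1.0 for p in cites}
for p in order:
    for r in cites[p]:
        score[r] += score[p] / 2

print(order[-1], max(score, key=score.get))   # A A: the root paper wins
```

A single pass in topological order replaces the repeated matrix-vector products of an iterative method, which is the runtime advantage the team exploited.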
They discussed how self-citation could influence their results, along with giving a modified model and a sensitivity analysis based on a range of values for their self-contribution coefficient.

Lastly, this team applied their algorithms to construct a directed ownership network of 500 U.S. media corporations and to identify the top media companies. After performing their analysis, they validated their results by looking at published business rankings. Following this, the team provided an insightful list of potential applications and a discussion of the benefits of using network science in business and military decision-making.

The judges were impressed by this paper's strong links between the theory, applications, and mathematics. The visualizations of their networks provided meaningful insights into their analysis, as shown in the network graph in Figure 6. They calculated many of the standard network measures; but instead of consolidating all of them, they presented a convincing argument for using only certain measures. Additionally, these students demonstrated that they understood the most predictable paths for solving the problems, and they showed how they could use inherent network properties to improve upon the more obvious choices. The judges were very impressed by this team's qualitative approach to validating their results through internet research.

# Southeast University, China (INFORMS Winner): "Who are the 20%?"

This team developed a model that they called the Relation Distance Model, which utilized three standard centrality measures (degree centrality, betweenness centrality, and closeness centrality) to construct a normalized centrality measure vector for each node in the network. The Euclidean distance from each node's centrality measure vector to an ideal vector was computed and used to determine the most influential nodes in the network. 
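The distance-to-an-ideal-vector idea just described can be sketched as follows, using made-up normalized centrality values rather than the team's data:

```python
# Hypothetical normalized centrality vectors per node:
# (degree, betweenness, closeness), each scaled to [0, 1] across the network.
vectors = {
    "node1": (0.9, 0.8, 0.7),
    "node2": (0.4, 0.9, 0.5),
    "node3": (0.2, 0.1, 0.3),
}
ideal = (1.0, 1.0, 1.0)   # a node maximal in every centrality measure

def distance_to_ideal(v):
    # Euclidean distance from a node's vector to the ideal vector.
    return sum((a - b) ** 2 for a, b in zip(v, ideal)) ** 0.5

# Most influential = smallest distance to the ideal vector.
ranking = sorted(vectors, key=lambda n: distance_to_ideal(vectors[n]))
print(ranking[0])         # node1
```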
The team did a nice analysis of their model's results and then used eigenvector centrality as a means of validation, which exposed a limitation of the model.

To analyze the citation network, the team developed a model that they called the Authority-Popularity Model. Each paper or node was weighted based on the impact factor of the publishing journal. The model then used the Modified PageRank Algorithm to determine a value representing the importance of each paper in the network, which the team classified as its level of authority or "depth of influence." Each node was also assigned a value based on the number of times per year that it was cited, which the team classified as its level of popularity or "width of influence." These two scores were then plotted on a log scale to visually segregate those papers that had both high authority and high popularity.

![](images/db435b76f6b6d9b5b095fae55078bb8d0610279c2fe2754db4a115216d7846fd.jpg)
Figure 6. Structure of the Erdos1 network as presented in the report from the Outstanding team from the National University of Defense Technology, China.

The Authority-Popularity Model was then applied to a co-star network that consisted of 15 movie stars and the links between them. The team recognized that this network would have weighted links (based on the rating of the movie) rather than weighted nodes, and adapted the model accordingly.

A particularly impressive feature of this paper was the strong use of visuals and graphics to clearly present the models and display results in a meaningful way. Figure 7 is an example. In addition, judges were impressed with the thorough development of each of the models, the analysis of the effects of significant parameters, and the candid discussion of the models' strengths and weaknesses. 
Finally, the judges appreciated the team's use of their model to propose an innovative method, based on the Pareto principle (the 80-20 rule), by which a researcher could quickly "boost" his or her influence in order to co-author with a leading figure in network science.

![](images/46704adac5d9403297aae565c584f07dc4c499529f6c59f80de90ba6aebb359b.jpg)
Figure 7. Portrait of the co-citation network by the Outstanding team from Southeast University, China.

# Tsinghua University, China: "Who is the Hidden Champion in a Network?"

This team performed standard network analysis on the Erdős co-author network, considering four standard centrality measures: degree centrality, eigenvector centrality, closeness centrality, and betweenness centrality. They did a very nice job of explaining each centrality measure and interpreting its meaning. The team evaluated the citation network both as a directed and as an undirected graph. While recognizing that the citation network is a directed network, they initially transformed it using symmetric relationships to find two related undirected networks, which they called a co-bibliography network and a co-citation network. The team then evaluated these related networks using the same centrality measures that they used for the co-author network. Next, the team analyzed the citation network as a directed graph, using applicable centrality measures such as the Modified Katz centrality.

The team then applied the methods that they used for the Erdős co-author network analysis to two different datasets: Chinese pop singers and Chinese movie actors. The network for singers had singers as vertices, with an edge between two singers if they recorded a song written by the same songwriter. In the network for actors, an edge represents the fact that two actors appeared in a movie together. 
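Networks like these shared-songwriter and shared-movie graphs are one-mode projections of bipartite data; a minimal sketch with hypothetical names:

```python
from itertools import combinations

# Hypothetical bipartite data: each movie maps to the list of actors in it.
movies = {
    "Film1": ["ActorA", "ActorB", "ActorC"],
    "Film2": ["ActorB", "ActorC"],
    "Film3": ["ActorC", "ActorD"],
}

# Project onto actors: link two actors whenever they share a movie;
# the edge weight counts the number of shared movies.
edges = {}
for cast in movies.values():
    for a, b in combinations(sorted(cast), 2):
        edges[(a, b)] = edges.get((a, b), 0) + 1

print(edges[("ActorB", "ActorC")])   # 2 shared films
```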
Upon completing this analysis, the team developed a new approach: constructing bipartite graphs of singers-songwriters and actors-films, and using network centrality measures to rank the popularity of each set.

The judges were impressed by the team's clear presentation of the problem and the report's thorough and well-explained analysis. The executive summary and introduction were extremely well written, and the paper concluded with a clear discussion of the strengths and weaknesses of the team's analysis. One innovative aspect of the paper was the team's development and utilization of a stress-majorization algorithm to produce a layout of the network that minimizes the distance between connected vertices, presenting a clear visual depiction of the network.

# Tsinghua University, China: "A Three-Dimensional Network Impact Analysis Model Based on Centralizing, Connecting and Spreading Characteristics"

This team from Tsinghua University, as indicated in the title, realized that the task of identifying the most influential node in a network depends on the definition of influence. This team divided the concept of influence into three characteristics:

- centralizing characteristics, which aim to identify the nodes with the most central location in the topology of the network;
- connecting characteristics, which aim to identify the nodes whose positions are crucial to the connectedness of the whole network; and
- spreading characteristics, which aim to identify the nodes that are most capable of promoting flow of information through the network.

For the majority of these characteristics, the team carefully selected established measures from the field of network science and thoughtfully explained how each measure was relevant to its assigned family of characteristics. 
Specifically, the measures of degree centrality, eigenvector centrality, and PageRank were chosen for the centralizing characteristics, while the measures of betweenness centrality, clustering coefficient, and a node-removal method for evaluating total loss were chosen for the connecting characteristics. After finding only one established measure, that of closeness centrality, for their spreading characteristics, the team presented the clear development of two new network measures, a spreading breadth index and a spreading depth index, both of which factor time into the flow of information through the network.

After calculating all nine of these measures for each node, the team used principal component analysis within each family of characteristics, reducing the nine measures to a 3-vector, which could be reduced further to a scalar by applying weights based on the goal of the analysis. This approach was then applied to the co-author network, the paper citation network, and users' comments on a Website for movies that are popular in China. Ultimately, the vectors for the most influential co-authors were presented visually on a three-dimensional graph. The team also presented very meaningful visualizations for their co-author and citation networks.

While many teams used similar sets of established network measures, the judges were impressed by this team's understanding and intuition about how each of these established metrics measured different aspects of influence. When they were not able to find established metrics that measured the elements that they were interested in capturing, this team developed new metrics and wrote detailed explanations of their measurement process. Additionally, this team presented a meaningful way to reduce all of these measures to a scalar, allowing ease of comparison and ranking of nodes. This talented team did a very strong job of connecting the mathematics to the more abstract meaning of influence through clear written prose. 
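The reduction pipeline described above (one PCA per family, then goal-dependent weights) can be sketched as follows; the data and weights here are made up for illustration and are not the team's:

```python
import numpy as np

# Rows = nodes, columns = nine hypothetical network measures
# (three per family: centralizing, connecting, spreading).
rng = np.random.default_rng(0)
X = rng.random((20, 9))

def first_component(block):
    # Project one family of three measures onto its first principal component.
    centered = block - block.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

# One PCA per family reduces the nine measures to a 3-vector per node ...
families = [first_component(X[:, i:i + 3]) for i in (0, 3, 6)]
vec3 = np.column_stack(families)          # shape (20, 3)

# ... which goal-dependent weights collapse to a scalar influence score.
weights = np.array([0.5, 0.3, 0.2])       # hypothetical weighting
scores = vec3 @ weights
print(int(np.argmax(scores)))             # index of the top-scoring node
```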
Additionally, their paper made excellent use of tables, charts, and graphical representations of their networks to convey results. See Figure 8 for an example of the team's graphics to display the citation network.

![](images/0ea740244828cbb1bf1a1ac0bf3a3e64270cba0e0012eacf42cabf8785809f4d.jpg)
Figure 8. Network of fundamental papers' co-citations, by the Outstanding team from Tsinghua University, China.

The judges realize that given the timeline of the competition, not every team will have an opportunity to tackle masterfully all elements of the problem. This team appears to have done some nice analysis to identify the most influential user based on comments on a Website for popular movies in China; however, this section of the report was not as strongly presented as the other applications. Additionally, although the team did a nice job of explaining some factors that may contribute to the sensitivity of their model, they did not follow through with any computational results in their report.

# Xidian University, China: "Methods of Measuring Influence Using Network Model"

The team from Xidian University did excellent work in

- laying out a set of criteria that one should consider in the evaluation of influence,
- mapping these criteria to their specific approach, and finally,
- translating the algorithm components to meaning in a social context.

They developed two new measures of centrality to account for network influence: importance degree (which combines degree centrality and clustering coefficient) and influence degree (which weights PageRank with the importance degree measure). They focus on these two dimensions of influence, which they attempt to describe: "[importance degree] reflects the researcher's ability to contribute to the ... network by contacting other researchers, while [influence degree]... shows [how] the researcher is affected by ... her/his partners and [how they] can ... [assert] her/his overall influence." 
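A combination of degree centrality and clustering coefficient in this spirit can be sketched as follows; the product form below is purely illustrative, and the team's actual importance-degree formula (including its piecewise time dependence) is their own:

```python
# Toy undirected co-author graph as adjacency sets; names are illustrative.
G = {"A": {"B", "C", "D", "F"}, "B": {"A", "C"}, "C": {"A", "B"},
     "D": {"A", "E"}, "E": {"D"}, "F": {"A"}}

def clustering(G, v):
    # Fraction of pairs of v's neighbors that are themselves linked.
    nbrs = list(G[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in G[nbrs[i]])
    return 2 * links / (k * (k - 1))

def importance(G, v):
    # Illustrative blend: degree centrality boosted by local clustering.
    deg = len(G[v]) / (len(G) - 1)
    return deg * (1 + clustering(G, v))

print(max(G, key=lambda v: importance(G, v)))   # A
```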
They validate their model with a network of actors who have collaborated with the Chinese movie star Tony Leung Chiu Wai. Analysis of importance degree and influence degree shows that these metrics reflect different dimensions of social influence.

Importance degree ranks nodes by combining degree centrality (number of links) and clustering coefficient (links among neighbors), which the team turns into a piecewise function to deal with time of collaboration. In their piecewise function, they account for the year of collaboration between each author and Erdős. They identify the group who collaborated with Erdős in his earlier years as "old researchers," for whom they note "their frequent and early cooperation help them develop and grow in the collaborative network...." One of the most innovative aspects of this team's solution was their thorough handling of the dimension of time on influence.

The team presents a powerful comparison of the two time periods to examine changes over time. They found that mathematicians with large differences in their influence before and after their collaboration with Erdős are "young researchers" (those with later collaboration dates). While these researchers were less integrated in the network because they joined the collaboration late, they were successful in creating connections with high-influence researchers, thus drastically improving their influence metrics.

Authors who lose influence over time are those who have many collaborators in the Erdős 2 network (2 steps away from Erdős) but do not integrate into the Erdős 1 network. Judges were uncertain about some elements of the piecewise equation, making this aspect of the paper difficult to judge.

This team found a creative solution to identify social influence. Their algorithms enabled them to analyze these data, providing useful insights about the influence dynamics. 
For example, they suggest that a researcher can enhance their influence degree by cooperating with highly-influential researchers, even if they are themselves low-influence individuals; and high-influence individuals will lose influence over time if they are unable to collaborate with the core community. Crisp, clear graphical displays such as that in Figure 9 helped the presentation of this paper.

![](images/80361747fe688da57a551fd3a381cf093a69009c612b38eb8c2ee807d22eb671.jpg)
Figure 9. Influence links in the co-citation network, from the paper by the Outstanding team from Xidian University, China.

# Conclusion

Among the 1,028 papers, there were many strong and innovative submissions that made judging both exciting and challenging. It was very gratifying to see so many students with the ability to combine modeling, science, and effective communication skills in order to understand such large, complex datasets and build viable network models for their analysis.

# Recommendations for Future Participants

- Answer the problem. Weak papers sometimes do not address a significant part of the problem. Outstanding teams often cover all the bases and then go beyond for some aspects of the problem.
- Manage your time. Every year there are submissions that do an outstanding job on one aspect of the problem, then "run out of gas" and are unable to complete their solution. Outstanding teams have a plan and adjust as needed to submit a complete solution.
- Coordinate your plan. It is obvious in weaker papers that the work and writing were split among group members, then pieced together into the final report. For example, the output from one model or one step in a process does not match the input for the next, or a section appears that does not fit with the rest of the report. The more your team can coordinate the efforts of its members and integrate the writing, the stronger your final submission will be.
- Do more than just model. 
The model itself is not the solution. Some weak papers present a strong model, then stop. Outstanding teams use their models to produce results, understand the problem, and recommend or produce a solution.
- Explain what you are doing and why. Weaker submissions tend to use too many equations and too few words. Problem approaches appear out of nowhere. Outstanding teams explain what they are doing and why.

# References

Erdős, P., and A. Rényi. 1959. On random graphs, I. Publicationes Mathematicae 6: 290-297.

Grossman, Jerry. 2014. The Erdős Number Project. http://www.oakland.edu/enp/. Accessed 1 May 2014.

"This page was last updated on April 29, 2014 (but subpages may have been updated more recently). However, the lists of coauthors and the various other statistics on this site are updated about once every five years. The current version was posted on October 20, 2010 and includes all information listed in MathSciNet through mid-2010. The next update will probably occur around 2015."

Hoffman, Paul. 1998. The Man Who Loved Only Numbers: The Story of Paul Erdős and the Search for Mathematical Truth. New York: Hyperion.

O'Connor, J.J., and E.F. Robertson. 2000. Paul Erdős. http://www-history.mcs.st-and.ac.uk/Biographies/Erdos.html.

# About the Authors

Chris Arney graduated from the U.S. Military Academy and served as an intelligence officer in the U.S. Army. His academic studies resumed at Rensselaer Polytechnic Institute with an M.S. (computer science) and a Ph.D. (mathematics). He spent most of his 30-year military career as a mathematics professor at West Point, before becoming Dean of the School of Mathematics and Sciences and Interim Vice President for Academic Affairs at the College of Saint Rose in Albany, NY. Chris then moved to Research Triangle Park, NC, where he served in various positions in the Army Research Office.

![](images/879f0c4c5ea8c5a9b9cc6eafceb200c2b44bc41b027e1cf504931b7e4ac04d10.jpg)

His technical interests include mathematical modeling, cooperative systems, pursuit-evasion modeling, robotics, artificial intelligence, military operations modeling, and network science; his teaching interests include using technology and interdisciplinary problems to improve undergraduate teaching and curricula. He is the founding director of COMAP's Interdisciplinary Contest in Modeling (ICM). In August 2009, he rejoined the faculty at West Point as the Network Science Chair and Professor of Mathematics.

![](images/45b0aa0affd9445bb69076044d7136bca578365797636f16eb3d94fc502b0190.jpg)

Kate Coronges received a Master's in Public Health and a Ph.D. in Human Health Behavior from the University of Southern California. She was an Assistant Professor in the Dept. of Behavioral Sciences and Leadership and a Research Fellow in the Network Science Center at the U.S. Military Academy for four years. Currently, she works as a Program Manager at the Army Research Office. Her research interests focus on the role of social and organizational network structures, and the dynamics of these networks, in the communication patterns and performance of teams, groups, and societies. She is active in shaping and building momentum towards specific domains of social science basic research important in military settings, ranging from small-team dynamics (such as shared decision-making and collective intelligence), to belief and behavior propagation in groups and communities, to public and global policy (to include energy, education, information security, and health care systems), and methodological challenges involved with modeling multidimensional and multifunctional systems.

Tina Hartley is an Academy Professor at the U.S. Military Academy and an active-duty officer. Her Ph.D. is in computational mathematics from George Mason University. Tina began her military career as an Air Defense Artillery Officer, and also served as an Operations Research Analyst. 
She is currently the Director of the Core Mathematics Program at the U.S. Military Academy. She has been an ICM judge for the past six years. + +![](images/233f7b0bcce0ddb407ed875681ed16d2624f1cb0df254960c27beb60a5a4c168.jpg) + +Jessica Libertini started her career as an engineer, earning a B.S. and an M.S. in mechanical engineering from Johns Hopkins University and Rensselaer Polytechnic Institute, respectively. She spent nine years at General Dynamics working on projects ranging from the design of submarines to the development of a multinational layered missile defense system. After earning her Ph.D. in applied mathematics from Brown University in 2008, Jessica left industry and began her + +academic career at the U.S. Military Academy, where she held the positions of Assistant Professor and National Research Council Fellow. While there, Jessica used her engineering background to motivate students to address large, open-ended, and meaningful questions, both in the classroom and as the coach of the competitive mathematics team. She has since served on the faculty at the University of Rhode Island and is currently an Assistant Professor of Applied Mathematics at Virginia Military Institute. Given her background in engineering, her research spans a wide variety of mathematical approaches to modeling, analyzing, and simulating medical, military, and physical applications; she also participates in the scholarship of teaching and learning, focusing on undergraduate pedagogy and the elements of a successful transition from high school to college. + +![](images/62e5765c4a07b528ad71c0a9b67a7ce5cbfbc9591eec5fb139181e5ac0cbf855.jpg) + +# Developing an Interdisciplinary Mindset with Students Through the ICM + +Heidi Berger + +Rick Spellerberg + +Mathematics Dept. 
Simpson College

Indianola, IA 50125

rick.spellerberg@simpson.edu

# Overview

This is an account of Simpson College's participation in the Interdisciplinary Contest in Modeling (ICM) from 2010 through 2014. During this period, 4,405 teams participated in the ICM, 156 of which came from the United States. Simpson College fielded 50 of those teams.

While Simpson College did not field any Outstanding teams over this period, it did field $33\%$ of the Finalist teams from the U.S. and $32\%$ of the Meritorious teams from the U.S. In 2014, Simpson College accounted for $48\%$ of the U.S. teams participating in the ICM.

This article describes what the Simpson College Mathematics Department has done to promote participation in the ICM and the outcomes our students have achieved.

# Part I: Building the Culture

At Simpson College, we pride ourselves on helping students learn oral and written communication skills, collaborative skills, problem-solving skills, critical thinking skills, time management skills, and research skills. As a faculty, we recognize that the ICM is a valuable experience for students in that it addresses the development of all of these skills. Through our promotional efforts, this competition has become the hallmark of our program.

We have been deliberate in promoting participation in this competition, starting with the recruiting of students into the program. A significant number of high school seniors who decide to join our program do so because of our success in the ICM. During 2010-14, an average of 16-18 first- and second-year students participated in the ICM. All of the students who participate in the ICM are aware of student outcomes, in terms of securing internships, full-time employment, Research Experiences for Undergraduates (REUs), and graduate school acceptance, that came as a direct result of their success with the ICM.
This has led to a very high level of persistence in participation in the competition.

To increase their chances of success, the students have learned to form interdisciplinary teams for the ICM. By bringing in students with different academic backgrounds, teams possess a wide variety of problem-solving strategies. Thus, it is not only mathematics majors who are involved in this competition: in 2010-2014, between 15 and 20 different majors were represented in the ICM each year.

The fact that so many disciplines are represented in the ICM annually led the institution to provide academic and financial support. Starting in 2013, students could receive general education credits for Collaborative Leadership and Information Literacy if they participated in the ICM. Furthermore, Simpson College's Student Government Association recognized the impact that this competition was having across campus and agreed to pay every team's registration fee for the ICM. Finally, in three of the past five years, the College's Public Relations Dept. has produced videos and articles about the competition that highlight the students' experiences and achievements.

We have now reached the point where a critical mass of students consistently participates in the ICM. This has led to a steady increase in the number of participating teams, and we expect these numbers to persist and grow over the coming years.

# Part II: During the Competition

To support our students, we work deliberately to ensure that they have a positive experience during the competition.

A few weeks prior to the ICM, the Mathematics Department builds a wiki for the competition. All logistical information is available to students at this site, including building hours, staff schedules, meal times, and links to resources on formatting their papers.
Students can also provide input on what they would like during the competition, such as food, snacks, and beverages.

On the Thursday evening before the competition begins, we hold a one-hour workshop on logistics and strategies for surviving the ICM. At this meeting, we introduce students to our computer archive of past ICM solutions, which they can use as a resource for formatting their papers. Experienced students provide tips for success, including time management, collaboration, and writing skills. The meeting leads into a "kick-off party," where light snacks are provided and the problems become available to the students on the COMAP website.

![](images/e1fe5d73eec4f6da8b3d1f0c50541ae6c63b89bde1d1575f1d12d9b637d045a3.jpg)
Figure 1. Students at Simpson's kick-off party.

Starting on the Friday afternoon of the ICM, every team is assigned its own classroom for the duration of the competition and is provided with laptops running appropriate modeling software. Faculty and staff across campus volunteer their time during the competition to give students access to buildings and to bring them snacks. Additionally, we have an IT person on call to assist with technological malfunctions (e.g., printer problems).

To build community spirit, we stock a break room with snacks and beverages. Additionally, we organize formal dinners on Friday, Saturday, and Sunday nights as a show of support for the students' efforts in the competition.

# Part III: Student Outcomes

Over the past decade, we have strived to build an interdisciplinary culture at Simpson College. COMAP's Interdisciplinary Contest in Modeling has played a major role in successfully building this environment, culminating in the results cited for 2010 through 2014.

During this time, our students have played an active role in motivating us to continue supporting their efforts in the ICM.
Indeed, students have persisted in participating in the competition in large numbers and have consistently strived to perform at the highest level.

Two students who exemplify this commitment to the ICM are 2013 graduates Laura Collins and Stephen Henrich. Laura triple-majored with Honors in Mathematics, Physics, and Chemistry and has just finished her first year in the Ph.D. program in Physics at Notre Dame. Stephen triple-majored in Mathematics, Biochemistry, and Philosophy. He has just finished his first year in an M.D./Ph.D. program at Northwestern University. Both students graciously agreed to provide a brief account of their experiences with the ICM and its impact on their career paths.

# Laura Collins

I participated in the ICM competition at Simpson College all 4 years that I was there, with a different team each time. My first year, my team received an Honorable Mention, and in the last three years my teams received Meritorious rankings for our papers. While I didn't really know what to expect from the competition my first year, every following year I was very excited to have the opportunity to work together with my classmates towards solving a challenging math problem. I even found myself wishing I could participate again this past year when the time came for the competition to start again.

Whenever I try to explain the ICM to my new classmates in graduate school, I mention that it's a challenging but very fun 96-hour math competition, and that by the end we had completed a 20-page research paper based on mathematical models. Almost immediately my classmates, who have never participated in a competition like the ICM, conclude that I am/was crazy. While they might have a point, I definitely think that it was a worthwhile experience and would participate in the ICM again in a heartbeat.

Participating in the ICM helped me grow not only as a mathematician, but as an overall problem solver.
The competition encouraged us to work together and think of new ways to solve real-world problems. Over my four years, I learned a lot about working together as a team, how I work with others under pressure with a deadline, and how important it is to recognize others' strengths and utilize them. The ICM is one of the many experiences I had at Simpson College that helped me realize that I enjoy working on challenging problems collaboratively with others. My experiences in the ICM competition convinced me that research would be an excellent career path for me, and so here I am in graduate school at the University of Notre Dame working towards a Physics Ph.D.

![](images/60bbf7acb867d41acb2c777a937d8db462c1d545c61f20388031b4b400774146.jpg)
Figure 2. Laura Collins.

# Stephen Henrich

My first time participating in the ICM mathematical modeling competition during my freshman year at Simpson was one of those rare pivotal experiences that will probably stick with me forever. I wasn't quite sure what to expect initially, but I was glad to discover that the competition would be much more than a mere test. We weren't asked to provide definitive answers to close-ended questions, and the judges didn't have an answer sheet. Instead, we were given the task of investigating, in depth, a real-world problem.

In 2010 our problem was the rapidly expanding multi-ton pile of plastic known as the Pacific Ocean Garbage Patch. Over the course of 96 hours, our team of freshman rookies learned a great deal from one another, consumed pounds of nutritionless food, got very little sleep, scrawled on and erased three whiteboards 15 times over, thought up numerous original ideas, and eventually scrapped most of them. Our final product was a new mathematical description of a relationship between atmospheric pressure and the density of plastic in the Patch, which earned a Finalist rank in the competition.
I walked away from the weekend feeling that I had experienced for the first time what life might be like as a professional problem solver: as a scientist, mathematician, politician, businessman, or other trained analytic specialist.

I participated in the competition three more times while at Simpson, always looking forward to that time of the year when the modeling competition would come around and Carver Hall would come alive with the excitement of our $20+$ teams of amateur problem solvers. My teams were able to tackle an electric vehicle integration proposal, solve a corporate conspiracy, and assess the state of the global energy crisis, earning Honorable Mention, Meritorious, and another Finalist ranking in the process.

![](images/7ead9db9e3ddffdc8a3c5eec74b66cf9ef4d7d9c08498f9f5a6a91be911e8660.jpg)
Figure 3. Stephen Henrich.

Looking back, it is clear that the competitions played a significant role in my decision to undertake a career path in academia as a medical researcher. These short but intense experiences provided a great deal of insight into the stimulating and ever-changing world of research. And each competition confirmed my suspicion that nothing would suit me better than to become a part of it.

# Conclusion

We believe that our participation level and student success in the ICM are the result of our intentional efforts to build an interdisciplinary mindset in our students. These efforts start with the recruitment of students to the program and continue with our mentorship and the structure we provide around the competition. The most exciting thing for us as faculty is to see the students follow our lead and run with it. To better understand the interdisciplinary mindset that our students have developed, please watch the video produced by our public relations office [Simpson College 2012].

It is the students who regularly choose the interdisciplinary problem and who form teams that are interdisciplinary in nature.
The mindset they have developed has allowed them to persist and excel in this competition.

# Reference

Simpson College. 2012. Math @ Simpson College. Video, 3:20 min. https://www.youtube.com/watch?v=EaIYEzzbTIM.

# About the Authors

![](images/9599fab5d5fc58e857d6bfd6e8681905888501701b8828c0b104156da2ec6b9a.jpg)

Rick Spellerberg was raised on an Iowa farm and graduated from Coe College in 1984 with degrees in Mathematics and Physics. He went directly on to the University of Iowa, where he completed his Ph.D. in Mathematics in 1990. Since then he has taught mathematics at Simpson College. A major focus of his outreach activities in recent years has been promoting COMAP's High School Contest in Modeling (HiMCM®). For the past four summers, he and co-author Heidi Berger have run a workshop called "Molding Mathematical Minds" that prepares a teacher and a team of their students for the HiMCM. This workshop was first funded through a SUMMA-Tensor grant from the Mathematical Association of America and most recently by the National Security Agency.

Heidi Berger earned her degrees in mathematics and physics at Coe College in 2002, a master's in mathematics at the University of Nebraska-Lincoln (UNL) in 2004, and a Ph.D. from UNL in 2008. She has just completed her sixth year at Simpson College. Aside from working with the MCM/ICM and the HiMCM workshop at Simpson College, Heidi is passionate about conducting undergraduate research in biomathematics. She has collaborated with Dr. Clint Meyer and about a dozen undergraduates on problems of population ecology, time scales calculus, and restoration processes of ecosystems. She has received numerous minigrants to support this work and is a co-director of CURM, the Center for Undergraduate Research in Mathematics.
![](images/bbff377fa76fa1d5972b4cdcd069b3f6052404e5fbbbdc66b6e541fb72c71623.jpg)

Team Control Number: 25142

Problem Chosen: A

2014 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

# Freeway Traffic Model Based on Cellular Automata and Monte-Carlo Method

# Summary

Based on cellular automata and the Monte Carlo method, we build a model to discuss the influence of the "Keep right except to pass" rule. First we break down the process of vehicle movement and establish corresponding sub-models: an inflow model for car generation, a vehicle-following model for one vehicle following another, and an overtaking model for one vehicle passing another.

Then we design rules to simulate the movement of vehicles in the sub-models. We further adapt the rules of our model to the keep-right situation, the unrestricted situation, and the situation where transportation is controlled by an intelligent system. We also design a formula to evaluate the danger index of the road.

We simulate the traffic on a two-lane freeway (two lanes per direction, four lanes in total) and a three-lane freeway (three lanes per direction, six lanes in total) via computer and analyze the data. We record the average velocity, overtaking rate, road density, and danger index, and assess the performance of the keep-right rule by comparison with the unrestricted rule. We vary the upper speed limit to analyze the sensitivity of the model and to see the impact of different limits. Left-hand traffic is also discussed.
Based on our analysis, we come up with a new rule combining the two existing rules (the keep-right rule and the unrestricted rule) for an intelligent system to achieve better performance.

# Contents

1. Introduction
   - 1.1 Terminology
   - 1.2 Assumptions
2. The Models
   - 2.1 Design of Cellular Automata
   - 2.2 Inflow Model
   - 2.3 Vehicle-Following Model
   - 2.4 Overtaking Model
     - 2.4.1 Overtaking Probability
     - 2.4.2 Overtaking Condition
     - 2.4.3 Danger Index
   - 2.5 Two Sets of Rules for CA Model
     - 2.5.1 Keep Right Except to Pass Rule
     - 2.5.2 Unrestricted Rule
3. Supplementary Analysis on the Model
   - 3.1 Design of the Acceleration and Deceleration Probability Distributions
   - 3.2 Design to Avoid Collision
4. Model Implementation with Computer
5. Data Analysis and Model Validation
   - 5.1 Average Velocity
   - 5.2 Average Velocity of Fast Cars
   - 5.3 Density
   - 5.4 Overtaking Rate
   - 5.5 Danger Index
6. Sensitivity Evaluation of the Model under Different Speed Limitations
7. Driving on the Left
8. Transportation under Intelligent System
   - 8.1 New Rule for Intelligent System
   - 8.2 Adaption of the Model
   - 8.3 Result of Intelligent System
9. Conclusions
10. Strengths and Weaknesses
    - 10.1 Strengths
    - 10.2 Weakness

References

Appendices

# 1 Introduction

Today, about $65\%$ of the world's population live in countries with right-hand traffic and $35\%$ in countries with left-hand traffic [worldstandards.eu 2013]. In countries with right-hand traffic, such as the USA and China, regulations require driving and walking to keep to the right side of the road. Multi-lane freeways in these countries often employ a rule that requires drivers to drive in the right-most lane unless they are passing another vehicle, in which case they move one lane to the left, pass, and return to their former travel lane.
This rule on driving and overtaking is referred to as the "Keep right except to pass" rule, or the keep-right rule, in our paper. The rule in countries with left-hand traffic is exactly mirror-symmetric to the keep-right rule ("Keep left except to pass"). So, what is the purpose of applying such a rule? Does the keep-right rule ameliorate freeway traffic conditions? Transportation free of the restriction of the keep-right rule (vehicles can choose either side for overtaking) is referred to as obeying the unrestricted rule. How does the keep-right rule perform compared with the unrestricted rule?

Based on the cellular automata model and the Monte Carlo algorithm, we establish a model to simulate freeway traffic under different conditions (under the keep-right rule or the unrestricted rule, in light traffic or in heavy traffic, with two or three lanes per direction). Our model is divided into three sub-models: the inflow model, the vehicle-following model, and the overtaking model. The inflow model employs the Poisson probability distribution to simulate the vehicle-generation process. The vehicle-following model introduces a special probability distribution that makes the simulation of one car following another more realistic. The overtaking model simulates overtaking behavior and defines a danger index to evaluate the safety risk of a given freeway. We also build an extended model for transportation under the control of an intelligent system.

We implement the model in MATLAB and obtain sufficient data. We examine the average velocity, the density, the overtaking rate, and the danger index, analyze their properties, and assess the performance of the keep-right rule by comparison with the unrestricted rule. In addition, we analyze the sensitivity of our model under different speed limits. It turns out that our model is robust.

Then we come to our conclusions, which are consistent with common sense.
We also put forward a new rule for transportation under the control of an intelligent system. + +Table 1: Notation + +
| Symbol | Meaning |
| --- | --- |
| $V$ | current velocity of the vehicle |
| $V_m$ | maximum velocity of the vehicle |
| $V_l$ | upper speed limit of the freeway |
| $V_0$ | velocity before overtaking |
| $V_1$ | velocity during the overtaking process |
| $G$ | distance between a vehicle and the vehicle ahead of it |
| $G_s$ | minimum gap required for safety |
| $G_0$ | minimum gap after the vehicle stops |
| $T_r$ | PIEV time (human reaction time) |
| $P_o$ | overtaking probability |
| $P_a$ | acceleration probability |
| $P_b$ | deceleration probability |
| $f$ | frictional force when braking |
| $d$ | danger index of one overtaking event |
| $D$ | danger index of the road system |
| $a$ | acceleration during overtaking |
| $a_p$ | component of the overtaking acceleration parallel to the lane |
| $a_d$ | available deceleration |
# 1.1 Terminology

- Two-lane road: Two lanes on the right half of the road, four lanes in total.
- Three-lane road: Three lanes on the right half of the road, six lanes in total.
- Danger index: An index designed in our paper to evaluate the danger of the road system.
- Minimum safety gap: The distance between two vehicles that is deemed safe in our model.
- Keep-right rule: The "Keep right except to pass" rule.
- Unrestricted rule: Vehicles are not restricted and can overtake others from either side.
- Free-driving style: When there are no vehicles nearby, drivers do not accelerate or decelerate deliberately, but their speed still fluctuates slightly.

# 1.2 Assumptions

- The road is straight and there is no bypass.
- The width of one lane is only enough for one vehicle.
- All vehicles have the same volume.
- There are only two kinds of vehicles on the road (fast ones and slow ones).
- The environment and climate are good for driving.
- Driving on the right is the norm.
- Pedestrians are ignored.

# 2 The Models

# 2.1 Design of Cellular Automata

Many earlier traffic simulations based on cellular automata (CA) [Wagner et al. 2005] indicate that the CA model is a feasible and effective way to emulate traffic flow. Space, time, and state are all discrete in cellular automata: the model divides the road into small rectangular cells, and time is divided into small units. This feature simplifies the simulation process significantly. Moreover, the status of a cell is controlled by its neighboring cells according to a set of rules, much as in real-life traffic, where a car's movement largely depends on the movement of its neighbors. It is therefore rational for us to apply cellular automata to our problem.

In our simulation, we divide each lane into 1000 cells. Each cell is 4 meters in both length and width and has two properties, the current velocity $V$ and the maximum velocity $V_{m}$.
A cell with $V = 0$ is treated as empty, since a car never comes to a stop in our crash-free simulation. We consider only one direction of the freeway for simplicity. Thus, a freeway of $n$ lanes is converted into an $n \times 1000$ matrix.

In our simulation, we employ two kinds of vehicles: faster ones to simulate cars and slower ones to simulate trucks.

For each lane, the first 6 cells are used as the car-generation area, traffic flow is observed in the last 10 cells, and traffic density is calculated over the last 500 cells. Our model updates once per second; the period $T = 1\,\mathrm{s}$ matches the average reaction time of a driver.

We discuss the basic processes of the CA model:

- Inflow Process: According to the inflow model discussed below, place vehicles in the vehicle-generation area.
- Acceleration Process: If $V < V_{m}$, a vehicle accelerates by $\Delta V$, and the new speed is $V' = V + \Delta V$.
- Deceleration Process: If the gap $G$ between a vehicle and the vehicle ahead of it (front-bumper-to-front-bumper distance, measured in cells; $G = +\infty$ when there is no vehicle ahead) is no more than $V$, the vehicle decelerates to $V' = (G - 1)/T$.
- Moving Process: Vehicles move forward by $V' \cdot T$ cells only when $G > G_{s}(V')$, where $G_{s}(V')$ is the minimum gap required for safety, defined later.

Specific rules will be set in the inflow model, the vehicle-following model, and the overtaking model to simulate traffic both with and without the Keep-Right-Except-To-Pass rule.

# 2.2 Inflow Model

The inflow model, or vehicle-generation model, simulates the stochastic arrivals of vehicles at the entrance of the freeway. For each lane, the first six cells of the cellular automaton are set as the vehicle-generation area.
We assume that the arrival of each vehicle obeys a binomial probability distribution. Let $t_s$ denote the sampling time interval and $N$ the total number of vehicle arrivals during $t_s$. Then $N$ approximately obeys the Poisson probability distribution. Let $P_{t_{s}}(N)$ be the probability of $N$; then

$$
P _ {t _ {s}} (N) = \frac {\lambda^ {N}}{N !} e ^ {- \lambda}, \quad N \geq 0.
$$

With $t_s$ equal to one second in our implementation, we assign the expectation $\lambda$ of $N$ values ranging from 0 to 3.6. Since $N$ is the number of vehicle arrivals per second, its expectation $\lambda$ effectively reflects the traffic condition: the smaller the $\lambda$, the lighter the traffic; the greater the $\lambda$, the heavier. Thus we are able to simulate different traffic conditions, light or heavy, by assigning corresponding values to $\lambda$. Once the value of $\lambda$ is set, we get the stochastic number of vehicles entering the freeway in each second of the simulation. Which lane each vehicle enters is then randomly assigned.

Our model supports two kinds of vehicles with different velocity ranges; the initial speed of all vehicles is set to $20\,\mathrm{m/s}$. This simplification does not weaken the results.

That is because the speed of all vehicles tends to converge toward a value controlled by the traffic density and by the distribution of acceleration probability, which is introduced later. When traffic density is low, vehicles can always accelerate freely to the maximum speed without worrying about collisions, so the convergence speed is near the highest speed allowed. When traffic density is high, all the lanes fill with vehicles, and the speed of the traffic flow is decided by the speed of the slowest vehicle in the lane, so the convergence speed is near the lower speed limit. This preliminary analysis of the convergence speed is justified by the later implementation of the model.
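The per-second arrival draw described above can be sketched as follows. This is a minimal Python sketch (the paper's own implementation is in MATLAB); `poisson_arrivals` uses Knuth's multiplication method, and the lane-assignment helper is our illustrative addition, not part of the paper:

```python
import math
import random

def poisson_arrivals(lam, rng=random):
    """Draw N ~ Poisson(lam): the number of vehicles arriving in one
    sampling interval t_s (Knuth's multiplication method, fine for small lam)."""
    limit = math.exp(-lam)
    n, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return n
        n += 1

def assign_lanes(n_vehicles, n_lanes, rng=random):
    """Randomly choose an entry lane for each arriving vehicle."""
    return [rng.randrange(n_lanes) for _ in range(n_vehicles)]

# Heavier traffic corresponds to a larger lam (the paper uses 0 to 3.6).
random.seed(0)
mean_arrivals = sum(poisson_arrivals(2.0) for _ in range(10000)) / 10000
```

Over many simulated seconds the sample mean approaches $\lambda$, so $\lambda$ directly tunes the simulated traffic density.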
The utilization of the Poisson probability distribution makes our inflow model realistic and practical. Because of the convergence tendency, the same-initial-speed policy yields simplification without harm to the simulation.

# 2.3 Vehicle-Following Model

The Federal Highway Administration of the United States Department of Transportation, in its Manual on Uniform Traffic Control Devices, defines a driver's reaction time as the PIEV time. PIEV time consists of four parts:

- Perception process: The driver perceives the change in the driving environment.
- Intellection process: The driver analyzes the information about the change.
- Evaluation process: The driver determines driving behavior based on the analysis.
- Volition process: The driver executes the driving behavior.

We apply the PIEV process in our vehicle-following model and overtaking model. In every time cycle, we first obtain the velocity and position of each vehicle and calculate the gap, and then determine the driving behavior (whether to continue following or to change lanes for overtaking). According to the chosen behavior, we compute the acceleration and update the speed and position.

The decision on driving behavior is based primarily on the current gap. If the gap $G$ is safe enough, acceleration is feasible; otherwise, the vehicle should slow down. Here, we define the minimum safe gap $G_{s}$ to be $T_{r} \cdot V$, where $T_{r}$ is the PIEV time and $V$ is the current velocity.

We assume that decisions on driving behavior follow certain principles:

- When $G > G_{s}$, the vehicle tends to accelerate (later we introduce a probability model to simulate this tendency), until it reaches the freeway speed limit or its maximum possible velocity;
- When $G < G_{s}$, whether to overtake or to follow is determined by the overtaking probability $P_{o}$ and the overtaking conditions (both discussed in the overtaking model).
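The gap-based decision principles above can be sketched as follows; this is a minimal Python sketch with $T_r = 1\,\mathrm{s}$, and the function names are ours rather than the paper's:

```python
def min_safe_gap(v, t_r=1.0):
    """Minimum safe following gap G_s = T_r * V, with T_r the PIEV time."""
    return t_r * v

def driving_decision(v, gap):
    """Coarse behavior choice based only on the current gap G, as in the text:
    free acceleration when G > G_s; otherwise follow or try to overtake
    (the latter resolved by the overtaking probability P_o and conditions)."""
    if gap > min_safe_gap(v):
        return "accelerate"
    return "follow-or-overtake"
```

For example, a vehicle at 20 m/s with a 30 m gap is free to accelerate, while the same vehicle with a 15 m gap must follow or attempt an overtake.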
When following, a vehicle can accelerate, decelerate, or keep its original speed. We introduce two parameters [Sun Yue 2005], the acceleration probability $P_{a}$ and the deceleration probability $P_{b}$. The higher the speed, the smaller the $P_{a}$ and the greater the $P_{b}$. Here $V_{l}$ is the upper speed limit of the freeway, and $V_{max}$ is the maximum speed the vehicle can reach. This probability model takes into account the fact that speeding cannot be ignored: when $V > V_{l}$, $P_{a}$ gets even smaller and $P_{b}$ even bigger, which makes speeding possible but unlikely. We use a stochastic variable $R$ for implementation:

- If $R < P_b$, the vehicle decelerates;
- If $R > 1 - P_{a}$, the vehicle accelerates;
- Otherwise, the vehicle maintains its current speed.

Based on this probability model, we create several rules for the cellular automaton. (The maximum possible speed of vehicle $i$ is $V_{max}$, the current gap is $G$, the minimum safe gap is $G_s$, the vehicle's velocity is denoted $V$, $P_a$ and $P_b$ are functions of the velocity $V$, and $P_a + P_b \leq 1$.)

- Free driving rule: If $G \geq G_s$,

$$
V' = \begin{cases} \min(V_{max}, V + 1), & \text{with probability } P_{a}; \\ \max(V_{min}, V - 1), & \text{with probability } P_{b}; \\ V, & \text{otherwise.} \end{cases}
$$

- Safe deceleration rule: If $G < G_{s}$ but moving forward will not cause a crash,

$$
V' = \max(V_{min}, V - 1),
$$

where $V_{min}$ is the posted lower speed limit.

- Crash-free rule: If moving forward would cause a crash, stop behind the vehicle ahead.

The values of $P_{a}$ and $P_{b}$ are listed in Table 2 for fast vehicles and in Table 3 for slow ones.

Table 2: Acceleration and Deceleration Probability for Fast Vehicles
| $V$ / cell·s$^{-1}$ | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- |
| $P_a$ | 1 | 0.8 | 0.7 | 0.5 | 0.3 | 0 |
| $P_b$ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.8 |
+ +Table 3: Acceleration and Deceleration Probability for Slow Vehicles + +
| $V$ / cell·s$^{-1}$ | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- |
| $P_a$ | 1 | 0.7 | 0.4 | 0 |
| $P_b$ | 0 | 0.2 | 0.4 | 0.8 |
# 2.4 Overtaking Model

# 2.4.1 Overtaking Probability

A driver decides whether to overtake another car with probability $P_{o}$. The probability $P_{o}$ depends on vehicle A and the vehicle B ahead of it. Let $V_{\max 1}$ be the maximum velocity of vehicle A and $V_{\max 2}$ be the maximum velocity of vehicle B. The probability $P_{o}$ satisfies:

$$
P _ {o} = \begin{cases} 1 - 0.9 \, e^{V_{\max 2} - V_{\max 1}}, & \text{if } V_{\max 2} < V_{\max 1}; \\ 0.1, & \text{if } V_{\max 2} \geq V_{\max 1}. \end{cases}
$$

It is rational to assume that the larger the difference in velocity, the more likely the vehicle is to overtake. This probability distribution reflects that tendency well.

# 2.4.2 Overtaking Condition

A driver cannot overtake at will. Overtaking can be dangerous, and under the Keep right except to pass rule a vehicle should be able to complete the overtaking, i.e., to return to its former lane. Thus, we place restrictions on overtaking.

Conditions for overtaking:

- the gap $G'$ on the target lane is larger than $G_{s}$, and
- the velocity of the vehicle is larger than that of the vehicle ahead.

# 2.4.3 Danger Index

Here we redefine the minimum safe gap $G_{s}$, using a different method, in order to calculate the danger index. The theoretical relationship between $G_{s}$ and the current velocity $V$ is

$$
\frac {1}{2} m V ^ {2} = f (G _ {s} - G _ {0}),
$$

where $f$ is the frictional force when braking and $G_{0}$ is the minimum gap after the vehicle stops.

Considering that normal driving speeds are below $200\,\mathrm{km/h}$ in reality, that drivers' accepted gaps are usually larger than the theoretical safe value, and for simplicity of computer implementation, we approximate $G_{s}$, as a function of the present speed $V$, by a function linear in $V$.
![](images/e9dd268071e9fbe7a4a1a5618f99cca8d19b40e0f24fabf31735d3dfe18764ea.jpg)
Figure 1: Relationship between safe gap and velocity

We set $G_0$ to 10 meters and use 0.7 as the friction coefficient. The linear relationship we obtain is

$$
G_{s} = 10 + 2.8 \times V
$$

When changing lanes to overtake, a vehicle spares part of its acceleration capacity to change direction, and only the rest remains available to cope with deceleration

![](images/dc63bea94bbffe70cd3add174812d1e95bb6308965636fdeefd368d73b4cb7a3.jpg)
Figure 2: Velocity Analysis

in an emergency. So the assessment of safety for lane-changing should differ from that for vehicle-following.

As shown in Figure 2, $V_{0}$ denotes the velocity before overtaking, $V_{1}$ denotes the velocity during the overtaking process, and $a$ is the acceleration during overtaking. Empirically, $V_{1} = V_{0} - 4\,\mathrm{m/s}$ (the vehicle decelerates during the lane-changing procedure for safety). Vehicles finish lane-changing in 1 s, and we calculate the component of $a$ parallel to the lane:

$$
a_{p} = V_{0} - V_{1} \cdot \cos \left( \arcsin \left( \frac{4}{V_{1}} \right) \right)
$$

Then the deceleration still available is

$$
a_{d} = 6.86 - a_{p} = 6.86 - \left( V_{0} - \sqrt{V_{0}^{2} - 8 V_{0}} \right)
$$

The value of the available deceleration $a_{d}$ changes only slightly as $V_{0}$ varies, so we set $a_{d}$ to $5.76\,\mathrm{m/s^{2}}$ for simplicity. $G_{s}$ changes to $10 + 3.4V$ accordingly.

We create a function to assess the danger coefficient of a vehicle per unit time:

$$
d = \left\{ \begin{array}{ll} 0 & \text{if } G_{s} - G_{r} < 0 \\ G_{s} - G_{r} & \text{if } G_{s} - G_{r} \geq 0 \end{array} \right.
$$

When $G_r \geq G_s$, we assume the danger is small enough to neglect, so the danger coefficient is set to 0. When $G_r < G_s$, we use the difference between $G_s$ and $G_r$ as the danger coefficient.
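A minimal Python sketch of the resulting lane-changing safe gap $G_s = 10 + 3.4V$ and the danger coefficient $d$ (the function names are ours; our simulation itself is written in MATLAB):

```python
def safe_gap_overtaking(v):
    """Minimum safe gap for lane-changing, G_s = 10 + 3.4 * V (Section 2.4.3)."""
    return 10.0 + 3.4 * v

def danger_coefficient(v, real_gap):
    """d = max(G_s - G_r, 0): zero whenever the real gap G_r exceeds G_s."""
    return max(safe_gap_overtaking(v) - real_gap, 0.0)
```

For example, a vehicle at 10 m/s needs a 44 m gap; with only 40 m available, its danger coefficient for that time step is 4.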
A higher danger coefficient indicates a more dangerous driving state.

The danger coefficients of vehicle-following are similar across road and rule conditions, so in the further discussion we consider only the danger coefficients of overtaking.

Now we define the danger index to indicate the risk of a certain road under a certain rule. Let $D$ be the sum of the danger coefficients of the overtaking events that happen within 300 s:

$$
D = \sum_{t=1}^{300} A \cdot d
$$

The danger index is the average of $D$ over all the vehicles. $A$ is a parameter defined as:

$$
A = \left\{ \begin{array}{ll} 1 & \text{when passing on the left} \\ 3 & \text{when passing on the right} \end{array} \right.
$$

According to research, if a left-hand-drive vehicle (one whose controls are located on the left-hand side) tries to pass on the right side, the driver's sight is restricted, which increases the danger. We assume that the danger of passing on the right side is three times that of passing on the left side, so we set $A$ to 1 when passing on the left and to 3 when passing on the right.

The danger index $D$ introduced here is the basis for evaluating the safety of the road in our model.

# 2.5 Two Sets of Rules for the CA Model

# 2.5.1 Keep Right Except to Pass Rule

We analyze the performance of the Keep Right Except to Pass rule by comparing the outcomes of the model under this rule with those of the model without such a restriction. Applying this rule requires some rules for the cellular automaton. (The rules are listed in descending priority; that is, if the first rule is satisfied, the following ones are neglected.)
- If the gap $G$ in the right lane is larger than $G_{s}$, change to the right lane;
- If the current gap $G$ is larger than $G_{s}$, apply the free-driving rule set in the vehicle-following model;
- If the gap $G$ in the left lane is larger than $G_{s}$, apply the overtaking model with probability $P_{o}$, and apply the vehicle-following model with probability $1 - P_{o}$.

# 2.5.2 Unrestricted Rule

Likewise, when we implement the model without such a restriction, another set of rules is needed. (The rules are listed in descending priority; that is, if the first rule is satisfied, the following ones are neglected.)

- If the current gap $G$ is larger than $G_{s}$, follow the free-driving rule employed in the vehicle-following model;
- If the overtaking conditions are satisfied and the gap $G$ in the left lane is larger than $G_{s}$, apply the overtaking model to pass on the left side with probability $P_{o}$, and apply the vehicle-following model with probability $1 - P_{o}$;
- If the overtaking conditions are satisfied and the gap $G$ in the right lane is larger than $G_{s}$, apply the overtaking model to pass on the right side with probability $P_{o}$, and apply the vehicle-following model with probability $1 - P_{o}$.

# 3 Supplementary Analysis of the Model

# 3.1 Design of the Acceleration and Deceleration Probability Distributions

The acceleration and deceleration probability distributions introduced in the vehicle-following model simulate the change in velocity during driving. The system self-adjusts the average velocity according to the density. When the density is small, the average velocity of the traffic flow is close to the average speed of a free-driving car, which follows the probability distribution. When the density is large, slow cars on the road decelerate the cars behind them; in other words, the slower cars in a lane determine the average velocity.
When the freeway is relatively crowded, the expectation of the slowest speed decreases and thus lowers the average velocity on the road.

# 3.2 Design to Avoid Collisions

While simulating heavy traffic, we design rules to avoid car crashes. Normally, a freeway imposes a lower speed limit, but when a vehicle finds itself too close to the one ahead, it may brake to avoid a collision regardless of the lower speed limit.

When the freeway is crowded, the frequency of braking to avoid collisions rises, so the average velocity drops below the lower speed limit.

# 4 Model Implementation on a Computer

Based on the cellular automaton model and a Monte Carlo algorithm, we implement our model in MATLAB. Starting from a simple situation, we first simulate a freeway of 2 lanes under the keep-right rule. Then, with a small change to the rules, we obtain the unrestricted 2-lane freeway model for comparison. We then extend the model to simulate a freeway of 3 lanes, again under both rule conditions. Moreover, we emulate traffic under a keep-left rule, under different speed limits, and under the direction of an intelligent system. We test these models with different inflow rates in order to see the influence of traffic heaviness. Supported by sufficient simulated data, we are able to evaluate the performance of the Keep Right Except to Pass rule in light and heavy traffic, including the tradeoffs between flow and safety, the average speed, the traffic density and the overtaking frequency. We further discuss the impact of a keep-left rule and that of an intelligent system. Figure 3 shows the spatiotemporal diagram of the vehicles on a three-lane freeway with an expected inflow rate of 0.5 veh/s and a 1:1 ratio of smaller cars to larger vehicles. The diagram records the positions of all the vehicles in every time cycle: red represents smaller cars, green represents larger vehicles, and every three columns stand for the freeway state in one time cycle.
![](images/fc90b8ce4b6d1e2087da04c06e3c1de49c14bfbc19cb3382ac516ccbc838f3de.jpg)
Figure 3: Spatiotemporal Distribution of Vehicles

# 5 Data Analysis and Model Validation

# 5.1 Average Velocity

Traffic flow is linear in the vehicle-generation rate, so we choose the average velocity of the vehicle flow to reflect the traffic efficiency. We analyze the statistics from the two-lane model and the three-lane model, each both under the keep-right rule and unrestricted. The relationships between the average speed and the inflow rate under the various conditions are shown in Figures 4 and 5.

![](images/8138e53a7166c8d8bff26dd8ae12c6fa14b2d8dcb5db3ee15820466995838e1b.jpg)
Figure 4: Average Velocity under Different Rules (2 lanes)

![](images/9fab02d0461688ab3c2eac0945b86660120b915d050fb14accc14d1dc58a08d1.jpg)
Figure 5: Average Velocity under Different Rules (3 lanes)

In the two-lane model, the keep-right rule clearly yields a higher average velocity in general. On a freeway of three lanes (by statistics) or more (by deduction), however, the keep-right rule does not improve the average velocity: the figures show that when the vehicle-generation rate exceeds 0.75 veh/s, the unrestricted rule outperforms the keep-right rule.

A high inflow rate may trigger a traffic jam, as the figures show. When the inflow rate is higher than 1.8 veh/s, the average velocity in both models goes below the lower speed limit of the freeway.

If the interference of other vehicles is neglected (that is, a vehicle drives on an empty freeway under the free-driving rule), the average speed, which we call the ideal speed, is $19.44\,\mathrm{m/s}$ for slow vehicles and $25.88\,\mathrm{m/s}$ for fast cars. (The data come from our MATLAB simulation.) Figures 4 and 5 show that when the inflow rate is low, traffic under the keep-right rule almost reaches the ideal speed, while the unrestricted rule performs worse.
We conclude from this analysis that on a three-lane freeway, the Keep Right Except to Pass rule raises the average velocity of the vehicle flow in light traffic but makes no improvement in traffic efficiency in heavy traffic. On a two-lane freeway, however, the Keep Right Except to Pass rule raises the average velocity of the vehicle flow prominently.

# 5.2 Average Velocity of Fast Cars

We calculated the average velocity of the faster cars in the three-lane model. We focus on the faster cars mainly to study to what extent they are blocked by slower vehicles.

![](images/6e021e9ed9408f3d03703d50d654feefd6689167255cf142170acd57e67549a7.jpg)
Figure 6: Average Velocity of Fast Cars

The general tendency is downward, for the following reasons:

- Large (slower) vehicles may block the road, limiting the speed of small cars.
- The more crowded the freeway is, the more the average speed is affected by the slowest vehicle.

The velocity goes up when the inflow rate is relatively low. At the very beginning, the inflow rate is so low that cars have almost no companions to overtake, so they move in the free-driving style. As the inflow rate rises within the lower range (0-0.5 veh/s), cars have more chances to overtake, so their acceleration probability rises and their average velocity tends to increase. The shape of the curve can also be interpreted as denser traffic (within a certain range) stimulating drivers' desire to overtake.
# 5.3 Density

![](images/55f8502ec66c99594f9a7d4de231fcea9b60fee9299ca21a9d6d4b74645f7986.jpg)
Figure 7: Density of Traffic in Each Lane (2 lanes, keep-right)

![](images/0e7852b9d71bb9472c7df4b8dfe6c2dc5b735d4aa68b08ade4a31b640c686a0d.jpg)
Figure 8: Density of Traffic in Each Lane (2 lanes, unrestricted)

![](images/dd0e36c88da523b90b9242949a7ce7c448f5e3d9adf0b96a785739453ff00163.jpg)
Figure 9: Density of Traffic in Each Lane (3 lanes, keep-right)

![](images/6f93cc5d22e29b24e6fce02f73b9fd701c5e1241972da2519962332bab224625.jpg)
Figure 10: Density of Traffic in Each Lane (3 lanes, unrestricted)

These four charts show the density in each lane under the different rules. We find that the keep-right rule causes unbalanced use of the road, which in reality may result in different degrees of wear across the lanes. Repairs of the different lanes can therefore be staggered, which reduces the harm caused by suspending a lane under repair.

# 5.4 Overtaking Rate

We count the total number of overtaking (passing) events in the three-lane model over five minutes.

![](images/6ad9972ec8d28b437b95a34007724fb47356486a971e470c2e4bdb5f7d965bf1.jpg)
Figure 11: Passing Events under Different Rules (3 lanes, 300 s)

Under the unrestricted rule, overtaking on the left and on the right have equal priority, so the rates on both sides are approximately the same.

Under the keep-right rule, most vehicles drive in the right lane when possible and vacate the left lane, which makes the overtaking requirements easier to meet and causes the number of passing events to be much greater than that of the unrestricted left lane. The high passing rate keeps faster cars from being limited by slower vehicles, which reflects the efficiency of the keep-right rule. On the other hand, frequent passing on the right lane greatly harms safety.

These data are very important for evaluating the danger index of the road system.
# 5.5 Danger Index

In light traffic, the danger index is low; in heavy traffic, the freeway is crowded and vehicles move relatively slowly, so the danger index is also low. Only when the

![](images/789cf1fea8068e9df252151b9ab0f1dbc2bc372707a43a04682e1597244abd68.jpg)
Figure 12: Danger Index under Different Rules (2 lanes)

![](images/62f19011a75b77c7076ae0e79ef9f84c1fb5fbe90937549c090124440e75df38.jpg)
Figure 13: Danger Index under Different Rules (3 lanes)

density is at a middle level is the danger index $D_{m}$ high. We learn from the charts that $D_{m}$ under the keep-right rule is apparently lower than without the restriction, in both the two-lane and the three-lane situations.

# 6 Sensitivity Evaluation of the Model under Different Speed Limits

We modify the upper speed limit of the freeway, and the results support the robustness of our simulation. We tested changing the upper speed limit from $32\,\mathrm{m/s}$ to $28\,\mathrm{m/s}$ and to $36\,\mathrm{m/s}$.

![](images/4f528f23f435ef1de8931c8aa6e0112ddbf99b4c32c43186a0588f8c379fa892.jpg)
Figure 14: Average Velocity under Different Speed Limits (3 lanes, keep-right)

![](images/f717799e1d0f14301aa8808cbbca0844aba849f801ce4ebade96d880fc4ddcd1.jpg)
Figure 15: Average Velocity under Different Speed Limits (3 lanes, unrestricted)

Although the speed limits differ, the data from the three models show a similar pattern: the lower the expectation of the vehicle-generation rate, the higher the

![](images/f36200a994f1341ef807177e2aecc48ff08d67ad60812dd50a7f823dbb9a4987.jpg)
Figure 16: Danger Index under Different Speed Limits (3 lanes, keep-right)

![](images/07ede851fb657d562768a263e1f2bae5d259546b4fdc010d36f76f8280cf7df1.jpg)
Figure 17: Danger Index under Different Speed Limits (3 lanes, unrestricted)

average speed. This fact indicates that our model is applicable to a wide range of situations.
We also calculate the corresponding danger indexes under the different speed limits, and the result is consistent with common sense: the higher the speed limit, the more dangerous the driving.

Changing the speed limit makes no remarkable difference to the behavior of our model.

# 7 Driving on the Left

We have discussed right-hand traffic; now let us consider left-hand traffic. The situation is exactly mirror-symmetrical to that of right-hand traffic. So we require the vehicles in this model to be right-hand drive and swap left and right in our former model. The situation of driving on the left is then simulated.

# 8 Transportation under an Intelligent System

# 8.1 New Rule for the Intelligent System

Based on our computer simulations, we propose a new rule for an intelligent system to achieve the best performance:

- When the inflow rate is lower than $1.5\,\mathrm{veh/s}$, a vehicle should follow the keep-right rule.
- Otherwise, it follows the unrestricted rule.

We explain why we choose this rule in the following part.

# 8.2 Adaptation of the Model

If the vehicle transportation on the roadway were fully under the control of an intelligent system, some conditions would change:

- The response time of a driver no longer matters.
- A vehicle no longer changes its speed randomly, but only when necessary.
- The danger of changing lanes decreases prominently.
- The risk of changing lanes from left to right and from right to left is the same, because under an intelligent system there is no blind zone caused by the driver's position in the car.
- The judgment on whether a car shall pass another is more scientific and less subjective.

The major goal of the intelligent-system model is to achieve a high level of traffic flow. We consider that an intelligent system does not get tired or distracted as humans do, so it makes no mistakes.
As a result, danger will not occur unless a vehicle itself breaks down, so in the aspect of safety we simply regard danger as a function of speed.

On the basis of the former analysis and the previous CA model, we establish some additional rules:

- Change the response time to $0.1\,\mathrm{s}$, which gives a smaller minimum safe gap between vehicles.
- No longer change speed randomly; instead, change the speed toward the set value. We adjust the free-driving speed-changing probability distribution ($p_a$ is the probability of accelerating, $p_b$ of decelerating) to:

Table 4: Acceleration and Deceleration Probability for Fast Cars
| $V$ / (cell·s$^{-1}$) | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- |
| $p_a$ | 1 | 1 | 1 | 1 | 0 | 0 |
| $p_b$ | 0 | 0 | 0 | 0 | 0 | 1 |
Table 5: Acceleration and Deceleration Probability for Slow Vehicles

| $V$ / (cell·s$^{-1}$) | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- |
| $p_a$ | 1 | 1 | 0 | 0 |
| $p_b$ | 0 | 0 | 0 | 1 |
- Change the overtaking probability $P_{o}$ from

$$
P_{o} = \left\{ \begin{array}{ll} 1 - 0.9 \cdot \mathrm{e}^{V_{\max 2} - V_{\max 1}} & \text{if } V_{\max 2} < V_{\max 1} \\ 0.1 & \text{if } V_{\max 2} \geq V_{\max 1} \end{array} \right.
$$

to

$$
P_{o} = \left\{ \begin{array}{ll} 1 - \mathrm{e}^{V_{\max 2} - V_{\max 1}} & \text{if } V_{\max 2} < V_{\max 1} \\ 0 & \text{if } V_{\max 2} \geq V_{\max 1} \end{array} \right.
$$

# 8.3 Results for the Intelligent System

When the inflow rate is low, the keep-right rule is better at raising the average velocity. This is easy to understand: under the unrestricted rule, slower vehicles do not change lanes except to overtake, so they may block the entire road, resulting in poor performance, whereas the keep-right rule provides faster vehicles with more chances to overtake.

When the inflow rate becomes high, the density on the road becomes unbalanced under the keep-right rule: the rightmost lane becomes so crowded that the average speed decreases greatly. Under the unrestricted rule, vehicles are evenly distributed across the road, so the freeway does not get that crowded.

# 9 Conclusions

Keeping to the right side of the road is an accepted traffic regulation in many countries, and some countries even make it a law. Through the establishment of

![](images/82d4178dffa7ae491b118f51e6e3918f7e3364b7355e155dbdf5b648b39004b8.jpg)
Figure 18: Average Velocity under Different Rules in an Intelligent System

a reasonable model and the simulation of actual road conditions, we find that the Keep Right Except to Pass rule can, to some extent, separate fast and slow vehicles into different lanes. Fast vehicles receive fewer constraints from the traffic flow, so the freeway's carrying capacity and people's travel efficiency are improved.
Although in our multi-lane simulation the unrestricted rule performs a little better in heavy traffic, it also brings far greater risk than the keep-right-except-to-pass rule, so we recommend the keep-right-except-to-pass rule.

The velocity limit directly influences traffic security: the higher the speed limit, the less secure the freeway. But it is irrational to lower the speed limit blindly, causing unnecessary loss of traffic efficiency. How to balance velocity and safety calls for further study of vehicle performance and accident frequency under different speed limits.

In countries like Britain and Japan, vehicles are mostly right-hand drive (the vehicle controls are located on the right-hand side), so the security risk is higher when passing on the left side. Therefore, these countries formulate a rule exactly mirror-symmetrical to the Keep Right Except to Pass rule, i.e., the Keep Left Except to Pass rule, to lower the incidence of traffic accidents.

When we look at the model under the control of an intelligent system, where crashes and collisions cannot happen, the keep-right rule yields a higher average velocity in light traffic and the unrestricted rule performs better in heavy traffic. Therefore, we put forward a new driving rule for transportation fully controlled by an intelligent system seeking the best performance: when the inflow rate is lower than 1.5 veh/s, a vehicle should follow the keep-right rule; otherwise, it follows the unrestricted rule.

# 10 Strengths and Weaknesses

Like any model, the one presented above has its strengths and weaknesses. Some of the major points are presented below.

# 10.1 Strengths

- Full consideration of the mental state of the driver

In the vehicle-following model, we fully consider drivers' overtaking psychology. The overtaking probability is greater when a fast car is behind a slow car, or when the velocity difference is larger.
In the free-driving style, the speed of a vehicle goes up or down according to its own probability distribution, which simulates the unpredictable slight changes in speed in real-world driving.

- Easy to assess the safety of the system

We exclude the possibility of crashes in our model, and instead use the danger index to assess the security of the system. This practice is consistent with the low probability of crashes in the real world.

# 10.2 Weaknesses

- Not accurate enough

The change unit (one cell) of the gap and the velocity is relatively big, which may harm the accuracy of the simulation.

- Values of some parameters are not very scientific

Some parameters lack real-life data, so we had to estimate them based on common sense.

# References

[1] SUN Yue, YU Jia, HU You-qiang, MO Zhi-feng. Microscopic Traffic Simulation Mathematical Model Based on Cellular Automata[J]. Journal of Chongqing University (Natural Science Edition), 2005, 28(5): 022.
[2] Kesting A, Treiber M, Helbing D. General lane-changing model MOBIL for car-following models[J]. Transportation Research Record: Journal of the Transportation Research Board, 2007, 1999(1): 86-94.
[3] MO Zhi-feng, YU Jia, SUN Yue. Poisson Distribution Based Mathematical Model of Producing Vehicles in Microscopic Traffic Simulator[J]. Journal of Wuhan University of Technology (Transportation Science & Engineering), 2003, 27(1): 73-76.
[4] Wagner P, Nagel K, Wolf D E. Realistic multi-lane traffic rules for cellular automata[J]. Physica A: Statistical Mechanics and its Applications, 1997, 234(3): 687-698.
[5] Manual on Uniform Traffic Control Devices[J].
[6] Why do some countries drive on the left and others on the right? http://www.worldstandards.eu/cars/driving-on-the-left

# Appendices

Here are the simulation programs we used to implement our model. For the different rules listed in our model, the xdeal.m function may differ slightly.
Here we give the code for the keep-right-except-to-pass rule with the 3-lane simulation.

# Main function:

```matlab
clear all;
global road1 Xsumsmall Xsumlarge Xsmallv Xlargev Xsumallcar Vaverage;
Xsumsmall = 0;
Xsumlarge = 0;
Xsumallcar = 0;
Xsmallv = 0;
Xlargev = 0;
Vaverage = 0;

global pass_right_count pass_left_count pass_right_v pass_left_v;
pass_right_count = 0;
pass_left_count = 0;
pass_right_v = 0;
pass_left_v = 0;
out = [];
for q = [0.1, 0.3, 0.5, 0.7, 1, 1.4, 1.8, 2.5, 3, 3.6]
    Xsumsmall = 0;
    Xsumlarge = 0;
    Xsumallcar = 0;
    Xsmallv = 0;
    Xlargev = 0;
    Vaverage = 0;
    road1 = zeros(2, 1000, 2);
    out = [out; q, Xmain(q)];
end
savefile = 'final';
save(savefile);
```

# Simulating different inflow rates function:

```matlab
function [out] = Xmain(qq)

global speedmax1 speedmaxs
speedmax1 = 7;
speedmaxs = 9;

global pass_right_count pass_left_count pass_right_v pass_left_v;
pass_right_count = 0;
pass_left_count = 0;
pass_right_v = 0;
pass_left_v = 0;
sumdst = 0;
global road1 Xsumsmall Xsumlarge Xsmallv Xlargev Xsumallcar Vaverage numoflane;
numoflane = 3;
Xsumsmall = 0;
Xsumlarge = 0;
Xsumallcar = 0;
Xsmallv = 0;
Xlargev = 0;
Vaverage = 0;
road1 = zeros(numoflane, 1000, 2);
Xdens = 0;
cdens = 0;

%h = figure(1);
%hold all;

global time;
for time = 1:700
    road1(:, 991:1000, :) = zeros(numoflane, 10, 2);
    Xbegin(qq, 0.5);
    for xdistance = 990:-1:1
        for lane = 1:numoflane
            if (road1(lane, xdistance, 1) == 0)
                continue;
            end
            Xdeal(lane, xdistance);
        end
    end
    %{
    for xdistance = 990:-1:1
        for lane = 1:2
            Xdraw(lane, xdistance, time);
        end
    end
    %}
    for xdistance = 991:1000
        for lane = 1:numoflane
            if road1(lane, xdistance, 1) ~= 0
                Xcount(road1(lane, xdistance, 1), road1(lane, xdistance, 2), time);
            end
        end
    end
    if mod(time, 50) == 0 && time > 400
        [xd, dst] = Xdensity();
        cdens = cdens + 1;
        Xdens = Xdens + xd;
        sumdst = sumdst + dst;
    end
end
Xaveragedens = Xdens / cdens;
Xaveragedens = Xaveragedens / (400 * 4); % cars per m
sumdst = (sumdst ./ cdens) / (400 * 4);
timeall = 300;
Vaverage = Xsumallcar / Xaveragedens;
Vaverage = Vaverage / timeall;
out_v = Vaverage;
out_d = Xaveragedens;
out_ns = Xsumsmall;
out_nl = Xsumlarge;
out = [out_v,out_d,sumdst',out_ns,Xsmallv,out_nl,Xlargev,pass_right_count,pass_right_v,
savefile = num2str(qq * 10);
save(savefile);
%saveas(gcf, 'myfig05.jpg');
```

Deal with different situations:

```matlab
function Xdeal( lane,xdistance)
global road1 numoflane;
v = road1(lane, xdistance, 1);
f_dis = Xcarahead(lane, xdistance, v);
if lane > 1
    l_dis = Xcarahead(lane - 1, xdistance, v);
end
if lane1&&lane16) prnd=16;
end
global road1;
A = road1(:, 1:8, 1);
B = road1(:, 1:8, 2);
S = size(road1);
R = randperm(8 * S(1));
for i = R(1:prnd) if A(i) == 0 if rand= 1
    Xr = p(1);
end
y = m;
% x = Xr + n - 1;
b = Xr - 1;
x = n + min([b, road1(m, n, 1) - 1]);
road1(m, n, 1) = max([speedmax1 - 3, min([b, road1(m, n, 1) - 1])]);
end
```

# Judge whether to overtake:

```matlab
function [ output_args ] = Xovertake(m,n)
global road1;
p = find(road1(m, n+1:min(n+30, 1000), 1));
S = size(p);
Xr = 30;
if S(2) > 1
    Xr = p(1);
end
v_Ahead = road1(m, n + Xr, 2);
v_me = road1(m, n, 2);
if v_me > v_Ahead
    b = 0.9;
else
    b = 0.1;
end
r = rand();
if r < b
    output_args = 1;
else
    output_args = 0;
end
end
```

# Free-driving style:

```matlab
function [ b,x,y ] = Xfree(m,n)
% judge if the car is free.
A = [1, 0.7, 0.4, 0; 0, 0.2, 0.4, 0.8];
B = [1, 0.8, 0.7, 0.5, 0.3, 0; 0, 0.1, 0.2, 0.3, 0.4, 0.8];
global road1;
global speedmax1;
offset = speedmax1 - 4;
if (1)
    b = 1;
    x = m;
    if road1(m, n, 2) == 6
        r = rand;
        if r < A(1, road1(m, n, 1) - offset)
            road1(m, n, 1) = road1(m, n, 1) + 1;
        elseif r > 1 - A(2, road1(m, n, 1) - offset)
            road1(m, n, 1) = road1(m, n, 1) - 1;
        end
    else
        r = rand;
        if r < B(1, road1(m, n, 1) - offset)
            road1(m, n, 1) = road1(m, n, 1) + 1;
        elseif r > 1 - B(2, road1(m, n, 1) - offset)
            road1(m, n, 1) = road1(m, n, 1) - 1;
        end
    end
    y = n + road1(m, n, 1);
else
    b = 0; x = m; y = n;
end
end
```

# Count the traffic flow:

```matlab
function Xcount(a,b,time)
global Xsumsmall Xsumlarge Xsmallv Xlargev Xsumallcar;
global speedmax1 speedmaxs;
if (time < 400 || time > 950)
    return
end
Xsumallcar = Xsumallcar + 1;
if b == speedmax1
    Xsumlarge = Xsumlarge + 1;
    Xlargev = Xlargev + a;
elseif b == speedmaxs
    Xsumsmall = Xsumsmall + 1;
    Xsmallv = Xsmallv + a;
end
end
```

Calculate the density:

```matlab
function [Xd,density] = Xdensity()
global road1;
ext = road1(:, 500:900, 1) > 0;
Xd = sum(sum(ext), 2);
density = sum(ext, 2);
end
```

# Calculate overtaking events from the left side

```matlab
function Xpleft(v)
global pass_left_count pass_left_v;
global time;
if time < 400
    return;
end
pass_left_count = pass_left_count + 1;
pass_left_v = pass_left_v + v;
end
```

# Calculate overtaking events from the right side

```matlab
function Xpright(v)
global pass_right_count pass_right_v;
global time;
if time < 400
    return;
end
pass_right_count = pass_right_count + 1;
pass_right_v = pass_right_v + v;
end
```
\ No newline at end of file
diff --git a/MCM/2014/A/26333/26333.md b/MCM/2014/A/26333/26333.md
new file mode 100644
index 0000000000000000000000000000000000000000..a6c51dedd843ee6284fcb7ac619d64b226765531
--- /dev/null
+++ b/MCM/2014/A/26333/26333.md
@@ -0,0 +1,1021 @@

Team Control Number

# 26333

Problem Chosen

A

# 2014 Mathematical Contest in Modeling (MCM) Summary Sheet

# Abstract

Our goal is a model that can evaluate the performance of the keep-right-except-to-pass rule and other alternatives by simulating the traffic flow on the freeway. We construct models to analyze five influencing factors. Then we integrate multiple criteria to judge the performance of nine rules using a fuzzy synthetic evaluation (FSE).

Our basic lane-changing model focuses on the behavior of a specific vehicle on the freeway. We carefully examine the vehicle's lane-changing behavior, an essential component of overtaking.

We extend our model with a cellular-automaton-based approach. We assume that the drivers will change the lane with a probability if the trigger and safety conditions are satisfied.
Using periodic boundary conditions, we seek to simulate a section of a long freeway, which is hardly influenced by real boundary conditions. In addition, we can accurately control the occupancy of the freeway. We can simulate the traffic flow under several conditions by varying the number of lanes, maximum speed limit, minimum speed limit and signaling behavior.

Four other basic rules, such as the free-overtaking rule, are examined by revising the laws governing the cells in the cellular automaton. Then we design five improved rules based on the basic rules, attempting to obtain an optimal rule.

We choose flow rate and average speed as traffic flow criteria, sharp braking frequency as a safety criterion, and satisfaction and standard deviation of speed as experience criteria. Then we use a fuzzy synthetic evaluation technique to integrate these criteria to determine the performance of each rule. We find that in a light traffic case, a partial-assigned-lane-and-keep-right rule performs the best, while in a heavy traffic situation, a different-speed-limit-on-each-lane rule is preferred.

We change the probability of lane-changing to adjust our model to a country like Great Britain. Moreover, we change that parameter to simulate a freeway fully controlled by an intelligent system and observe small deviations.

Additionally, we refine our extended model to consider ramps. We adopt open boundary conditions and assume that the vehicles flowing in are Poisson-distributed. Finally, we change parameters to analyze freeways with ramps under different conditions.

# Keep Right to Keep “Right”

Team #26333

February 10, 2014

# Abstract

Our goal is a model that can evaluate the performance of the keep-right-except-to-pass rule and other alternatives by simulating the traffic flow on the freeway. We construct models to analyze five influencing factors.
# Contents

# 1 Introduction

1.1 Restatement of the Problem
1.2 Literature Review

# 2 Assumptions and Justifications

# 3 Notations

# 4 Model Overview

# 5 The Keep-Right-Except-To-Pass Model

5.1 The Basic Lane-changing Model
5.2 The Extended "Ring Road" Model
5.3 The Refined Model with Ramps

# 6 Results: Influencing Factors

6.1 Variables and Criteria
6.2 Number of Lanes
6.3 Maximum Speed Limit
6.4 Minimum Speed Limit
6.5 Signal Before Shifting
6.6 Conclusions

# 7 Results: the Optimal Rule

7.1 Basic Rules of Overtaking
7.2 Criteria and Single Criterion Analysis for Basic Rules
7.3 Fuzzy Synthetic Evaluation for Basic Rules
7.4 Improved Rules of Overtaking
7.5 Fuzzy Synthetic Evaluation for All Rules
7.6 Conclusions

# 8 Sensitivity Analysis

8.1 Percentages of Vehicles
8.2 Probability of Randomization
8.3 Probability of Willing to Change Lane

# 9 Further Discussions

9.1 Modifications for Countries Where Driving on the Left Is the Norm
9.2 Modifications for An Intelligent System
9.3 Additional Research on the Refined Model with Ramps

# 10 Strengths and Weaknesses

10.1 Strengths
10.2 Weaknesses

# 1 Introduction

A freeway is a highway designed for high-speed traffic, providing an unhindered flow of vehicles with no traffic lights or intersections.[1] Typically, a freeway has several advantages over ordinary roads, such as higher speeds and higher traffic volume. The Keep-Right-Except-To-Pass rule, also known as "Slower Traffic Keep Right", is often employed in right-hand traffic in order to raise the quality of traffic flow, especially on freeways.[2] An effective rule not only increases the utilization of freeways but also enhances driver satisfaction.
Thus, it would be worthwhile to design another rule that outperforms the current one. In this paper, we simulate different overtaking rules and compare them in order to determine the optimal rule.

# 1.1 Restatement of the Problem

We are required to build a mathematical model to analyze the performance of the keep-right-except-to-pass rule and other alternatives. We decompose the problem into two sub-problems:

- Build a model that can simulate the overtaking process.
- Propose a mathematical criterion to determine the performance of a specific rule.

In the first step, we seek to build a model whose inputs are speed limits and other factors. Most importantly, the model should reflect the mechanism of the given rule. We can then change the inputs to run several simulations, change the mechanism to apply alternative rules, and finally obtain the outputs of the model.

In the second step, we use the outputs of our model to propose a mathematical criterion for evaluating different rules, considering the trade-off between traffic flow, safety and other factors. We design other rules and determine which rule is best.

We then adjust our model to apply it to countries like Great Britain, and we also consider the influence of an intelligent system.

# 1.2 Literature Review

A model for the simulation of freeway traffic is essential to studying the performance of the rule. The German physicists Nagel and Schreckenberg built a theoretical model for the simulation of freeway traffic: a simple cellular automaton model for road traffic flow known as the "N-S model".[3] They defined a one-dimensional lane with two kinds of boundary conditions (open or periodic). In their model, each site may either be occupied by one vehicle or be empty, and each vehicle has an integer velocity between zero and $v_{max}$.
During each time-step, four sub-steps are performed: acceleration, slowing down, randomization and car motion.

After the single-lane model was established, several scientists devoted themselves to building a multi-lane model. The main difficulty was to set up the rules for shifting to a neighboring lane. Rickert et al. introduced a model with two parallel lanes.[4] Several conditions have to be fulfilled before a vehicle changes lanes: (1) another vehicle is in the way, (2) the other lane is better, and (3) no collision will happen. They simulated the model using a cellular automaton, and the results were quite reasonable.

As a matter of fact, a multi-lane model does not have to be symmetric. Differences may include different speed limits on each lane, different kinds of cars, etc. In 1997, Chowdhury et al. first simulated a model with different kinds of vehicles.[5] In their model, different maximum possible speeds are assigned to different kinds of vehicles. The results show that even if the proportion of "slow cars" is relatively low, "fast cars" can only move at a low speed.

Further studies were carried out by comparing model and reality. Knospe and Santen suggested that the influence of "slow cars" might have been over-estimated.[6] They recommended considering the impact of expectancy in the model.

# 2 Assumptions and Justifications

To simplify the problem, we make the following basic assumptions, each of which is properly justified.

- No pedestrians affect the vehicles on freeways. Pedestrians usually have no access to freeways, let alone crossing them.
- We ignore the force of crosswind on a vehicle when it is changing lanes. This impact is negligible compared with that of the head wind.

- Drivers cannot drive on the emergency lane. Typically, the emergency lane is not for the use of flowing traffic.
[7]
- We consider the freeway as completely flat, with no curves or slopes. This assumption greatly simplifies our model and allows us to focus on the nature of overtaking.
- All drivers act based on the same set of rules. We classify drivers as aggressive and non-aggressive, and both groups follow their respective rules.

# 3 Notations

All the variables and constants used in this paper are listed in Table 1 and Table 2.

Table 1 Symbol Table-Constants
| Symbol | Definition | Units |
| --- | --- | --- |
| $\lambda$ | Mean (expectancy) of the Poisson distribution | unitless |
| $p_{slow}$ | Probability that a vehicle slows down randomly | unitless |
| $p_{left}$ | Probability that a vehicle shifts to the left lane when possible | unitless |
| $p_{right}$ | Probability that a vehicle shifts to the right lane when possible | unitless |
| $p_{exit}$ | Probability that a vehicle needs to move off through the exit ramp | unitless |

Table 2 Symbol Table-Variables
| Symbol | Definition | Units |
| --- | --- | --- |
| $v_s$ | Speed of the vehicle | cell/time-step |
| $v_{expect}$ | Expected speed of the vehicle | cell/time-step |
| $v_{lf}$ | Speed of the vehicle ahead on the left lane | cell/time-step |
| $v_{lb}$ | Speed of the vehicle behind on the left lane | cell/time-step |
| $v_{rf}$ | Speed of the vehicle ahead on the right lane | cell/time-step |
| $v_{rb}$ | Speed of the vehicle behind on the right lane | cell/time-step |
| $t$ | Time | time-step |
| $D_{l,f,gap}$ | Left front gap | cell |
| $D_{l,b,gap}$ | Left back gap | cell |
| $D_{r,f,gap}$ | Right front gap | cell |
| $D_{r,b,gap}$ | Right back gap | cell |
| $vehicle_j^i$ | The $j$th vehicle on the $i$th lane | unitless |
| $v_j^i(t)$ | Speed of $vehicle_j^i$ at the $t$th time-step | cell/time-step |
| $v_{j,expect}^i$ | Expected speed of $vehicle_j^i$ | cell/time-step |
| $gap_j^i(t)$ | Front gap of $vehicle_j^i$ at the $t$th time-step | cell |
| $x_j^i(t)$ | Location of $vehicle_j^i$ at the $t$th time-step | cell |
| $lfgap_j^i$ | Left front gap of $vehicle_j^i$ | cell |
| $lbgap_j^i$ | Left back gap of $vehicle_j^i$ | cell |
| $rfgap_j^i$ | Right front gap of $vehicle_j^i$ | cell |
| $rbgap_j^i$ | Right back gap of $vehicle_j^i$ | cell |
| $lbv_j^i$ | Speed of the vehicle behind $vehicle_j^i$ on the left lane | cell/time-step |
| $rbv_j^i$ | Speed of the vehicle behind $vehicle_j^i$ on the right lane | cell/time-step |
| $\bar{v}(t)$ | Average speed at the $t$th time-step | cell/time-step |
| $N$ | Number of vehicles passing a certain point on the highway | unitless |
| $N(t)$ | Number of vehicles on the highway at the $t$th time-step | unitless |
| $N_j(t)$ | Number of vehicles on the $j$th lane at the $t$th time-step | unitless |
| $N_{shift}(t)$ | Number of vehicles changing lanes at the $t$th time-step | unitless |
| $t_{expect}$ | Expected time | time-step |
| $t_{actual}$ | Actual time | time-step |
| $a_{ij}$ | Value of the $j$th criterion of the $i$th rule | varies |
| $u_{j0}$ | Value of the $j$th criterion of the ideal scheme | varies |
| $r_{ij}$ | Relative deviation of the $j$th criterion of the $i$th rule | unitless |
| $v_j$ | Coefficient of variation of the $j$th criterion | unitless |
| $w_j$ | Weight of the $j$th criterion | unitless |
| $F_i$ | Relative deviation of the $i$th rule | unitless |
# 4 Model Overview

Most research on traffic flow can be classified as microscopic or macroscopic. Since macroscopic methods are difficult to apply to our problem, we approach it with microscopic techniques. Our study of the keep-right-except-to-pass rule takes several approaches.

Our basic model allows us to take a close look at lane-changing behavior. We focus on the incentive for changing lanes and the conditions for a successful lane change, treating changes to the left lane and to the right lane differently. This model gives us some intuition about the rule and serves as a stepping stone to our later study.

The extended model views the problem from a wider perspective. We consider a section of freeway, divide it into lattices, and run a cellular automaton to simulate the behavior of vehicles. The essence of the model is the set of laws governing the cells, which we derive from the analysis of our basic model. Moreover, using periodic boundary conditions, we treat the freeway as a "ring road" so as to control the density exactly; we therefore call it a "Ring Road" model.

Our refined model tackles a more realistic but more challenging problem. We add an entrance ramp and an exit ramp to our cellular automaton, together with laws for entering and exiting vehicles. We use a Poisson distribution to simulate vehicles moving in from the start point.

We use the extended "Ring Road" model as our standard model, and all results have it at their core.

# 5 The Keep-Right-Except-To-Pass Model

We start with the idea of the basic model. Then we present the cellular automaton and explain the algorithm. Finally, we introduce our additional work in the refined model.

# 5.1 The Basic Lane-changing Model

The basic model is a microscopic approach.
A typical overtaking maneuver consists of five actions: signal for three seconds, change lanes, accelerate, signal again, and change back to the former lane. Among these actions, lane-changing is the most crucial part. An analysis of lane-changing behavior may shed light on what determines the quality of traffic flow.

# 5.1.1 Changing to the Left Lane

When deciding whether to overtake, the driver first decides whether to change to the left lane. There are two main considerations [8]:

- a reason, or trigger, consideration
- a safety consideration

The former means that the vehicle ahead moves slowly enough to trigger the driver to overtake it. The latter means that the driver takes safety into account: if a high-speed vehicle is approaching on the left lane, he stays in the current lane to avoid a collision. Based on these considerations, we can introduce some mathematical intuition into the problem.

Figure 1 illustrates the situation in which the red car tends to change to the left lane.

![](images/b3c0b0a204118ae3a24fffad0aa369078b3f0c38c9beeda98a392a310c0a7c52.jpg)
Figure 1 Change-to-the-left-lane

Based on the trigger consideration, the speeds satisfy

$$
v_{expect} > v_s
$$

In terms of safety, the left back gap should satisfy

$$
D_{l,b,gap} > (v_{lb} - v_s)\,t
$$

Additionally, the red car attempts to accelerate; it would be unreasonable to change into the passing lane only to slow down, which would happen if a vehicle were close ahead on that lane. Therefore, the left front gap should satisfy

$$
D_{l,f,gap} > (v_s - v_{lf})\,t
$$

The expressions above give the basic conditions for a change-to-the-left-lane maneuver.

# 5.1.2 Changing Back to the Right Lane

After accelerating and passing the slow vehicle, the driver tends to change back to the former lane due to the keep-right-except-to-pass rule.
However, this lane-changing behavior is subject to the following constraints:

- There is no incentive to pass the car ahead in the current passing lane. If there were, on a multi-lane freeway the driver would prefer to continue overtaking and change to the next passing lane on the left rather than change back to the former lane.
- It is safe to change back. The driver must have passed the slow car and be sure that he will not collide with it when changing back.
- After changing back to the former lane, the driver can maintain his relatively high speed. Otherwise, he will intend to pass more than one car in a single overtaking process.

The first constraint can be treated like the changing-to-the-left-lane situation above; the other conditions can be stated as the following mathematical expressions.

Due to the safety consideration, the right back gap should satisfy

$$
D_{r,b,gap} > (v_{rb} - v_s)\,t
$$

Due to the intention to keep a relatively high speed after changing back, the right front gap should satisfy

$$
D_{r,f,gap} > (v_s - v_{rf})\,t
$$

Figure 2 illustrates this change-back-to-the-right-lane situation.

![](images/bc5eefe73548a531b87686b1ddc365c0090639e4a8c514960aa68e8eae40ba70.jpg)
Figure 2 Change-back-to-the-right-lane

# 5.2 The Extended "Ring Road" Model

In order to understand how the rule works in a traffic flow, we have to analyze the behavior of vehicles over a relatively long stretch of road, which better resembles a freeway. One intuition for modeling the problem is to think of it as a stochastic process. Therefore, we use a cellular automaton to simulate the behavior of vehicles on a freeway.

A cellular automaton is a discrete model that describes the time development of a system; it is discrete because it treats time as a discrete variable. The model requires an initial configuration and a set of fixed laws that determine how the system develops.
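The gap conditions of Sections 5.1.1 and 5.1.2 can be collected into two small predicates. The sketch below is a minimal Python illustration with hypothetical argument names mirroring the symbols ($v_s$, $D_{l,b,gap}$, etc.), speeds in cells per time-step and a one-time-step horizon by default:

```python
def can_change_left(v_s, v_expect, v_lf, v_lb, d_lf_gap, d_lb_gap, t=1):
    """Trigger + safety conditions for moving into the left (passing) lane."""
    trigger = v_expect > v_s                   # the vehicle ahead is too slow
    safe_behind = d_lb_gap > (v_lb - v_s) * t  # left-rear car will not close the gap
    safe_ahead = d_lf_gap > (v_s - v_lf) * t   # room to accelerate in the left lane
    return trigger and safe_behind and safe_ahead

def can_change_right(v_s, v_rf, v_rb, d_rf_gap, d_rb_gap, t=1):
    """Safety + keep-speed conditions for returning to the right lane."""
    safe_behind = d_rb_gap > (v_rb - v_s) * t  # right-rear car will not close the gap
    keep_speed = d_rf_gap > (v_s - v_rf) * t   # will not immediately catch the car ahead
    return safe_behind and keep_speed
```

Both predicates are pure gap comparisons, which is what allows them to be evaluated cell-by-cell inside the cellular automaton introduced next.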
At every time-step, the cellular automaton advances one step and all the laws are applied.

# 5.2.1 Assumptions of the Model

The following general assumptions are based on common sense and are used throughout our model.

- Drivers obey the rules with a probability. In real life a driver might not change lanes even if all the conditions are satisfied. We assume that a driver changes to the left lane and to the right lane with probabilities $p_{left}$ and $p_{right}$ respectively when possible.
- All drivers tend to drive as fast as possible while keeping a safe following distance. Nearly every driver wants to drive faster on the freeway as long as there is enough time to react if the vehicle ahead decelerates. The maximum speed is limited by the vehicle type and the speed limit of the freeway.
- Drivers are "myopic": each driver can only see one vehicle in front of him, one vehicle behind him, and several vehicles on the neighboring lanes. A driver sees the vehicle ahead directly and the vehicle behind through the rearview mirrors, and he can turn his head to see the cars on the neighboring lanes. He might notice cars two lanes away, but he does not take them into account when deciding whether to change lanes, so we simply suppose that drivers cannot see those vehicles or any vehicle much further away.
- Drivers make decisions only according to their own interest. Because drivers are "myopic", they cannot know the conditions of the whole freeway. Consequently, they make greedy decisions in order to traverse the freeway in a shorter time.

Additionally, in order to implement the cellular automaton, we propose the following assumptions:

- Lane-changing does not cost additional time. Obviously, vehicles change lanes while moving.
Although the travel distance of a lane change seems longer than moving within a single lane, drivers tend to accelerate while changing lanes. Thus, we simply suppose that a lane change costs the same amount of time as moving within one lane. This assumption helps greatly in building a cellular automaton.
- Each cell represents a $4\mathrm{m} \times 6\mathrm{m}$ area. The road is 2000 cells long and each lane is 1 cell wide; each array of cells represents one lane of a multi-lane freeway. We choose a length of $12\mathrm{km}$ as a trade-off between time complexity and the completeness of the model.
- Every time-step represents 1 second. Such an assumption is made in nearly all cellular automaton approaches.
- We run 20000 time-steps and analyze the last 1000. This ensures that we obtain steady-state results.
- While all vehicles tend to reach their maximum speed, every vehicle randomly slows down with probability $p_{slow}$. This randomization is a characteristic of traffic flow.
- Acceleration is done steadily, while any kind of deceleration can happen within one time-step. Steady acceleration is an energy-saving behavior, and drivers decelerate to avoid possible collisions.

# 5.2.2 Characteristics of Vehicles

We classify the vehicles into three groups:

- Cars: small vehicles that can reach high speeds.
- Buses: large vehicles used for carrying people; their speeds can be relatively high.
- Trucks: large vehicles used for carrying heavy goods; they can only reach lower speeds.

Then we define the characteristics of the three types of vehicles:

- Occupancy: Each car occupies one cell. Since a typical car is $3.6\mathrm{m}-4.6\mathrm{m}$ long, it cannot fully occupy a cell; we place the car in the middle of its cell and treat the space in front of and behind the car as safe distance. Accordingly, each bus and each truck occupies two cells, with safe distance preserved.
- Maximum speed: At each time-step, a car can move at most 6 cells, i.e., its maximum speed is $129.6\mathrm{km/h}$. Buses can move at most 5 cells per time-step, since their size prevents higher speeds. Trucks carry heavy goods, so they have the lowest maximum speed; we suppose it is 3 cells per time-step, which equals $64.8\mathrm{km/h}$. These maximum speeds are properties of the vehicles themselves; where the legal speed limit is lower, vehicles drive no faster than the limit. In China, the maximum freeway speed limit is $120\mathrm{km/h}$ and the minimum is $60\mathrm{km/h}$. The role of the speed limit is examined by comparing situations under different limits.
- Percentage: We assume that cars account for $60\%$ of the traffic flow, buses for $30\%$ and trucks for $10\%$. This assumption is based on the data collected in [9].

The characteristics are listed in Table 3.

Table 3 Characteristics of Vehicles
| Type | Occupancy | Maximum speed | Percentage |
| --- | --- | --- | --- |
| Car | 1 cell | 6 cells/sec | 60% |
| Bus | 2 cells | 5 cells/sec | 30% |
| Truck | 2 cells | 3 cells/sec | 10% |
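Given the 6 m cell length and 1 s time-step of Section 5.2.1, the table's speeds convert directly to km/h; a small helper (Python, our own naming) makes the entries explicit:

```python
CELL_LENGTH_M = 6.0  # longitudinal size of a cell (Section 5.2.1)
TIME_STEP_S = 1.0    # one time-step is one second

def cells_per_step_to_kmh(v_cells):
    """Convert a cellular-automaton speed (cells/time-step) to km/h."""
    return v_cells * CELL_LENGTH_M / TIME_STEP_S * 3.6
```

For example, 6 cells/time-step corresponds to 129.6 km/h and 3 cells/time-step to 64.8 km/h, matching the maximum speeds above.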
# 5.2.3 Laws Governing the Cellular Automaton

Our cellular automaton is implemented by an algorithm whose essence is the set of laws applied sequentially in every time-step. In this subsection we introduce these laws. They are based on the previous analysis and are expressed from a computational perspective.

# Step 1: moving

These laws are set based on the assumptions of the model and the characteristics of the vehicles. We denote the $j$th vehicle on lane $i$ by $vehicle_j^i$.

(a) Determine the speed

The three laws below are applied sequentially, so we use $(t+\frac{1}{3})$ and $(t+\frac{2}{3})$ to denote the intermediate states.

i. Acceleration

![](images/8bb876eade3416d0b9422d54b8c17ca364b92603f25daf76d64e1359c5016c39.jpg)
Figure 3 Clarification of notation

All drivers tend to drive as fast as possible:

$$
\text{If } v_j^i(t) < v_{j,expect}^i, \quad \text{then } v_j^i\left(t + \tfrac{1}{3}\right) = v_j^i(t) + 1
$$

where $v_j^i(t)$ is the speed of $vehicle_j^i$ at time $t$ and $v_{j,expect}^i$ is its expected speed.

ii. Randomization

Every vehicle randomly slows down by 1 with probability $p_{slow}$:

$$
v_j^i\left(t + \tfrac{2}{3}\right) = v_j^i\left(t + \tfrac{1}{3}\right) - 1
$$

iii. Deceleration (because of other vehicles)

To maintain a safe following distance and avoid collisions, a cell cannot be occupied by more than one vehicle at the same time-step:

$$
\text{If } v_j^i\left(t + \tfrac{2}{3}\right) > gap_j^i(t), \quad \text{then } v_j^i(t+1) = gap_j^i(t)
$$

where $gap_j^i(t)$ is the gap between $vehicle_j^i$ and the vehicle ahead.
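The three speed laws above can be sketched as a per-lane update. This is a minimal Python illustration with our own naming; speeds and front gaps are in cells, and the vehicles of one lane are processed in parallel, as in the automaton:

```python
import random

def update_speeds(speeds, expect, gaps, p_slow, rng=random.Random(0)):
    """Apply the three Step-1 speed laws to every vehicle on one lane:
    accelerate by 1 toward the expected speed, randomly slow down by 1
    with probability p_slow, then cap the speed at the front gap."""
    out = []
    for v, v_exp, gap in zip(speeds, expect, gaps):
        if v < v_exp:                         # i. steady acceleration
            v += 1
        if v > 0 and rng.random() < p_slow:   # ii. randomization
            v -= 1
        v = min(v, gap)                       # iii. deceleration (front gap)
        out.append(v)
    return out
```

With $p_{slow} = 0$ the update is deterministic, which is convenient for checking the laws in isolation before adding the stochastic term.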
(b) Determine the location

We derive the locations from the speeds above:

$$
x_j^i(t+1) = x_j^i(t) + v_j^i(t+1)
$$

$$
gap_j^i = x_{j+1}^i - x_j^i - 1
$$

where $x_j^i$ is the location of $vehicle_j^i$.

# Step 2: lane-changing

These laws follow from the analysis of the basic lane-changing model in Section 5.1.

(a) Changing to the left lane

i. Based on the trigger criterion, the driver tends to change to the left if the vehicle ahead moves so slowly that he fails to reach his expected speed:

$$
gap_j^i(t) < v_{j,expect}^i
$$

ii. Taking the acceleration criterion into account, we have

$$
gap_j^i(t) < lfgap_j^i
$$

where $lfgap_j^i$ is the left front gap of $vehicle_j^i$.

![](images/a1e1a1cd85bf1b4537febc88303a3e99fe57cbace141f8dba40aedc62affc0ee.jpg)
Figure 4 Flow chart of the cellular automaton

iii. Lastly, considering safety, we have

$$
lbgap_j^i > lbv_j^i
$$

where $lbgap_j^i$ is the left back gap of $vehicle_j^i$ and $lbv_j^i$ is the speed of the vehicle behind on the left lane.

(b) Changing to the right lane

i. First check the conditions in (a): if any of them is not satisfied, the driver cannot change to the left lane and instead considers changing to the right.

ii. Considering safety, we have

$$
rfgap_j^i > v_j^i
$$

where $rfgap_j^i$ is the right front gap of $vehicle_j^i$.

iii. Considering safety with respect to the vehicle behind, we have

$$
rbgap_j^i > rbv_j^i
$$

where $rbgap_j^i$ is the right back gap of $vehicle_j^i$ and $rbv_j^i$ is the speed of the vehicle behind on the right lane.

The algorithm is diagrammed in Figure 4.

# 5.2.4 Modeling Using Periodic Boundary Conditions

To run a cellular automaton, we need to specify the boundary conditions and the initial condition. Boundary conditions determine how vehicles move into and out of our system.
Likewise, an initial condition determines the distribution of vehicles in our system and their speeds.

Inspired by Nagel and Schreckenberg's work [3], we tackle the problem with periodic boundary conditions. Periodic boundary conditions assume that vehicles moving out of the freeway immediately reappear at its start, so the total number of vehicles is constant during the dynamics. Thus, we can define an exact, constant system density and study the performance of the rule with varying density. Moreover, periodic boundary conditions turn our road into a closed system, similar to the case of vehicles moving on a circle; accordingly, we name our extended model the "Ring Road" model.

Although no such short ring road exists in reality, the model still holds because we can imagine a freeway made up of several equally partitioned sections with identical vehicles on each section. This situation satisfies periodic boundary conditions, and this is how the periodic boundary relates theory to reality.

# 5.3 The Refined Model with Ramps

Even though the "Ring Road" model is good enough to solve the problem, it fails to consider the effect of ramps. Consequently, we refine our model by adding entrance and exit ramps and applying open boundary conditions.

# 5.3.1 Adding Entrance and Exit Ramps

Vehicles may also use an entrance ramp to enter the freeway. The entrance ramp gives them a chance to accelerate toward their expected speed, but most ramps are too short to allow them to speed up to, say, $100\mathrm{km/h}$. As a result, vehicles on the right-most lane might slow down to yield to incoming vehicles, or incoming vehicles might find it hard to enter the freeway. Both cases can have deleterious effects on the quality of traffic flow. Likewise, vehicles have to decelerate in order to enter an exit ramp, and a similar problem occurs.
Figure 5 demonstrates a real freeway section containing ramps. We can see that an exit ramp is followed by an entrance ramp, and we add the ramps based on this layout.

The laws mentioned in the previous section remain valid; the only difference is how incoming vehicles enter the freeway. We add additional laws to govern the behavior of incoming vehicles. To introduce ramps into our cellular automaton, we make the following assumptions.

![](images/aa32dd89d6f96089099a8a87702ad16235fd3971fc0776f7dad2bdcf856154db.jpg)
Figure 5 A real freeway section containing ramps

# 5.3.2 Additional Assumptions in the Refined Model

- The exit ramp: We assume that the overlap between the freeway and the exit ramp ranges from the $850_{th}$ cell to the $900_{th}$ cell, i.e., $300\mathrm{m}$.
- The entrance ramp: We assume that the overlap between the freeway and the entrance ramp ranges from the $1100_{th}$ cell to the $1150_{th}$ cell, i.e., $300\mathrm{m}$.

# 5.3.3 Additional Laws

- Off-ramp law: If a vehicle wants to exit the freeway (how this is determined will be discussed later), it is not allowed to change to the left lane after it reaches the $700_{th}$ cell, and at the same time it slows down to 3 cells/time-step. If the vehicle is on the right-most lane between the $850_{th}$ cell and the $900_{th}$ cell, we assume that it can move onto the ramp at the next time-step if it wants to; since exit ramps in reality usually contain two lanes and the proportion of vehicles exiting at any one ramp is low, this assumption is reasonable. If the vehicle misses the exit, it is no longer governed by this additional law.
- On-ramp law: For a vehicle on the entrance ramp, if the laws of changing to the left lane are satisfied, it can enter the freeway during the next time-step. Otherwise, it continues to move forward.
# 5.3.4 Modeling Using Open Boundary Conditions

After we introduce ramps into our model, it is no longer reasonable to treat it as a closed system. Thus, we use open boundary conditions to determine how vehicles flow in. Considering that the amount of traffic is stochastic and the input of the system is discrete, a commonly used approach is to model the input of vehicles as a Poisson process. Consequently, we assume that the number of vehicles flowing in from the starting point in any interval of length $t$ is Poisson-distributed with mean $\lambda t$.

After moving into the system, a vehicle moves off through the exit ramp with probability $p_{exit}$. The number of vehicles flowing in from the entrance ramp in any interval of length $t$ is 0-1 distributed with mean $\lambda t$; in other words, we try to let the number of vehicles entering from the entrance ramp equal the number leaving through the exit ramp. However, as discussed above, some vehicles may fail to exit the freeway and some may fail to enter it. We view both as bad characteristics of the traffic flow; we analyze them and discuss the results in the next section.

# 6 Results: Influencing Factors

In this section, we first define light and heavy traffic. Then we explicitly define four factors and vary one factor at a time to analyze how each influences the performance of the Keep-Right-Except-To-Pass rule.

We run several simulations with the cellular automaton and find that the traffic flow can be classified into two regimes. The time-space diagram (Figure 6) demonstrates these two kinds of traffic flow; it shows the trace of every vehicle in the simulation. A gentle trace indicates a low speed; conversely, a steep trace indicates a high speed.
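Returning briefly to the open-boundary input of Section 5.3.4: the Poisson arrivals can be sampled step by step. The sketch below is a minimal Python illustration with our own naming, using Knuth's multiplication method (suitable for small $\lambda$) and marking each arrival as exit-bound with probability $p_{exit}$:

```python
import math
import random

def sample_arrivals(lam, p_exit, steps, seed=42):
    """For each time-step, sample (k, e): k vehicles enter at the start
    point (Poisson with mean lam) and e of them intend to leave via the
    exit ramp (each independently with probability p_exit)."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(steps):
        # Knuth's method: count uniform draws until the running
        # product falls below exp(-lam).
        threshold, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                break
            k += 1
        exiting = sum(rng.random() < p_exit for _ in range(k))
        arrivals.append((k, exiting))
    return arrivals
```

Over a long run the empirical mean of the per-step arrivals approaches $\lambda$, which is what lets the entrance-ramp input balance the exit-ramp outflow on average.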
![](images/13802c4bd100d090a6dcb89d3ff8007a95f3cc738a5a3b4add14057f2f07bfce.jpg)
(a) Time space diagram (occupancy = 0.1)
Figure 6

![](images/0a93ae46d85477547ca303817815cc5da7f21b83d08e0c52feac1e8fadb135fc.jpg)
(b) Time space diagram (occupancy = 0.4)

- From the left diagram, we can clearly see that vehicles with a high speed can maintain that speed, which indicates that they are not constrained by slower vehicles. This is a light traffic situation.
- From the right diagram, we can see that no vehicle reaches a high speed, which indicates congestion. This is a heavy traffic situation.

# 6.1 Variables and Criteria

We choose the number of lanes, the maximum speed limit, the minimum speed limit and signaling behavior as our variables. To judge the effectiveness of the rule, we propose the following criteria:

- Flow rate: the number of vehicles passing a point on a highway per unit time.[10]

$$
\text{flow rate} = \frac{N}{T}
$$

where $N$ is the number of vehicles passing a point on the highway in time $T$.

- Average speed: the average speed of all vehicles on the freeway over some specified time period.

$$
\bar{v}(t) = \frac{1}{N(t)} \sum_{j=1}^{3} \sum_{i=1}^{N_j(t)} v_j^i(t)
$$

where $N(t)$ is the number of vehicles on the freeway and $N_j(t)$ is the number of vehicles on the $j$th lane.

- Lane utilization ratio: the ratio of the number of vehicles on a lane to the total number of vehicles on the freeway.

$$
\text{lane utilization ratio}_j = \frac{N_j(t)}{N(t)}
$$

- Sharp braking frequency: we do not consider the occurrence of accidents in our simulation, since we view accidents as abnormal events and it is difficult to include abnormal events in microscopic models.
Instead, we use sharp braking frequency as an indicator of unsafety. A sharp braking event occurs when a vehicle's speed decreases by more than 2 cells/time step. When this happens in our simulation, we assume that an accident would be more likely in reality.

- Shift ratio: the number of lane changes per unit time.

$$
\text{shift ratio} = \frac{N_{shift}(t)}{N(t)}
$$

where $N_{shift}(t)$ stands for the number of vehicles changing lanes at the $t$th time step.

- Satisfaction: if a vehicle fails to reach its maximum speed, the driver's satisfaction decreases. We define the expected time $t_{\text{expect}}$ as the time it takes to drive a given distance at the maximum speed, and the actual time $t_{\text{actual}}$ as the time it actually takes to drive the same distance. We then derive the criterion by dividing the expected time by the actual time, so satisfaction ranges from 0 to 1.

$$
\text{satisfaction} = \frac{t_{expect}}{t_{actual}}
$$

- Standard deviation of speed: people may feel uncomfortable if their vehicle keeps accelerating and decelerating. We use the standard deviation of speed to measure this kind of discomfort.

$$
\text{std. deviation of speed} = \frac{1}{N(t)} \sum_{j=1}^{3} \sum_{i=1}^{N_{j}(t)} \sqrt{\sum_{t=1}^{T} \left[ v_{j}^{i}(t) - \bar{v}(t) \right]^{2}}
$$

# 6.2 Number of Lanes

# 6.2.1 Flow Rate

From the flow rate versus occupancy figure, we can obtain an optimal occupancy. At low occupancy, the number of cars is too small: although each car is more likely to reach its maximum speed, the road resources are underused, so the total flow rate is low. At high occupancy, the number of cars is too large, which causes congestion. We therefore expect an optimal occupancy in between and view it as the dividing line between light and heavy traffic.
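The criteria above can be computed directly from quantities recorded during a simulation run. The helper functions below are a minimal sketch under our own data layout (per-step speed lists and simple counters), not the paper's implementation.

```python
def flow_rate(n_passed, t_elapsed):
    """Vehicles passing a fixed detector per unit time."""
    return n_passed / t_elapsed

def average_speed(speeds):
    """Mean speed over all vehicles currently on the freeway (cells/step)."""
    return sum(speeds) / len(speeds)

def lane_utilization(lane_counts):
    """Share of all vehicles found on each lane."""
    total = sum(lane_counts)
    return [n / total for n in lane_counts]

def sharp_braking_count(prev_speeds, cur_speeds, threshold=2):
    """Vehicles whose speed dropped by more than `threshold` cells/step."""
    return sum(1 for v0, v1 in zip(prev_speeds, cur_speeds) if v0 - v1 > threshold)

def satisfaction(t_expect, t_actual):
    """Free-flow travel time over actual travel time, in (0, 1]."""
    return t_expect / t_actual
```

Aggregating `sharp_braking_count` over all time steps and dividing by the run length gives the sharp braking frequency; the shift ratio is computed the same way from lane-change counts.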
![](images/7dc41916e68d32f2e4b13e807da1a5d9b41d3307268d393798a01da0211b1b01.jpg)
Figure 7 Total flow rate under different occupancies (3 & 4 lanes)

From Figure 7, we can see that the optimal occupancy for both a 3-lane and a 4-lane freeway lies between 0.2 and 0.3. Thus, in the following analysis, we choose an occupancy of 0.1 to represent light traffic and an occupancy of 0.4 to represent heavy traffic.

![](images/996892b3ed1fabe0c714d505dab136d60fb143127ba25defaac773def53e07fc.jpg)
(a) Flow rate of each lane (3 lanes)
Figure 8

![](images/44561334b149b3527b049de3209eea081e3d729746200e2203cd9a5445c6abda.jpg)
(b) Flow rate of each lane (4 lanes)

We also calculate the flow rate of each lane, ordering the lanes from left to right. From the figures, we conclude that the optimal occupancy of the rightmost lane is lower than that of the leftmost lane. This tendency holds for both 3-lane and 4-lane freeways, although the difference is not significant.

# 6.2.2 Average Speed

![](images/aba187fe02e6742d883d43cdf63e4df624bc2d53e00b1fc8e55c70a8dc419f64.jpg)
Figure 9 Average speed under different occupancies (3 & 4 lanes)

As occupancy increases, the average speed tends to decrease due to congestion. This intuition is confirmed by the speed-occupancy curves in Figure 9. Surprisingly, the curves for 3 lanes and 4 lanes coincide! This result indicates that the number of lanes hardly changes the performance of the rule at a given occupancy.

![](images/0d90e5ba3861a8995ad87ddad464b7ab9e73e32b8361fe72f047211d2512b7c5.jpg)
(a) Average speed of each lane (3 lanes)
Figure 10

![](images/d4a45e2a48c18eb5cf4a09a40d7c8c3808c6e7fb9da14e901fc112c5bbf97066.jpg)
(b) Average speed of each lane (4 lanes)

From the average speed of each lane, we conclude that the leftmost lane has the highest speed and the rightmost lane has the lowest.
This is because the rule requires drivers to overtake on the left.

# 6.2.3 Lane Utilization Ratio

![](images/323e49eb795389ed303a0dfe57ef87c2c91ef0818b7c9068172fe9008fbfa1b9.jpg)
(a) Lane utilization ratio of each lane (3 lanes)
Figure 11

![](images/66d01adec5e46a0df0b42aac6d6935499f89efd1828580345f0c319ad9b6fd4c.jpg)
(b) Lane utilization ratio of each lane (4 lanes)

Figure 11 shows that the lane utilization ratio of the leftmost lane increases as occupancy increases. This is reasonable: as occupancy rises, more vehicles use the leftmost lane to overtake. In contrast, in light traffic the utilization ratio of the rightmost lane decreases as occupancy increases, because more vehicles move to the middle lane to travel faster. The increasing tendency under heavy traffic indicates the difficulty of overtaking.

# 6.2.4 Sharp Braking Frequency

![](images/a77daa24129d62542098a2facb9d4e9952a766193045f960d6552a0d060b747e.jpg)
Figure 12 Sharp braking frequency under different occupancies (3 & 4 lanes)

The sharp braking frequency curve also has a peak. In light traffic, the average speed is high, so sharp braking becomes more frequent as occupancy increases. In heavy traffic, the average speed is low, so the frequency decreases correspondingly.

# 6.2.5 Shift Ratio

![](images/0b9a7bb3dc39738969642f0bdc56f75d02b8c1f9898513bfe6505e64fa23951f.jpg)
Figure 13 Shift ratio under different occupancies (3 & 4 lanes)

At low occupancy, the ratio is high, probably because of the small number of cars. At high occupancy, the ratio remains roughly unchanged.
# 6.2.6 Satisfaction

![](images/0491224f790eedde0e1393435ad8a37b8be8e84e266e86ef7cede13ee5830140.jpg)
Figure 14 Satisfaction under different occupancies (3 & 4 lanes)

As occupancy increases, it takes drivers longer to cover a given distance, so their satisfaction decreases. Again, the curves coincide, meaning there is little difference between a 3-lane and a 4-lane freeway in terms of satisfaction at the same occupancy.

# 6.2.7 Standard Deviation of Speed

![](images/dd3ef10a5cfbf27c3e247c7daa2fff2329bd88c4f7ed78d4d78b0d7f0206ad1e.jpg)
Figure 15 Std. deviation of speed under different occupancies (3 & 4 lanes)

The shapes of the curves are similar to the speed curves, and they are insensitive to the number of lanes.

# 6.3 Maximum Speed Limit

We study cases with a maximum speed limit of 4 cells/s (86.4 km/h) and 5 cells/s (108 km/h), and a case with no maximum speed limit.

# 6.3.1 Flow Rate and Average Speed

![](images/f91657e84dbe935b829b6cd839889be74063bcc5a3f983149abf5b392662372e.jpg)
(a) Flow rate in light & heavy traffic (different maximum limit)
Figure 16

![](images/eafff3a99e1a1eeea5f72ea3879d53bebab1f3f4edf71cc8ba865052f2a11138.jpg)
(b) Average speed in light & heavy traffic (different maximum limit)

Vehicles can move at high speed in light traffic, so the maximum speed limit has a significant influence on the flow rate and speed there. In heavy traffic, however, vehicles maintain a low speed on average and few of them can reach the maximum speed limit, so the limit hardly influences the average speed. Nevertheless, a maximum speed limit can prevent sharp changes in following distance, which benefits the whole traffic flow; therefore, a proper speed limit might yield a higher flow rate. Figure 16 confirms these expectations.
# 6.3.2 Sharp Braking Frequency and Shift Ratio

![](images/c60cf0ec90733d00cc44e8bcd9eabfc5b19a0107c020e43aa2118db9efa4504d.jpg)
(a) Sharp braking frequency in light & heavy traffic (different maximum limit)
Figure 17

![](images/2efeac6338fc7a5c70d296177ac7868916fffd9dd1b7ae768745b37a76d5c04b.jpg)
(b) Shift ratio in light & heavy traffic (different maximum limit)

Figure 17 demonstrates that the maximum speed limit can effectively reduce the sharp braking frequency and the shift ratio in light traffic, which is beneficial to safety. Nevertheless, it has little impact in heavy traffic.

# 6.3.3 Lane Utilization Ratio

![](images/3c484fff0bbe5bd10d6d07b04895fab14fc3c091829b8fc4bfe235b53da21de1.jpg)
(a) Lane utilization ratio in light traffic (different maximum limit)
Figure 18

![](images/2469ce9ab9d8211b92ef01bab00bb40055894585cc1962d8fd9d5386e14d6168.jpg)
(b) Lane utilization ratio in heavy traffic (different maximum limit)

From Figure 18, we conclude that the utilization ratio of the leftmost lane in light traffic decreases under a stricter speed limit, because the limit reduces drivers' willingness to overtake.

# 6.4 Minimum Speed Limit

We study the influence of the minimum speed limit by comparing two cases: a case with a minimum speed limit of 3 cells/s (64.8 km/h) and a case without one. By minimum speed limit we mean that vehicles will not fall below the limit due to randomization; however, they can still decelerate below it for safety reasons.
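Our reading of the minimum speed limit fits naturally into a Nagel-Schreckenberg-style speed update: random slowdown is suppressed whenever it would push the speed below the limit, while safety braking is unrestricted. This sketch is illustrative; `p_slow = 0.2` anticipates the value examined in the sensitivity analysis of Section 8.

```python
import random

def update_speed(v, gap, v_max, v_min=0, p_slow=0.2, rng=random):
    """One NaSch-style speed update with a minimum speed limit.

    Acceleration and safety braking apply as usual; the random slowdown
    is skipped whenever it would drop the speed below v_min, while the
    safety step may still go below the limit.
    """
    v = min(v + 1, v_max)      # acceleration toward the maximum limit
    v = min(v, gap)            # safety: never drive into the car ahead
    if rng.random() < p_slow and v - 1 >= v_min:
        v -= 1                 # randomization, floored at the minimum limit
    return max(v, 0)
```

Setting `v_min=0` recovers the unrestricted update, so the same function serves both cases being compared.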
# 6.4.1 Flow Rate and Average Speed

Figure 19
![](images/825e8ec12d506f3248bd33d48742309e3aaf0af68d8b6a8f3a7aea277e93aab2.jpg)
(a) Flow rate in light & heavy traffic (different minimum limit)

![](images/b5baf242f0ed12d178211bc921f1f9ca26cc7e99d220c63c21fe7bf87f67099a.jpg)
(b) Average speed in light & heavy traffic (different minimum limit)

In contrast to the maximum speed limit, the minimum speed limit plays an important role in heavy traffic and is of little importance in light traffic, as Figure 19 clearly shows.

# 6.4.2 Sharp Braking Frequency and Shift Ratio

![](images/a31f00a31c1897867ade016a0fae762b869bcc8a2f73a6656d53a593f6639700.jpg)
(a) Sharp braking frequency in light & heavy traffic (different minimum limit)
Figure 20

![](images/c05bd4d45794d1c66a2e0afb50589ca53e70478f6c9d4d1bb86b4de988229c17.jpg)
(b) Shift ratio in light & heavy traffic (different minimum limit)

In light traffic, vehicles cannot slow down in advance because of the minimum speed limit, which in turn increases the sharp braking frequency. In heavy traffic, the frequency decreases. The reason might lie in the low shift ratio and the smoother traffic flow: the condition for lane changing becomes difficult to satisfy. Figure 20 illustrates this conclusion.

# 6.4.3 Lane Utilization Ratio

![](images/5bb883980d0de809b26fc7974de38e97aa2fcf9167f4199465919dbf8739d73f.jpg)
(a) Lane utilization ratio in light traffic (different minimum limit)
Figure 21

![](images/b58753fe076374c78d95e1d19b95e6263313bcd2830ec8e92565d73da9811692.jpg)
(b) Lane utilization ratio in heavy traffic (different minimum limit)

In light traffic, the minimum speed limit has little influence on the ratio. In heavy traffic, the ratio tends to distribute evenly among the three lanes because lane changing is difficult. The results are shown in Figure 21.

# 6.5 Signal Before Shifting

In real life, drivers are required to signal before changing lanes.
We add this factor to our model as follows. A driver must signal first once the lane-changing conditions are satisfied; the affected vehicle may then decelerate to give way, and the driver can change lanes in the next time step. We compare the cases with and without this signaling mechanism.

# 6.5.1 Flow Rate and Average Speed

![](images/e9a4734d3317d63a4e3d4cdce9ead7df7ae26ad618271cc6e04a5d5454212c1e.jpg)
(a) Flow rate in light & heavy traffic (with or without signaling)
Figure 22

![](images/0187c4b357a95042792d955ef88441008780d48949fca651d40c7b005c6b8ffe.jpg)
(b) Average speed in light & heavy traffic (with or without signaling)

When signaling is considered, acceleration is constrained, which reduces the flow rate and the speed of the traffic flow. Figure 22 illustrates this change.

# 6.5.2 Sharp Braking Frequency and Shift Ratio

![](images/712d92afd1690b1732ddadcf85cfa401470d9751db6ce16bee463d0b1dbbb252.jpg)
(a) Sharp braking frequency in light & heavy traffic (with or without signaling)
Figure 23

![](images/ff8da6eba1fec130f6ef49f2029857554f551a2949d613d3be6a24c090683357.jpg)
(b) Shift ratio in light & heavy traffic (with or without signaling)

Because of the give-way behavior, vehicles have to respond to signals from others, which increases both the sharp braking frequency and the shift ratio. Figure 23 demonstrates this.

# 6.6 Conclusions

Based on the results, we conclude that:

- The number of lanes is not an influential factor under any circumstances.
- The maximum speed limit plays a significant role in light traffic but matters little in heavy traffic.
- The minimum speed limit plays a significant role in heavy traffic but matters little in light traffic.
- Signaling behavior reduces flow rate and average speed while enhancing safety.

# 7 Results: the Optimal Rule

We examine five basic rules and design four improved rules.
In order to evaluate the performance of the rules, we implement a fuzzy synthetic evaluation that considers all the criteria.

# 7.1 Basic Rules of Overtaking

We take a three-lane freeway as an example to state the rules; similar rules for other kinds of freeways can easily be derived from them. Apart from the keep-right-except-to-pass rule, we consider four other basic rules:

- The free-overtaking rule: drivers can overtake or change lanes as they wish. In other words, there is no overtaking rule at all.
- The no-overtaking rule: it restricts vehicles to their own lane. Vehicles enter the freeway at random, and once they are on the freeway they must stick to their current lane. This rule can be implemented by replacing the dashed lines separating lanes with solid lines, which ban drivers from changing lanes.
- The different-speed-limit-on-each-lane rule: the maximum speed limit is lowest on the rightmost lane, highest on the leftmost lane, and intermediate on the middle lane. This rule can be implemented by hanging the corresponding speed limit sign over each lane, instructing drivers to control their speeds.
- The complete-assigned-lane rule: cars are assigned to the leftmost lane, buses to the middle lane, and trucks to the rightmost lane. No vehicle can change lanes after it enters the freeway. This rule can be implemented by hanging the corresponding guide sign over each lane.

# 7.2 Criteria and Single Criterion Analysis for Basic Rules

We adopt flow rate and average speed as criteria for the quality of traffic flow, and sharp braking frequency as a criterion for safety. Moreover, we use satisfaction and the standard deviation of speed as criteria for the driver's experience of the trip. We analyze the five basic rules under each single criterion to see their performance in both light and heavy traffic.
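One convenient way to organize these competing rules in a simulator is as interchangeable lane-change policies. The skeleton below (names and the `Vehicle` type are ours) shows only the dispatch; the different-speed-limit-on-each-lane rule is omitted because it constrains speeds rather than lane changes.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    blocked: bool  # True if a slower vehicle ahead constrains this one

# Lanes are numbered 0 (leftmost) to 2 (rightmost).
# Each policy answers: may `veh` move from `lane` to the adjacent `target`?
def free_overtaking(veh, lane, target):
    return True                      # no restriction at all

def no_overtaking(veh, lane, target):
    return False                     # solid lines: lane changes banned

def keep_right_except_to_pass(veh, lane, target):
    # move left only to pass a slower vehicle; otherwise drift back right
    return target < lane if veh.blocked else target > lane

def complete_assigned_lane(veh, lane, target):
    return False                     # vehicles never leave their assigned lane

POLICIES = {
    "free-overtaking": free_overtaking,
    "no-overtaking": no_overtaking,
    "keep-right-except-to-pass": keep_right_except_to_pass,
    "complete-assigned-lane": complete_assigned_lane,
}
```

Swapping the policy function is then the only change needed to rerun the same simulation under a different rule, which keeps the comparisons below fair.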
# 7.2.1 Traffic Flow Criteria: Flow Rate and Speed

![](images/15a6459d8fa5cd608250d09bc932449ff72879aa94485cb2adcf9222bea3f271.jpg)
(a) Flow rate in light & heavy traffic (different rules)

![](images/cb64a6247b9b4fca687aa354ec30e72a4b02844a9fcd33b75a8926736062cf9e.jpg)
(b) Average speed in light & heavy traffic (different rules)
Figure 24

We can see that the speed is coherent with the flow rate. In light traffic, the no-overtaking rule performs the worst because the road resources cannot be fully used; there is no significant difference among the other rules.

In heavy traffic, the complete-assigned-lane rule performs the worst. In the simulation, the leftmost lane, which is assigned to cars, suffers severe congestion while the other two lanes remain relatively clear. The different-speed-limit-on-each-lane rule performs the best: it keeps large vehicles out of the passing lanes and also prevents a large number of cars from crowding the rightmost lane.

# 7.2.2 Safety Criterion: Sharp Braking Frequency

![](images/9d0e15c86bc1e654dc11cb2fb3110ed3a5f6d03f478678ab96502c6d2663363d.jpg)
Figure 25 Sharp braking frequency in light & heavy traffic (different rules)

A higher sharp braking frequency indicates a higher risk of accidents in reality. The results are presented in Figure 25.

In light traffic, the no-overtaking rule performs the worst because fast vehicles can only brake, rather than change lanes, when they encounter slow vehicles ahead. The best-performing rules are the keep-right-except-to-pass rule and the complete-assigned-lane rule. The latter bans overtaking too, but speeds within each lane are relatively even, so sharp braking seldom occurs.

In heavy traffic, the free-overtaking rule performs the worst. The different-speed-limit-on-each-lane rule performs the best because it allocates vehicles to the three lanes by speed, so the traffic flows smoothly.
# 7.2.3 Experience Criteria: Satisfaction and Standard Deviation of Speed

![](images/0438742a3d1a691c5e373f08204f8be9f7cd5d862f77a4edf06ea4b695e9638e.jpg)
(a) Satisfaction in light & heavy traffic (different rules)

![](images/bb3fb9ecb455ce578d403d6fc249d7047ce7f90b26b68088c25e54ad25a5df27.jpg)
(b) Std. deviation of speed in light & heavy traffic (different rules)
Figure 26

Figure 26(a) describes satisfaction, and it is coherent with the speed results in Figure 24(b). This coherence indicates that people feel more satisfied at higher speeds.

Figure 26(b) compares the standard deviation of speed under the different rules. In light traffic, the different-speed-limit-on-each-lane rule performs the best, and the other rules perform almost identically. In heavy traffic, the complete-assigned-lane rule attains the minimum value, but only because of its low speed; in this case, the criterion cannot accurately reflect people's experience.

# 7.3 Fuzzy Synthetic Evaluation for Basic Rules

The main challenge of the judging process is that different criteria yield different rankings. To obtain a unique answer, we have to combine the criteria into a new, single criterion, yet the relative importance of the criteria is hard to determine. Since we have no other information, we implement a fuzzy synthetic evaluation (FSE).[11] FSE is a multiple-criteria decision-making method that can determine the weight of each criterion from the data alone. One way to determine the weights is the coefficient of variation method: the better a criterion differentiates the rules being evaluated, the larger the weight it receives. We use this technique to analyze the performance of each rule.

In the following subsections, we use the light traffic case to introduce the process of FSE; the heavy traffic case is processed in the same way.
# 7.3.1 Identify Alternatives and Attributes

In our problem, the alternatives are the five basic rules and the attributes are the five criteria. The value of each attribute for each alternative is listed below:

Table 4 Criteria of Basic Rules in Light Traffic
| rule | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| keep-right-except-to-pass | 0.964 | 4.552 | 0.041 | 0.841 | 1.152 |
| free-overtaking | 0.928 | 4.201 | 0.077 | 0.785 | 1.357 |
| no-overtaking | 0.631 | 2.800 | 0.091 | 0.531 | 1.415 |
| different-speed-limit-on-each-lane | 0.845 | 4.129 | 0.063 | 0.777 | 0.813 |
| complete-assigned-lane | 0.932 | 4.256 | 0.033 | 0.808 | 1.481 |
Then we derive the ideal alternative from Table 4:

$$
u^{0} = \left(u_{1}^{0}, u_{2}^{0}, u_{3}^{0}, u_{4}^{0}, u_{5}^{0}\right) = (0.964,\ 4.552,\ 0.033,\ 0.841,\ 0.813)
$$

where the ideal value of each criterion is its maximum for flow rate, average speed, and satisfaction, and its minimum for sharp braking frequency and standard deviation of speed.

# 7.3.2 Determine Fuzzy Evaluation Matrix

The membership function is defined as

$$
r_{ij} = \frac{\left| a_{ij} - u_{j}^{0} \right|}{\max_{i}\{a_{ij}\} - \min_{i}\{a_{ij}\}}
$$

where $a_{ij}$ is the value of the $j$th criterion for the $i$th rule. Then we have the fuzzy evaluation matrix

$$
\mathbf{R} = \left[ \begin{array}{lllll}
0.000 & 0.000 & 0.127 & 0.000 & 0.508 \\
0.108 & 0.201 & 0.750 & 0.180 & 0.814 \\
1.000 & 1.000 & 1.000 & 1.000 & 0.901 \\
0.357 & 0.242 & 0.504 & 0.205 & 0.000 \\
0.096 & 0.169 & 0.000 & 0.104 & 1.000
\end{array} \right]
$$

Using the coefficient of variation method, we define $v_{j}$ and $w_{j}$ as

$$
v_{j} = \frac{s_{j}}{\bar{x}_{j}}, \qquad w_{j} = \frac{v_{j}}{\sum_{j=1}^{5} v_{j}}
$$

where $\bar{x}_{j}$ and $s_{j}$ are the mean and standard deviation of the $j$th column of $\mathbf{R}$. Then we calculate the weight vector

$$
w = (0.243,\ 0.226,\ 0.164,\ 0.251,\ 0.117)
$$

# 7.3.3 Aggregate Using a Fuzzy Operator

We then use a weighted-sum fuzzy operator to aggregate and obtain the relative deviation

$$
F_{i} = \sum_{j=1}^{5} w_{j} r_{ij}
$$

The relative deviation measures the distance between a specific alternative and the ideal alternative: the lower the value, the better the alternative.

# 7.3.4 The Results

The relative deviations for both cases are listed below.

Table 5 Relative deviations of different rules in light traffic
| | keep-right-except-to-pass | free-overtaking | no-overtaking | different-speed-limit-on-each-lane | complete-assigned-lane |
| --- | --- | --- | --- | --- | --- |
| $F_i$ | 0.080 | 0.335 | 0.998 | 0.275 | 0.205 |
+ +Table 6 Relative deviations of different rules in heavy traffic + +
| | keep-right-except-to-pass | free-overtaking | no-overtaking | different-speed-limit-on-each-lane | complete-assigned-lane |
| --- | --- | --- | --- | --- | --- |
| $F_i$ | 0.564 | 0.570 | 0.627 | 0.156 | 0.669 |
In the light traffic case, the keep-right-except-to-pass rule has a clear advantage over the other rules, while in the heavy traffic case the different-speed-limit-on-each-lane rule is the best.

# 7.4 Improved Rules of Overtaking

Based on the basic rules, we propose four improved rules by modifying or combining them. When designing these rules, we consider not only ease of implementation but also the overall quality of traffic flow, safety, and driver experience.

- The partial-assigned-lane rule: cars are assigned to the middle and leftmost lanes with permission to change between these two lanes. Buses and trucks are assigned to the rightmost lane and are banned from changing lanes. This rule can be implemented by hanging the corresponding guide sign over each lane. We propose this rule because cars account for $60\%$ of all vehicles and have relatively high speeds.
- The truck-on-rightmost-lane-only rule: it requires trucks to stick to the rightmost lane, while cars and buses can move among the three lanes. This rule can be implemented by hanging the corresponding guide sign over each lane. We propose this rule because trucks have the lowest speed and may constrain the speed of others if they appear on a passing lane.
- The minimum-speed-on-leftmost-lane rule: it sets a special minimum speed for the leftmost lane. This rule can be implemented by hanging a special speed limit sign over that lane. We hope it will increase the efficiency of the leftmost passing lane.
- The partial-assigned-lane-and-keep-right rule: we combine the partial-assigned-lane rule with the keep-right-except-to-pass rule. In other words, vehicles are assigned lanes by the partial-assigned-lane rule, and cars overtake according to the keep-right-except-to-pass rule. We hope this rule inherits the merits of both.
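Before the evaluation is extended to all nine rules, the FSE pipeline of Section 7.3 can be sketched compactly. The function below is coded directly from the formulas; the cost/benefit flags per criterion are inferred from the ideal alternative (sharp braking frequency and standard deviation of speed are better when low, the rest when high). Run on the Table 4 data, it reproduces the Table 5 deviations to within rounding.

```python
import statistics

def fse_rank(a, cost):
    """Fuzzy synthetic evaluation of Section 7.3.

    a:    one row of criterion values per rule (list of lists).
    cost: one flag per criterion, True if smaller values are better.
    Returns the relative deviation F_i of each rule from the ideal rule.
    """
    m = len(a[0])
    cols = [[row[j] for row in a] for j in range(m)]
    ideal = [min(c) if flag else max(c) for c, flag in zip(cols, cost)]
    # membership degrees: distance to the ideal, normalized per criterion
    r = [[abs(row[j] - ideal[j]) / (max(cols[j]) - min(cols[j]))
          for j in range(m)] for row in a]
    # coefficient-of-variation weights, computed on the columns of R
    rcols = [[ri[j] for ri in r] for j in range(m)]
    cv = [statistics.pstdev(c) / statistics.mean(c) for c in rcols]
    w = [v / sum(cv) for v in cv]
    return [sum(wj * rij for wj, rij in zip(w, ri)) for ri in r]
```

Because the weights come from the spread of the membership columns themselves, no hand-tuned importance ranking of the criteria is needed, which is the point of the coefficient of variation method.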
# 7.5 Fuzzy Synthetic Evaluation for All Rules

To test the performance of the improved rules, we apply the fuzzy synthetic evaluation to all nine rules.

# 7.6 Conclusions

We can draw two conclusions from the results:

Table 7 Relative deviations of different rules in light & heavy traffic
| rule | light traffic | heavy traffic | overall |
| --- | --- | --- | --- |
| keep-right-except-to-pass | 0.141 | 0.519 | 0.269 |
| free-overtaking | 0.367 | 0.518 | 0.418 |
| no-overtaking | 0.991 | 0.576 | 0.851 |
| different-speed-limit-on-each-lane | 0.326 | 0.116 | 0.255 |
| complete-assigned-lane | 0.255 | 0.708 | 0.408 |
| partial-assigned-lane | 0.055 | 0.563 | 0.227 |
| trucks-on-rightmost-lane-only | 0.170 | 0.527 | 0.291 |
| minimum-speed-on-leftmost-lane | 0.233 | 0.320 | 0.262 |
| partial-assigned-lane-and-keep-right | 0.000 | 0.562 | 0.190 |
- The partial-assigned-lane-and-keep-right rule is the best in a light traffic situation.
- The different-speed-limit-on-each-lane rule is the best in a heavy traffic situation.

Based on these conclusions, we suggest using the different-speed-limit-on-each-lane rule during rush hour and the partial-assigned-lane-and-keep-right rule at other times.

# 8 Sensitivity Analysis

Some inputs of our model may be hard to obtain, or there may be uncertainty in their values. Either kind of deviation might influence the results of our model. To test the robustness of our model, we implement a sensitivity analysis in both the light traffic and the heavy traffic case. The analysis shows that our model does not exhibit chaotic behavior, indicating good robustness.

# 8.1 Percentages of Vehicles

We obtained these data from a freeway company.[9] Although the data were accurately collected, the percentages of vehicle types may vary across freeways. Therefore, we change the percentage of large vehicles ($40\%$) by up to $15\%$ and record the changes in our criteria (Table 8 and Table 9). We observe a $16.6\%$ increase in sharp braking frequency in the light traffic case; the other criteria change little.

In the heavy traffic case, all criteria change little. This indicates that our model can be used on freeways with varying percentages of vehicle types.

Table 8 Sensitivity analysis—percentages of vehicles (light traffic)
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 0.0% | -5.9% | 16.6% | -3.5% | 6.4% |
| -10% | 0.8% | -0.4% | 6.3% | -0.6% | 1.0% |
| -5% | -2.6% | -2.8% | 13.8% | -2.5% | 6.6% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | -1.2% | 0.6% | 0.6% | -0.1% | 1.0% |
| 10% | -3.8% | -2.3% | 0.2% | -0.1% | -1.7% |
| 15% | -3.7% | 0.3% | -5.5% | 0.9% | -3.4% |
Table 9 Sensitivity analysis—percentages of vehicles (heavy traffic)
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 0.1% | -6.2% | -2.5% | -6.3% | -3.4% |
| -10% | -1.3% | -4.4% | -3.2% | -5.0% | -0.8% |
| -5% | 4.8% | -0.8% | -1.3% | -1.5% | 0.4% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | 7.0% | 2.6% | -0.4% | 2.0% | 2.1% |
| 10% | 6.1% | 3.6% | 0.1% | 4.6% | 1.4% |
| 15% | 6.9% | 4.4% | 0.3% | 5.8% | 2.5% |
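All of the perturbation experiments in this section follow the same pattern: scale one input by ±15% in 5% steps and report the relative change of each criterion against the unperturbed baseline. A minimal driver for that pattern is sketched below; the `simulate` callable is a stand-in for the full cellular-automaton run.

```python
def sensitivity_sweep(simulate, base_params, name,
                      steps=(-0.15, -0.10, -0.05, 0.0, 0.05, 0.10, 0.15)):
    """Scale one parameter by each step and report the relative change of
    every criterion against the unperturbed baseline.

    simulate(params) must return a dict of criterion values; here it
    stands in for a full cellular-automaton run.
    """
    baseline = simulate(base_params)
    table = {}
    for step in steps:
        params = dict(base_params)
        params[name] = base_params[name] * (1 + step)
        run = simulate(params)
        table[step] = {k: (run[k] - baseline[k]) / baseline[k] for k in baseline}
    return table
```

Each row of Tables 8 through 13 corresponds to one entry of the returned `table`, expressed as a percentage.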
+ +# 8.2 Probability of Randomization + +The probability of randomization $p_{slow}$ describes the random deceleration behavior of drivers. Obviously, this parameter is difficult to obtain and it may change severely under different circumstances. In our approach, we assume it to be 0.2 since very few data on this matter are available. We change it by up to $15\%$ and the sharp braking frequency shows a $15.75\%$ deviation, which is acceptable.(Table 10 and Table 11) + +Table 10 Sensitivity analysis—probability of randomization(light traffic) + +
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 3.4% | 0.2% | -1.4% | 0.4% | 0.3% |
| -10% | 0.3% | 0.6% | 0.4% | 0.0% | 1.4% |
| -5% | 1.6% | -1.8% | 2.9% | -0.7% | 1.8% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | 4.7% | 2.3% | -4.7% | 0.9% | -1.4% |
| 10% | 0.8% | -2.0% | 3.2% | -0.8% | 0.0% |
| 15% | -4.6% | -4.9% | 15.7% | -3.3% | 6.5% |
Table 11 Sensitivity analysis—probability of randomization (heavy traffic)
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 7.3% | 6.0% | 3.6% | 5.4% | 0.4% |
| -10% | 4.7% | 3.1% | 0.9% | 3.0% | 2.8% |
| -5% | 3.6% | 1.0% | -0.4% | 1.2% | 0.7% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | -4.4% | -1.1% | -3.4% | -2.1% | 2.4% |
| 10% | -3.0% | -2.8% | -3.1% | -3.6% | 0.9% |
| 15% | -1.6% | -4.1% | -4.7% | -4.4% | 0.8% |
+ +# 8.3 Probability of Willing to Change Lane + +We consider the case that a driver might choose not to change lane even if all the other conditions are satisfied. Obviously, the data are hard to get and it may suffer from a severe change during a short period of time. We assume probability of willing to change to the left lane and right lane are 0.5 and 0.7 respectively in our model. Therefore, we change the probabilities by up to $15\%$ proportionally. The maximum deviation is $7.05\%$ , which indicates a good robustness. + +Table 12 Sensitivity analysis—probability of willing to change lane(light traffic) + +
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 1.9% | -0.4% | 4.5% | -0.8% | 2.4% |
| -10% | -3.9% | -1.5% | 6.7% | -1.5% | 4.5% |
| -5% | 0.4% | -1.6% | 4.1% | -0.7% | 1.1% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | 1.9% | 0.5% | 0.0% | 0.1% | 0.0% |
| 10% | -7.1% | -4.7% | 6.6% | -3.2% | 5.5% |
| 15% | -2.7% | -2.4% | 4.2% | -1.1% | 1.7% |
Table 13 Sensitivity analysis—probability of willingness to change lanes (heavy traffic)
| change | flow rate | average speed | sharp braking frequency | satisfaction | std. deviation of speed |
| --- | --- | --- | --- | --- | --- |
| -15% | 1.8% | 0.7% | 0.0% | -0.4% | 1.4% |
| -10% | 3.5% | 0.5% | -0.9% | 0.1% | 1.1% |
| -5% | 2.6% | -0.2% | -1.4% | -1.1% | 1.9% |
| 0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% |
| 5% | 3.8% | 0.8% | -1.6% | -0.1% | 2.2% |
| 10% | 3.9% | 0.6% | -1.0% | -0.5% | 1.8% |
| 15% | -2.2% | -0.5% | -2.2% | -0.3% | 1.1% |
# 9 Further Discussions

# 9.1 Modifications for Countries Where Driving on the Left Is the Norm

Imagine a mirror standing in the middle of a two-way freeway in a country where driving on the right is the norm. What happens in the mirror resembles a freeway in a country where driving on the left is the norm: the passing lane is on the right, the drivers sit on the right side of their cars, and the cars obey a keep-left-except-to-pass rule. All the roads, vehicles, and rules look like those of a left-driving country.

However, there is one difference: human beings. A left-handed person is still left-handed, yet in the "mirror world" he appears right-handed, and vice versa. This thought experiment suggests that we should modify our model to reflect this difference instead of simply mirroring the orientation. Our model assumes that the probabilities of being willing to change to the left lane and to the right lane are 0.5 and 0.7, and that this asymmetry does not change under any circumstances. Therefore, in a country like Great Britain, drivers keep this tendency, which means they are more willing to move to the right lane, which is now the passing lane. This is exactly equivalent to swapping the two probabilities in our original model.

Based on this discussion, we modify our model and rerun the simulation. We observe a maximum change of $4.6\%$ in our criteria. From this analysis, we can safely conclude that:

- Simply mirroring the orientation is reasonable if the willingness to change to the left lane equals the willingness to change to the right lane.
- Even when the two values differ, the deviations are small enough to ignore.

# 9.2 Modifications for an Intelligent System

An intelligent system can fully control all the vehicles and make them strictly obey the rule.
The keep-right-except-to-pass rule states that vehicles must drive in the rightmost lane unless they are passing another vehicle, which means vehicles MUST change to the right lane whenever the conditions are satisfied. In other words, the probability of being willing to change to the right lane equals 1. Based on this characteristic, we propose two intelligent systems:

- The semi-intelligent system: it forces vehicles to change to the right lane if the conditions are satisfied, while changing to the left lane remains a human decision.
- The complete intelligent system: it forces vehicles to change to the right lane if the conditions are satisfied, and also forces them to change to the left lane if the corresponding conditions are satisfied.

We run the simulation to examine the performance of the intelligent systems.

# 9.2.1 Flow Rate and Average Speed

![](images/b10977265b23ed41ac44a70831816e241d07b1b16818fde5be9ef1760bc51222.jpg)
(a) Flow rate in light & heavy traffic (with or without intelligent system)

![](images/69cfd4b4cd6b28f3869eb5fc02c76e97e7dff0c23559f6f26322f58cb60c81c9.jpg)
(b) Average speed in light & heavy traffic (with or without intelligent system)
Figure 27

In the light traffic case, the flow rate of the complete intelligent system increases slightly, while that of the semi-intelligent system decreases slightly, compared with the situation without an intelligent system. The impact is small because passing-lane resources are plentiful. In the heavy traffic case, the flow rates of both intelligent systems increase, because forcing vehicles back to the rightmost lane releases more passing-lane resources. This is consistent with the speed results (Figure 27).

# 9.2.2 Sharp Braking Frequency and Shift Ratio

In the light traffic case, the shift ratio and sharp braking frequency change only slightly. In the heavy traffic case, the shift ratio increases significantly, as expected.
The intelligent system will certainly increase the shift ratio (Figure 28).

Figure 28
![](images/f413270e266baee37b0c69ad41d1418d53feb7c1f993a2365e0bb5e1de86458d.jpg)
(a) Sharp braking frequency in light & heavy traffic (with or without intelligent system)

![](images/99e1dd9c8013016fa589010b0673bf2f177c819b88d3b33b76d655e11c2c90ec.jpg)
(b) Shift ratio in light & heavy traffic (with or without intelligent system)

In the light traffic case, the sharp braking frequency decreases greatly if a complete intelligent system is implemented. This is because a driver who is about to be constrained by the vehicle ahead can change to the left lane in time to avoid sharp braking.

# 9.2.3 Lane Utilization Ratio

Figure 29
![](images/380d4d4758752ddb0972a30eb65576188cf8356154acf8bd87556636343d95a7.jpg)
(a) Lane utilization ratio of each lane in light traffic (with or without intelligent system)

![](images/3ac90c6f3460bf538f48c78e71cbe116d03c5fee52e6ff0fc83ea44ec44e1233.jpg)
(b) Lane utilization ratio of each lane in heavy traffic (with or without intelligent system)

Under the semi-intelligent system, the vehicles tend to move to the left lane; under the complete intelligent system, the vehicles tend to move to the right lane. This tendency appears in both the light and the heavy traffic cases, but the overall changes are relatively small.

# 9.3 Additional Research on the Refined Model with Ramps

In section 5.3, we present a refined model taking ramps into account. Due to the open boundary conditions, we use $\lambda$ to control the occupancy, which determines whether the traffic is heavy or light. Then we vary the value of $p_{exit}$ to study the model. In the simulation, we have some interesting findings. 
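To make this boundary condition concrete, the following minimal Python sketch draws per-second arrivals controlled by $\lambda$ and tags each entering vehicle with an off-ramp intention with probability $p_{exit}$. The function name `spawn_vehicles` and the record fields are illustrative only, not part of our actual simulator.

```python
import random

def spawn_vehicles(steps, lam, p_exit, seed=None):
    """Open-boundary arrivals: each second a vehicle enters with
    probability lam (a Bernoulli approximation of Poisson arrivals),
    and is tagged to leave via the off-ramp with probability p_exit."""
    rng = random.Random(seed)
    vehicles = []
    for t in range(steps):
        if rng.random() < lam:
            vehicles.append({"t_enter": t,
                             "wants_exit": rng.random() < p_exit})
    return vehicles
```

Raising `lam` toward 0.5 drives the occupancy up (heavy traffic), while `p_exit` only changes the share of vehicles that must reach the off-ramp.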

# 9.3.1 Flow Rate

![](images/16c6c5a946de8d82ce0e82287e47a2ebf24d10b5a26ffd4c8ab43b1667f56f3e.jpg)
Figure 30 Flow rate in light & heavy traffic (different $p_{exit}$ )

The probability of exiting via the off-ramp, $p_{exit}$ , has little, if any, impact on flow rate, partly because we equate the number of vehicles entering from the on-ramp with the number of vehicles exiting through the off-ramp.

# 9.3.2 Average Speed

![](images/36b5a9fe660e92bdb151deb1ce440e541c6e1a9f6b9e13e8bab235cb888f7f14.jpg)
Figure 31 Average speed in light & heavy traffic (different $p_{exit}$ )

In the previous analysis, the average speed is always consistent with the flow rate. Adding ramps, however, breaks this consistency, especially in the heavy traffic case. As $p_{exit}$ increases, a large number of vehicles need to exit through the off-ramp. They decelerate in advance, which lowers the average speed.

# 9.3.3 Lane Utilization Ratio

Figure 32
![](images/430eddf434abba1b8e765392f9a62f02ab423021abd80f7b64500fc8040f3397.jpg)
(a) Lane utilization ratio of each lane in light traffic (different $p_{exit}$ )

![](images/b7e5ba5acf51eb538c5bd23d588b5986d76e9f4c18ef5eef9e0cb5a26c1431db.jpg)
(b) Lane utilization ratio of each lane in heavy traffic (different $p_{exit}$ )

In the heavy traffic case, a larger $p_{exit}$ increases the utilization ratio of the rightmost lane. This is reasonable because a large number of vehicles tend to use the rightmost lane to move into the off-ramp.

# 9.3.4 Failure Ratio

A significant difference in our refined model is that some vehicles might fail to reach the exit of the freeway, a common situation in the real world. The failure ratio is defined as the ratio of the number of vehicles failing to exit to the total number of vehicles. We seek to investigate how the failure ratio changes with varying $p_{exit}$ . 
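This definition translates directly into code. A minimal Python sketch, assuming each simulated vehicle carries the hypothetical boolean fields `wants_exit` and `exited`:

```python
def failure_ratio(vehicles):
    """Failure ratio per the definition above: vehicles that wanted
    the off-ramp but failed to reach it, divided by the total number
    of vehicles."""
    if not vehicles:
        return 0.0
    failed = sum(1 for v in vehicles
                 if v["wants_exit"] and not v["exited"])
    return failed / len(vehicles)
```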

![](images/739217b93f5209c26ca22008411d95fbd14233f915da3c0ef5cd42cf465190e7.jpg)
Figure 33 Failure ratio in light & heavy traffic (different $p_{exit}$ )

Figure 33 demonstrates that in the case of light traffic, $p_{exit}$ is an irrelevant factor: vehicles can move to the rightmost lane with ease. In the heavy traffic case, however, $p_{exit}$ plays an important role. When $p_{exit}$ remains low, the average speed is relatively high, which makes moving to the rightmost lane extremely difficult. When $p_{exit}$ becomes high, the average speed drops, so vehicles have more time to move to the rightmost lane, which in turn reduces the failure ratio.

# 10 Strengths and Weaknesses

# 10.1 Strengths

- Our models are fairly robust to changes in parameters, as shown by the sensitivity analysis: a slight change in parameters will not cause a significant change in the results.
- Different types of vehicles are taken into consideration, and the mixing ratio is based on actual data. We consider the lengths of vehicles and their different maximum speeds, which makes the model closer to reality.
- We come up with various criteria to compare different situations, so an overall comparison can be made based on these criteria.
- Our models are capable of simulating real-life situations, and the results agree with common sense and everyday experience.
- A refined model is established to consider the role of ramps, which is a highlight of our work.

# 10.2 Weaknesses

- Human judgment may be over-simplified. To model the chance that a driver randomly decelerates or declines to overtake when possible, we simply assigned a fixed probability to each behavior. The actual situation may be more complicated.
- Some of the parameters are based on semi-educated guesses because little data are available. However, based on our sensitivity analysis, slight changes in them will not make a great difference.
- We did not consider the predictions made by each driver. 
In our model, drivers change their speed based only on the information from the previous time step. In fact, they can make predictions and choose their speed in a more sophisticated way.

# References

[1] Wikipedia. Controlled-access highway. https://en.wikipedia.org/wiki/Controlled_access_highway

[2] Wikipedia. Passing lane. https://en.wikipedia.org/wiki/Passing_lane
[3] Nagel, K., & Schreckenberg, M. (1992). A cellular automaton model for freeway traffic. Journal de Physique I, 2(12), 2221-2229.
[4] Rickert, M., Nagel, K., Schreckenberg, M., & Latour, A. (1996). Two lane traffic simulations using cellular automata. Physica A: Statistical Mechanics and its Applications, 231(4), 534-550.
[5] Chowdhury, D., Wolf, D. E., & Schreckenberg, M. (1997). Particle hopping models for two-lane traffic with two kinds of vehicles: Effects of lane-changing rules. Physica A: Statistical Mechanics and its Applications, 235(3), 417-439.
[6] Knospe, W., Santen, L., Schadschneider, A., & Schreckenberg, M. (1999). Disorder effects in cellular automata for two-lane traffic. Physica A: Statistical Mechanics and its Applications, 265(3), 614-633.
[7] Wikipedia. Shoulder (road). http://en.wikipedia.org/wiki/Shoulder_%28road%29
[8] Chowdhury, D., Wolf, D. E., & Schreckenberg, M. (1997). Particle hopping models for two-lane traffic with two kinds of vehicles: Effects of lane-changing rules. Physica A: Statistical Mechanics and its Applications, 235(3), 417-439.
[9] Anhui Expressway Company Limited Network. (2013). Traffic flow on Tianchang section of the national road 205. http://www.anhui-expressway.net/enterprise/quarterselect1.aspx?Year=2013&RoadID=205%u56fd%u9053%u5929%u957f%u6bb5
[10] Roess, R. P., Prassas, E. S., & Mcshane, W. R. (2004). Traffic Engineering (3rd ed.). New Jersey: Pearson Education, Inc.
[11] Sadiq, R., Husain, T., Veitch, B., & Bose, N. (2004). Risk-based decision-making for drilling waste discharges using a fuzzy synthetic evaluation technique. 
Ocean Engineering, 31(16), 1929-1953.

Team Control Number: 29282

Problem Chosen: A

# 2014 Mathematical Contest in Modeling (MCM) Summary Sheet

# Summary

The keep-right-except-to-pass rule is widely implemented all around the world, but it may not be an optimal one. We define five criteria to evaluate the performance of a traffic rule, namely, the average traffic speed, traffic flow, danger index, the over-speed-limit effect and the under-speed-limit effect. To analyze the "keep right" rule's performance theoretically, we apply a state transition approach, similar to the Markov model, in the light traffic situation. The result shows that in the long term all cars will travel in the right lane at a low speed. In heavy traffic, we analyze the steady state and discover that cars will break the "keep right" rule because they cannot find a chance to return to the right lane after overtaking. To test the theoretical results, we build a simulation model based on a Cellular Automaton (CA) to simulate the traffic system under a given traffic rule. The simulation results are consistent with those obtained through the state transition approach.

In order to seek a better traffic rule, we develop three new rules based on the old one. We then evaluate their performance, together with that of the old rule, using our CA-based simulator. After calculating the values of our evaluation criteria, we employ the Analytic Hierarchy Process (AHP) method to obtain the best solution. 
We find that the best rule is the one which forbids cars from overtaking, achieving the best safety performance and the highest traffic flow.

Finally, we discuss some further topics. We find that our best solution can be applied to left-hand traffic countries by simply changing the orientation, and that an intelligent system will improve the performance of the "keep right" rule in light traffic but deteriorate it in heavy traffic. Sensitivity analysis based on the CA simulator shows the robustness of our conclusions.

Our suggestion for the public is that everyone should consciously avoid overtaking so as to achieve better traffic conditions. Further studies should focus on more complex circumstances such as six-lane freeways. With more precise data available, we can further test and improve our models.

# Contents

1 Introduction
2 General Assumptions

3 The Keep-Right-Except-To-Pass Rule

3.1 Traffic Rule Evaluation
3.2 Performance In Light Traffic

3.2.1 Notations
3.2.2 A State Transition Approach
3.2.3 A Numerical Test And Conclusion

3.3 Performance In Heavy Traffic

4 A Simulation Model: Cellular Automaton

4.1 Simulation Assumptions
4.2 Notations
4.3 Algorithm Descriptions
4.4 Calculation And Results
4.5 Small Conclusion
4.6 Sensitivity Analysis

5 A New Traffic Rule

5.1 Descriptions of The New Rules
5.2 Simulation Results
5.3 Comparison
5.4 Small Conclusion

6 Further Topics

6.1 Applicability In Left-Hand Traffic Countries
6.2 Intelligent Traffic

6.2.1 Performance Analysis
6.2.2 Simulation And Results

7 Strength and Weakness
8 Conclusion

References

# 1 Introduction

Recent changes in economics and technology have enabled more and more households to own private cars, but at the same time have put more pressure on highway capacity. 
It is necessary for the government to implement proper traffic rules which can maximize traffic flow while ensuring safety. The most popular traffic rule at present is the keep-right-except-to-pass rule, which states that all drivers should drive in the right-most lane unless they are passing another vehicle; when overtaking, drivers should move one lane to the left to pass the vehicle ahead and return to the right lane as soon as possible. This rule is widely implemented in most countries in the world, including the U.S. and China. In Great Britain and some other countries, this rule is adjusted with a simple change of orientation, and we can call it the keep-left-except-to-pass rule.

However, is the "keep right" rule optimal? Or is there any alternative traffic rule superior to this one in terms of traffic flow, safety, and other important factors? This is one main issue we want to deal with in this paper. In fact, we can divide the whole problem into four major subproblems:

1. In both light and heavy traffic, what is the performance of the keep-right-except-to-pass rule? This requires us to set up several evaluation criteria. Based on the answer to this question, we can decide whether to design a new traffic rule to replace it, and in which aspects improvements can be made.
2. Is there a better traffic rule? If yes, why can we say the new rule is better? We must integrate our evaluation criteria into a comprehensive one to decide between rules. This involves the determination of weights and a comparison of the two traffic rules.
3. Does the new rule apply to left-hand traffic countries? Beyond a simple change of orientation, we should decide whether some other requirements need to be met.
4. Would there be any change in our analysis results if all the vehicles were under the control of an intelligent system? In fact, this is an optimization problem. 
We can control the behavior of each vehicle in the freeway, such as determining whether it should change lanes. In this case, we may achieve the best traffic condition.

For the rest of our paper, we will first set up five criteria to evaluate traffic rules. Then we look into the keep-right-except-to-pass rule. We use a state transition approach to study its performance in light traffic, and the heavy traffic situation is also considered. Next, a simulation model based on a cellular automaton is built to verify the results given by the theoretical model. Afterwards, we design three different new rules and evaluate them together with the old rule, using the AHP method to get the best solution. Finally, we discuss further topics about the rule under left-hand traffic and the rule's performance under the control of an intelligent driving system. The basic logic framework of our paper is shown in Figure 1.

![](images/aaca193e7acc153d5bd33e783245dff765ddbccdeceae18bfb2020c35505210c.jpg)
Figure 1: The logic framework of our paper.

# 2 General Assumptions

- Drivers drive their cars conscientiously and everything is in a normal state. In other words, the drivers are rational and the road condition is good. We do not consider abnormal situations where drivers are drunk or sleepy while driving, or where the road is frozen and slippery because of extreme weather.
- In our models, we only consider countries where driving automobiles on the right is the norm. Driving on the right is different from driving in the right lane, since drivers can also drive in the left lane on the right in multi-lane traffic. The research object of this paper is the rule which requires drivers to drive in the right lane except to overtake. For completeness, we will consider the rules in countries where driving on the left is the norm in subsection 6.1. 

- For simplicity, we only consider a four-lane freeway (two lanes in each direction), with a center dividing strip between the opposing traffic flows. The freeway has only two lanes for each direction, the right one for driving and the left one for overtaking only (under the keep-right-except-to-pass rule). What's more, the center dividing strip ensures that there is no interaction between vehicles traveling in opposite directions.
- There are no stop lights or intersections to interrupt the flow of traffic. Also, there are no other entrances or exits, and no sharp turns. Vehicles come from only one entrance and drive their way through. Because vehicles behave differently in sharp turns, we overlook such areas. This simplification makes little difference to our results.
- There is only one type of vehicle in the freeway. All vehicles have the same brand, model and size (especially length). In other words, they are homogeneous. For convenience, we call them "cars".
- When traveling individually, cars move at a constant speed (free speed). But when encountering another car ahead, cars can change speed instantly. This assumption means that we overlook the accelerating process of cars: they can change speed in no time. But when traveling individually, they will all move at their own free speed.
- Cars enter the freeway in a Poisson manner. That is to say, when two neighboring cars arrive at the beginning point of our freeway sequentially, the time interval is exponentially distributed.

# 3 The Keep-Right-Except-To-Pass Rule

Many methods can be used to describe vehicular traffic flow at low and moderate densities on long, uninterrupted freeways, including queuing theory, Markov chains, cellular automata, traffic flow differential equations and so on. The first two are macroscopic and the last one is microscopic, while cellular automata are often used for simulation [2][12]. 
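To make the cellular-automaton idea concrete, here is a minimal single-lane Nagel-Schreckenberg update sketched in Python. It only illustrates the technique, not the two-lane simulator developed later in this paper; the function name and the parameters `v_max` and `p_slow` are our own illustrative choices.

```python
import random

def ns_step(positions, speeds, road_len, v_max=5, p_slow=0.3, rng=random):
    """One Nagel-Schreckenberg update on a circular single-lane road.

    positions: cell indices of the cars, in cyclic driving order.
    speeds:    integer speeds (cells per time step), same order.
    """
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Empty cells up to the next car ahead (periodic boundary).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)        # 1. accelerate
        v = min(v, gap)                      # 2. brake to keep the gap
        if v > 0 and rng.random() < p_slow:  # 3. random slowdown
            v -= 1
        new_speeds.append(v)
    # 4. move every car forward by its new speed
    new_positions = [(x + v) % road_len for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds
```

With `p_slow > 0`, repeated application of `ns_step` reproduces the spontaneous jams characteristic of this class of models.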

In this section, our main task is to analyze the performance of the keep-right-except-to-pass rule. We first choose five statistics as our evaluation criteria to assess traffic rules. Then we use a state transition approach (similar to the Markov model) to analyze this rule in light traffic. Next, we extend the model to point out some important problems in heavy traffic. Afterwards, we employ the cellular automaton method to simulate the behavior of cars in the freeway. We calculate the values of our five evaluation criteria and give some interpretations. Finally, we analyze the sensitivity of our simulation model. This section is the basis of the entire paper; our further analysis, comparison and adjustment are to some extent dependent on the methods employed here.

Before starting, we shall mention the definitions of "light" and "heavy" traffic. In common usage, when the number of vehicles in the freeway is very large, we call the situation "heavy traffic". But what does "large" mean? This definition is rather vague and not appropriate for research use. In this paper, we state "light" and "heavy" in this way: if a car which changes lanes to overtake cannot find a place to return to its initial lane within a certain time range, the traffic is heavy; otherwise the traffic is light. In section 3, we will use the variable $\lambda$ (arrival rate) to illustrate this. The closer $\lambda$ is to 0.5, the heavier the traffic is.

# 3.1 Traffic Rule Evaluation

We must set up several basic criteria to analyze the performance of the keep-right-except-to-pass rule. These evaluation criteria include both static ones (such as safety) and comparative static ones (such as performance in extreme conditions). Also, the criteria should be easily calculated from available data. We choose the following five evaluation criteria:

- Traffic flow: The number of cars passing an observing point per unit of time. Here we set its unit as "vehicles per second". 

- Danger index: The average number of lane changes for each car in our assumed freeway. It measures safety. Overtaking is risky because the car behind may crash into the car ahead when changing lanes. The more frequently a car changes lanes, the more likely an accident may take place. In contrast, when a car travels in a fixed lane, it can adjust its speed instantly when encountering another car ahead, so a crash is unlikely to take place. For simplicity, we assume that accidents happen only when a car changes lanes, so we can use the number of lane changes per car as a measure of safety. Its unit is "times per vehicle".
- Average traffic speed: The average speed of all cars passing an observing point. Its unit is "kilometers per hour". It measures how fast cars travel under a specific traffic rule.
- USL effect: It stands for "Under-posted Speed Limit effect". Ceteris paribus, if the traffic flow in a situation where the speed limit is too low is $a$ , and the traffic flow in a situation where the speed limit is moderate is $b$ , then the USL effect equals the ratio of $a$ to $b$ . It measures the performance of a traffic rule in extreme conditions. If the traffic flow decreases too sharply when the speed limit is too low or too high, we do not regard the traffic rule as a good one.
- OSL effect: Accordingly, it stands for "Over-posted Speed Limit effect". Its definition and function are similar to those of the USL effect.

Then how do we evaluate a specific traffic rule? Since the traffic flow and the average traffic speed both measure the efficiency and capacity of freeways, the larger they are, the better the rule performs. For the danger index, we hope it to be as small as possible so as to decrease the probability of an accident. The USL and OSL effects measure the rule's role in extreme conditions. What we want is that the traffic flow of the freeway remains relatively high when the speed limit is too low or too high. 
So these two effects of a good traffic rule should also be large.

# 3.2 Performance In Light Traffic

In the light traffic condition, there are not many cars, so the distance between neighboring cars is large. Consequently, it is comparatively easy to change lanes, overtake and return without worrying about a collision or finding nowhere to return. In fact, the traffic state at time $t$ can be computed from the traffic state at time 1 through multiple iterations. This is similar to the Markov model, but the difference lies in that the transition probability matrix is not constant. Following this idea, we use a state transition approach to look for a steady state of traffic flow.

In addition to the general assumptions mentioned above, we consider here the following situation: all the cars enter the system through the right lane. According to speed, they are divided into 3 discrete states: State 1, State 2 and State 3. The speed of a car in State $i$ is different from that in State $j$ if $i \neq j$ . After overtaking the car ahead, the car at a higher speed will return to the right-most lane as soon as possible and travel at its previous speed $v_{i}$ , because drivers may prefer different speeds in the freeway. If a car chooses not to overtake, it will change to the same state as the preceding car, avoiding collision. For simplicity, we ignore the accelerating process of overtaking. In other words, the overtaking process finishes immediately.

# 3.2.1 Notations

- $v_{i}$ : The speed of cars in State $i$ , $i = 1,2,3$ . For $i > j$ , we have $v_{i} < v_{j}$ .
- $v_{4}$ : The speed of cars in the left lane. Whatever a car's free speed is, if it changes lanes to the left, it has to travel at the speed $v_{4}$ . We assume $v_{4} > \max(v_{1}, v_{2}, v_{3})$ for the convenience of overtaking.
- $\pi_k^{(t)}$ : The proportion of cars in State $k$ after $t$ transitions. 

- $p_{ij}$ : The probability that a car in State $i$ moves to State $j$ . According to subsection 3.3, $j \geq i$ .

# 3.2.2 A State Transition Approach

According to FRESIM (the freeway model within the CORSIM software), the probability of overtaking when a car catches up with another in the right lane positively correlates with their relative speed. For convenience, let $p_1$ represent the overtaking probability when State 1 meets State 2, that is, $p_1 = p(v_1 - v_2)$ , where $p > 0$ is a proportional coefficient. Similarly, $p_2 = p(v_1 - v_3)$ , $p_3 = p(v_2 - v_3)$ .

Figure 2 illustrates how an overtaking takes place. The car behind follows a probability distribution to decide whether to follow or overtake. If it chooses to follow, it must slow down to avoid a collision.

![](images/20426c041465709b845873696986d90e8c43904dda850df548d4b23d3a26df06.jpg)
Figure 2: The choice between following and overtaking.

Initially, the proportions of cars in States 1, 2 and 3 are $\pi_1^{(1)}$ , $\pi_2^{(1)}$ , $\pi_3^{(1)}$ , respectively. Using Bayes' theorem, we can get the probability of State 1 maintaining its state:

$$
p _ {1 1} = \frac {\pi_ {2} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {1} + \frac {\pi_ {3} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {2}
$$

Similarly, we can get all $p_{ij}$ . 
Putting them together, the transition probability matrix is

$$
\boldsymbol {T} ^ {(1)} = \left( \begin{array}{c c c} \frac {\pi_ {2} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {1} + \frac {\pi_ {3} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {2} & \frac {\pi_ {2} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} (1 - p _ {1}) & \frac {\pi_ {3} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} (1 - p _ {2}) \\ 0 & p _ {3} & 1 - p _ {3} \\ 0 & 0 & 1 \end{array} \right) \tag {1}
$$

Given the initial proportion vector $\pmb{\pi}^{(1)} = (\pi_1^{(1)},\pi_2^{(1)},\pi_3^{(1)})$ and the transition probability matrix $\pmb{T}^{(1)}$ , we can infer that if a car catches up with another in the system, the proportion vector will be updated as follows:

$$
\boldsymbol {\pi} ^ {(2)} = \left(\pi_ {1} ^ {(2)}, \pi_ {2} ^ {(2)}, \pi_ {3} ^ {(2)}\right) = \boldsymbol {\pi} ^ {(1)} \cdot \boldsymbol {T} ^ {(1)} \tag {2}
$$

Plugging (1) into (2), then extracting the first component of the vector, yields

$$
\pi_ {1} ^ {(2)} = \left[ \frac {\pi_ {2} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {1} + \frac {\pi_ {3} ^ {(1)}}{\pi_ {2} ^ {(1)} + \pi_ {3} ^ {(1)}} p _ {2} \right] \cdot \pi_ {1} ^ {(1)}
$$

The transition probability matrix will also be updated because the proportions of cars in the different states change after each transition. The iteration formula is $\pmb{\pi}^{(n + 1)} = \pmb{\pi}^{(n)}\cdot \pmb{T}^{(n)}$ . 
Repeating this process, we get

$$
\pi_ {1} ^ {(n + 1)} = \left[ \frac {\pi_ {2} ^ {(n)}}{\pi_ {2} ^ {(n)} + \pi_ {3} ^ {(n)}} p _ {1} + \frac {\pi_ {3} ^ {(n)}}{\pi_ {2} ^ {(n)} + \pi_ {3} ^ {(n)}} p _ {2} \right] \cdot \pi_ {1} ^ {(n)} \tag {3}
$$

Since a weighted average cannot exceed the larger of its terms, we have

$$
\frac {\pi_ {2} ^ {(n)}}{\pi_ {2} ^ {(n)} + \pi_ {3} ^ {(n)}} p _ {1} + \frac {\pi_ {3} ^ {(n)}}{\pi_ {2} ^ {(n)} + \pi_ {3} ^ {(n)}} p _ {2} \leq \max \left(p _ {1}, p _ {2}\right) < 1 \tag {4}
$$

So, plugging (4) into (3) and iterating $n$ times, we get

$$
0 \leq \pi_ {1} ^ {(n + 1)} \leq [ \max (p _ {1}, p _ {2}) ] ^ {n} \cdot \pi_ {1} ^ {(1)}
$$

Taking the limit, then using the squeeze theorem, yields

$$
\lim _ {n \to \infty} \pi_ {1} ^ {(n + 1)} = 0
$$

That means that after $N$ transitions, the proportion of State 1 cars approaches zero as $N$ approaches infinity. Thus, $\pmb{\pi}^{(n)} = (0,\pi_2^{(n)},\pi_3^{(n)})$ , and the transition probability matrix between State 2 and State 3 is

$$
\boldsymbol {T} _ {\mathbf {2 3}} ^ {N} = \left( \begin{array}{c c} p _ {3} & 1 - p _ {3} \\ 0 & 1 \end{array} \right)
$$

From $T_{23}^{N}$ , we can see that State 2 transfers to State 3 with probability $1 - p_3$ , while State 3 can only maintain itself. Thus State 3 is an absorbing state. In other words, after $N + M$ transitions, all cars will be in State 3 (the lowest speed) as $M$ approaches infinity, that is

$$
\lim _ {M, N \rightarrow \infty} \boldsymbol {\pi} ^ {(N + M)} = (0, 0, 1)
$$

# 3.2.3 A Numerical Test And Conclusion

We use a numerical method to simulate the iterating and matrix-updating processes described above, so as to test our model results.

We might as well set $p_1 = 0.35$ , $p_2 = 0.45$ , $p_3 = 0.29$ . 
From our notations, $\pi_k^{(t)}$ is the proportion of cars in State $k$ after $t$ transitions, and we set the initial values as follows: $\pi_1^{(1)} = 0.2$ , $\pi_2^{(1)} = 0.5$ , $\pi_3^{(1)} = 0.3$ . Then we use Matlab to obtain the following simulation figure:

![](images/b5fa484bd5010f29e0d2746e7d817c24248cda38f65ec651874767dd0ce35f8c.jpg)
Figure 3: Proportions of cars in different states after several transitions.

From Figure 3 we can clearly see that the proportion vector equals $(0,0,1)$ after 6 transitions. This simulation result is consistent with our model. Later, we will employ a simulation method to test this result again in section 4.

According to the analysis above, we can confidently conclude that if all the cars in the freeway obey the keep-right-except-to-pass rule, they will all travel in the right lane at a relatively low speed in the long term, while the left lane is comparatively empty. So the traffic flow, one of our evaluation criteria, under the "keep right" rule is small. In order to make better use of our freeways, a more flexible rule is needed which enables drivers to drive in the left lane under certain conditions.

This conclusion only applies to a "light traffic" situation, where cars can overtake and then return to the right lane easily. When the traffic is heavy, it is a different story. Unfortunately, because all cars will travel at the same low speed, traffic in the right lane is doomed to become heavy in the long run. This means that after a certain time point, a car changing lanes to the left for overtaking will find nowhere to return! We will look into the "keep right" rule's performance in heavy traffic in the next subsection.

# 3.3 Performance In Heavy Traffic

The model we have developed for light traffic cannot be applied to the situation where traffic is heavy. 
Since a car which changes lanes for overtaking can always find an opportunity to go back to the right lane in light traffic, a transition probability matrix can be calculated to describe the whole system. In heavy traffic, however, a car overtaking another cannot always find a place to return to the right lane, and therefore has to stay in the left lane for a long time. As a result, cars which have a high speed and change lanes for overtaking will "pile up" in the left lane. That is to say, in the long run, cars in the right lane are all in State 3, traveling at the speed $v_{3}$ , while cars in the left lane are all traveling at the higher speed $v_{4}$ . Consequently, there will be no state transitions in heavy traffic, and we cannot get the transition probability matrix. This is the first reason why the state transition approach cannot be applied to heavy traffic.

Another important factor we should take into consideration is the safe distance. The safe distance is the minimum following distance between two neighboring cars needed to avoid collision. According to Wikipedia [8] and the New York State Department of Motor Vehicles [4], there exists a rule of thumb called the "two-second rule" for this safe distance, which states that a driver should ideally stay at least two seconds behind any vehicle that is directly in front of the driver's vehicle. So if the speed of a car is $v_{3}$ , the driver should make sure that the distance between his car and the one ahead is no less than $2v_{3}$ .

In light traffic, the safe distance is ignored because there are few cars in the freeway and the distance between neighboring cars is large. In heavy traffic, however, we should consider the safe distance because cars in the right lane travel in a long queue. So it is not appropriate to reuse the analysis method used for light traffic. Figure 4 shows how the heavy traffic system works. 
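The two-second rule translates into a simple speed-dependent gap. A small Python sketch (the helper name is illustrative, and the 5 m cell length anticipates the value assumed in the simulation model of section 4):

```python
import math

def safe_gap(speed_mps, seconds=2.0, cell_len=5.0):
    """Minimum following gap under the two-second rule.

    Returns the gap in meters and in whole road cells (rounded up),
    assuming cells of cell_len meters.
    """
    meters = seconds * speed_mps          # two seconds of travel
    return meters, math.ceil(meters / cell_len)

# e.g. safe_gap(25.0) -> (50.0, 10): a car at 25 m/s needs 50 m, i.e. 10 cells
```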

From the analysis above, we can see that there also exists a steady state in the heavy traffic situation. In the steady state, cars in the right lane all travel at the speed $v_{3}$ while cars in the left lane all travel at the speed $v_{4}$ . This violates the "keep right" rule in that the left lane is used for driving. Moreover, there will be no transitions between these two states.

![](images/a894dc1b99254322e798abade9aa8de93f6362bac089e08bd356585ca8b9f425.jpg)
Figure 4: Road condition when traffic is heavy.

Finally, we can draw two conclusions about the "keep right" rule in heavy traffic. First, the rule will be broken even if drivers try to obey it. Second, the danger index is the lowest when traffic is heavy because there is no overtaking.

# 4 A Simulation Model: Cellular Automaton

We model the movement of each car separately and use a discrete-time simulation to simulate the driving process. Given a set of traffic rules (for example, the keep-right-except-to-pass rule), the simulator produces a prediction of the movement following the rules.

# 4.1 Simulation Assumptions

- In the simulator, time is discrete, and we set the time step to be one second.
- At the entrance of the freeway, the time intervals between every two neighboring cars conform to the exponential distribution; that is, the arriving cars emerge at the beginning of the right lane in a Poisson manner.
- The speed of each car is discrete and conforms to a uniform distribution; furthermore, the speed is stable during the movement except in the decelerating process.
- We assume that the safe distance for overtaking is proportional to the traffic speed. According to Wikipedia [8] and the New York State Department of Motor Vehicles [4], there exists a "two-second rule" for this safe distance, which states that a driver should ideally stay at least two seconds behind any vehicle that is directly in front of the driver's vehicle. 
- Drivers can judge their own speed and the distance to surrounding cars accurately, so they can estimate the safe distance between their car and other cars.

# 4.2 Notations

- $T$ : Total simulated time.
- $t$ : Present time in the simulator.
- $S$ : Length of the simulated freeway, in cells.
- $\lambda$ : Arrival rate, the number of cars arriving per second.
- $L$ : Length of a cell, equal to the length of a car.
- $cells(i,j)$ : The value of each cell, equal to the traffic speed there; $i$ ranges from 1 to 2, and $j$ from 1 to $S$.
- $D$ : Safe distance, the safe distance for overtaking.
- $d$ : Safe-distance coefficient, which determines the relation between the traffic speed $cells(i,j)$ and the safe distance $D$. Following the two-second rule discussed in the assumptions, we set $d$ to 2.
- $p_1$ : Low-speed overtaking factor, the probability of overtaking when two vehicles have a slight speed difference. We assume it to be 0.3.
- $p_2$ : High-speed overtaking factor, the probability of overtaking when two vehicles have a large speed difference. We set it to be 0.7.

# 4.3 Algorithm Descriptions

Since we simulate the traffic conditions on a freeway, we first describe the freeway and cars in our cellular automaton model. First, the freeway is a rectangular grid with two rows and $S$ columns; we consider only the two lanes of the forward direction (see Figure 5). The first row of the grid is the left lane and the second row is the right lane. Each row has $S$ square cells with side length $L$. We assume that $L$ equals 5 meters.

![](images/90329f2a2b12dc12d91710705b39ef3ebcec210c5cff1a26ca2a549447b902c1.jpg)
Figure 5: Description of the simulated freeway.

Second, in the simulation, we use a $2 \times S$ matrix to represent the freeway. Each cell is an element of the matrix, and each element has a value $cells(i,j)$.
The initial value of $cells(i,j)$ is zero, but when there is a car in a cell, the value of $cells(i,j)$ changes to a positive integer representing the speed of the car. The matrix is

$$
\left( \begin{array}{cccc} cells(1,1) & cells(1,2) & \dots & cells(1,S) \\ cells(2,1) & cells(2,2) & \dots & cells(2,S) \end{array} \right)
$$

Third, arriving cars appear at the beginning of the right lane (the entrance of cars). The speed of each arriving car is discrete, follows a uniform distribution, and is a multiple of $L$ (the length of a cell); for example, the speed can be $15\mathrm{m/s}$, $20\mathrm{m/s}$, $25\mathrm{m/s}$, $30\mathrm{m/s}$, and so on.

Finally, we set the total simulated time to 1000 seconds and the length of the simulated freeway to 1000 cells, that is, 5000 meters.

Now we discuss the steps of overtaking. According to the keep-right-except-to-pass rule, the overtaking process has two stages, the "Go" process and the "Return" process. In the "Go" process, the car behind moves from the right lane to the left lane and passes the car ahead; in the "Return" process, the car returns to the right lane, finishing the whole overtaking process. For clarity, we call the car in front of another car "the car ahead", and the car behind another car "the car behind". We describe each process in several steps.

# The "Go" process

Step 1: For a car in the right lane, check the value of each cell within the safe distance $D$ ahead of the car. If there are cars within the safe distance, go to Step 2;

Step 2: When the speed of the car behind is larger than the speed of the car ahead, check the value of each cell within the safe distance ahead and behind in the left lane, as shown in Figure 6, so that the driver can be sure it is safe to overtake.
If there are positive integers within the safe distance, the car behind decelerates to the same speed as the car ahead; otherwise, the car behind checks the speed difference between the two cars. If the speed difference is lower than $L$ m/s, go to Step 3; otherwise, go to Step 4;

Step 3: The car behind overtakes with probability $p_1$, changing from the right lane to the left lane;

Step 4: The car behind overtakes with probability $p_2$, changing from the right lane to the left lane; obviously, $p_2$ is larger than $p_1$.

# The "Return" process

Step 1: A car in the left lane first checks the value of each cell within the safe distance ahead and behind in the right lane. If there is no car, the car returns to the right lane and the "Return" process is finished (see Figure 7); otherwise, go to Step 2;
Step 2: The car checks the value of each cell within the safe distance ahead of it. If there are cars within the safe distance traveling at a lower speed, the car behind decelerates to the same speed as the car ahead.

Now we use flowcharts to explain our algorithm more clearly. Figure 8 shows the general process of the simulation. Considering the complexity of the

![](images/f6e2104017dbb64ea35710b6db25047ca59f09d0e04ba209db5c64608ed5044c.jpg)
Figure 6: The "Go" process.

![](images/85a2a38af5be620f91449653479a6a72991d2e6d56a98db64db6ccc253ca8df7.jpg)
Figure 7: The "Return" process.

"Go" process of overtaking, we use a sub-flowchart to further explain it, as shown in Figure 9.

# 4.4 Calculation And Results

Besides the algorithm descriptions, we introduce the methods used to calculate the five evaluation criteria in the simulation. To calculate the arrival rate, the simulator counts the cars arriving at the entrance. As for the danger index, the simulator counts the total number of lane changes, i.e., how many times cars change from one lane to another.
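The "Go" process described above can be sketched as a single decision function; the function and argument names, the boolean `left_clear` summary of the cell-by-cell check, and the injectable random source are our own simplifications:

```python
import random

D_COEFF = 2                # safe-distance coefficient d (two-second rule)
P_LOW, P_HIGH = 0.3, 0.7   # overtaking factors p1 and p2
SPEED_GAP = 5              # threshold (L m/s) between slight and large speed difference

def go_decision(v_behind, v_ahead, gap_ahead, left_clear, rng=random.random):
    """Decide the action of the car behind: 'keep', 'decelerate' or 'overtake'.

    gap_ahead  - distance (m) to the car ahead in the right lane
    left_clear - True if no car occupies the left lane within the safe
                 distance ahead and behind (Step 2's cell check)
    """
    safe = D_COEFF * v_behind
    if gap_ahead >= safe:          # Step 1: nothing within the safe distance
        return 'keep'
    if v_behind <= v_ahead:        # not faster: the gap is not shrinking
        return 'keep'
    if not left_clear:             # occupied cells in the left lane
        return 'decelerate'
    p = P_LOW if v_behind - v_ahead < SPEED_GAP else P_HIGH  # Steps 3 and 4
    return 'overtake' if rng() < p else 'keep'
```

For instance, `go_decision(20, 15, 10, False)` returns `'decelerate'`: the gap of 10 m is within the 40 m safe distance and the left lane is blocked.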
In addition, we set an observation point to calculate the average traffic speed and traffic flow. At the observation point, we count the cars passing by and add up their speeds. When the first car reaches the observation point, we start the observation. Then we divide the number of cars by the observation time to get the traffic flow; similarly, we divide the total speed by the observation time to get the average traffic speed.

![](images/6cfa8d97f100689aac6d379aa7eefb18a022fff24564ec7215f13fe49746c856.jpg)
Figure 8: The general process of the simulation.

![](images/c932297dd7ef2f7172eb4043f40394a2e32264a470b104781bdd1ff4150a0094.jpg)
Figure 9: Sub-flowchart of the "Go" process.

So our simulator can calculate the five evaluation criteria as long as the arrival rate is provided. For example, the results for $\lambda = 0.25$ and $\lambda = 0.4$ (vehicles per second) are listed in Table 1.

Table 1: Simulation results.
| Arrival rate $\lambda$ | 0.25 | 0.4 |
|---|---|---|
| Danger index | 1.2848 | 0.8681 |
| Average traffic speed | 3.3818 | 3.3277 |
| Traffic flow | 0.8459 | 1.2690 |
| OSL effect | 1.7781 | 1.7409 |
| USL effect | 0.9279 | 0.9527 |
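The observation-point bookkeeping described above can be sketched as follows (function and argument names are ours; as in the text, both quantities are divided by the observation time, which starts when the first car passes):

```python
def observe(pass_times, pass_speeds):
    """Traffic flow (cars/s) and average traffic speed at the observation point.

    pass_times  - times (s) at which successive cars pass the point
    pass_speeds - their speeds at those moments
    """
    window = pass_times[-1] - pass_times[0]   # observation starts at the first car
    flow = len(pass_times) / window
    ats = sum(pass_speeds) / window           # total speed over observation time
    return flow, ats
```

With cars passing at $t = 0, 2, 4$ s at 3 cells/s each, this gives a flow of 0.75 and an average traffic speed of 2.25.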
To analyze the relation between the five evaluation criteria and the arrival rate, we plot the criteria separately. Figures 10 to 14 show the results of running the simulation program. As mentioned before, a greater arrival rate indicates a greater volume of traffic, so the plots indicate the role of the old rule under different traffic conditions.

![](images/350a49477af23083cdda1cbb51f9868feb7ad13cfb081d02712482e770c8fe76.jpg)
Figure 10: Danger index vs. $\lambda$ .

![](images/2cdea77feac49e1b7f7b0be5ce8e6ce38cb59579e53a20d5cc9a8c4677518571.jpg)
Figure 11: Average traffic speed vs. $\lambda$ .

![](images/ad5f5e7d0af068d0fc3b7ec11e57ca5d3991870e75c547f49a7b17067c8d4ebe.jpg)
Figure 12: Traffic flow vs. $\lambda$ .

![](images/d8ef4aee99de3a571f698b1ef048c96f78f0aaa5dc76d8f35e87c6b782ce3a47.jpg)
Figure 13: OSL effect vs. $\lambda$ .

![](images/c67b78a3921ad6151518bc1de3cb9531418ffe70e761491c18ae26680a47c6a9.jpg)
Figure 14: USL effect vs. $\lambda$ .

As can be seen in Figure 10, the danger index first increases with the arrival rate, reaching a peak value at $\lambda = 0.15$. After the peak, the danger index drops and the traffic condition appears to be safer. Indeed, in the real world, driving is safer when the traffic is very light or very heavy compared to a normal volume: for one thing, there is no need to overtake when the freeway has few cars; for another, it is difficult to overtake when the freeway is full of cars.

As shown in Figure 11, one of the criteria used to evaluate the traffic condition, the average traffic speed, is approximately inversely proportional to the arrival rate. Since a greater arrival rate indicates a greater volume of traffic, the more cars there are, the slower they travel.

Figure 12 shows the relation between the traffic flow and the arrival rate. As the arrival rate increases, the traffic flow increases, but at a decreasing rate.
Therefore, the traffic flow does not increase without bound and saturates when the arrival rate is very large; this corresponds to heavy traffic or even a traffic jam on the freeway. Because the safe distance is vague and uncertain in that condition, our plot does not show it.

Figures 13 and 14 show the effects of under- and over-posted speed limits on the rule. Generally speaking, these two factors decrease the traffic flow in light traffic and increase it in heavy traffic.

# 4.5 Small Conclusion

We use our simulator to test the models established in section 3. First, for the light-traffic condition, we proposed a state transition approach to analyze the problem and concluded that if all cars on the freeway obey the keep-right-except-to-pass rule, in the long term they will all travel in the right lane at a relatively low speed while the left lane remains comparatively empty. In the simulation, we set the arrival rate $\lambda$ to 0.25, and Figure 15 clearly shows many more cars in the right lane than in the left lane, so the theoretical model is verified.

Second, for the heavy-traffic condition, we concluded that cars in the right lane

![](images/24ca8467c83eaae5279bc6f1e1c2ce72d47a9f4ecaebf5ce2f7543c06dec0fc2.jpg)
Figure 15: A screenshot of the simulation in light traffic; green lines represent cars.

all travel at one speed while cars in the left lane all travel at another, and moreover there will be no transitions between these two states. To verify this conclusion, we set the arrival rate $\lambda$ to 0.5; the result is shown in Figure 16. From the picture, we notice that many cars in the left lane have no chance to return to the previous lane, so the "keep right" rule is broken even if drivers obey it.
![](images/97d8a23ec861621fa1fa9472b4fb1b5a92d9c16e6b98b4a0b8a06f5bcfa5d643.jpg)
Figure 16: A screenshot of the simulation in heavy traffic; green lines represent cars.

# 4.6 Sensitivity Analysis

We set several parameters according to the keep-right-except-to-pass rule in order to predict the movement of cars in the simulation. However, we lack official data for these parameters, so we should carefully examine the simulation model's sensitivity to changes in our settings. We vary the parameters one at a time; the results are shown in the following tables. With these parameter changes, the evaluation criteria of our model change only slightly, so our simulation model is robust under reasonable conditions.

Table 2: Sensitivity analysis of the parameter $p_1$.
(entries are percent changes relative to the original value $p_1 = 0.3$)

| $p_1$ | DI | ATS | TF | OSL | USL |
|---|---|---|---|---|---|
| 0.2 | -5.42% | -0.68% | -0.75% | 1.34% | 0.46% |
| 0.25 | 2.99% | 0.60% | 0.78% | 0.12% | -1.63% |
| 0.35 | 7.56% | -0.13% | -0.40% | -0.72% | 0.66% |
| 0.4 | 2.05% | 0.75% | 1.13% | -1.06% | 0.30% |
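Each table entry is a relative change against the baseline run; a minimal sketch (the perturbed value 1.2152 is a hypothetical rerun result chosen only to reproduce the -5.42% danger-index entry at $p_1 = 0.2$):

```python
def pct_change(perturbed, baseline):
    """Relative change (%) of a criterion when one parameter is perturbed."""
    return (perturbed - baseline) / baseline * 100.0

# Hypothetical rerun: danger index 1.2152 at p1 = 0.2 vs baseline 1.2848.
di_change = pct_change(1.2152, 1.2848)   # about -5.42%
```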
+ +Table 3: Sensitivity analysis of the parameter ${p}_{2}$ . + +
(entries are percent changes relative to the original value $p_2 = 0.7$)

| $p_2$ | DI | ATS | TF | OSL | USL |
|---|---|---|---|---|---|
| 0.6 | -17.29% | -1.65% | -1.38% | -2.47% | 0.73% |
| 0.65 | -9.69% | -0.92% | 0.68% | -3.97% | -1.46% |
| 0.75 | 11.6% | 1.64% | 2.56% | -1.88% | -3.87% |
| 0.8 | 20.88% | 2.21% | 4.29% | -3.13% | -3.54% |
+ +Table 4: Sensitivity analysis of the parameter $d$ . + +
(entries are percent changes relative to the original value $d = 2$)

| $d$ | DI | ATS | TF | OSL | USL |
|---|---|---|---|---|---|
| 1.6 | -17.39% | -0.25% | -0.75% | 1.03% | -0.59% |
| 1.8 | 11.59% | -0.22% | 1.30% | -2.43% | -1.70% |
| 2.2 | -7.97% | -0.28% | 1.35% | -2.96% | -0.56% |
| 2.4 | -14.51% | 0.85% | 1.15% | -2.65% | -1.28% |
# 5 A New Traffic Rule

As we learned from the above analysis, if all cars on the freeway obey the keep-right-except-to-pass rule, they mostly travel in the right lane, so the traffic flow is small and the freeway is inefficient. Moreover, when the traffic is heavy, many cars in the left lane have no chance to return to the previous lane, so the "keep right" rule is broken even if drivers obey it. First, we introduce three new rules to make the freeway more efficient and safer. Second, we use the simulator to calculate the five evaluation criteria. Then we create a radar chart to compare the evaluation criteria of each rule (including the old rule). Finally, with the help of the AHP method, we find the best rule among the new rules.

# 5.1 Descriptions of The New Rules

# New Rule 1

Cars with different speed levels travel in different lanes. We divide the acceptable speed range into several intervals, and cars are required to drive in the lane corresponding to their travel speed: left lanes have higher speed requirements and right lanes lower ones, so the left-most lane has the highest speed requirement and the right-most lane the lowest. A car can pass another vehicle by moving one lane to the right (the slower lane), passing, and returning to its former travel lane.

For a four-lane freeway (two lanes for each direction), the rule requires cars with higher speed to drive in the left lane and cars with lower speed to drive in the right lane. A car in the left lane with higher speed can overtake another vehicle by occupying

the right lane for a while and then returning to its former travel lane.
For example, when the speed limit is $100\mathrm{km/h}$, the speed requirement for the left lane is $50 - 100\mathrm{km/h}$ while that for the right lane is $0 - 50\mathrm{km/h}$, and a faster car in the left lane can overtake another car by occupying the right lane for a while.

# New Rule 2

Cars with different speed levels travel in different lanes. Overtaking is forbidden under this rule, and all cars travel along their own lane. We divide the acceptable speed range into several intervals, and cars are required to drive in the lane corresponding to their travel speed.

For a four-lane freeway (two lanes for each direction), the rule requires cars with higher speed to drive in the left lane and cars with lower speed to drive in the right lane. For example, if the speed limit is $100\mathrm{km/h}$, the speed requirement for the left lane is $50 - 100\mathrm{km/h}$ while that for the right lane is $0 - 50\mathrm{km/h}$.

# New Rule 3

A rule that allows drivers to drive in any lane, with no difference between lanes. Cars travel in their own lane unless they are passing another vehicle, in which case they move one lane over, pass, and return to their former travel lane.

# 5.2 Simulation Results

At first glance, the three new rules above seem able to increase the traffic flow and average traffic speed. But is any of them really superior to the old keep-right-except-to-pass rule? Again, we use a cellular automaton model, similar to the one in section 4, to simulate the traffic conditions under each of the three new rules. To avoid unnecessary repetition, we omit the algorithms here.

Of course, the value of each evaluation criterion depends on the value of $\lambda$ (the arrival rate). Since $\lambda$ close to 0 means light traffic and $\lambda = 0.5$ means heavy traffic, we set $\lambda = 0.25$ for the calculation.
In this way, we can compare the rules' performance in moderate traffic. By running programs in MATLAB, we calculate the five evaluation criteria for each traffic rule and show the results in Table 5.

Table 5: Evaluation of the new rules.
| | the old rule | New Rule 1 | New Rule 2 | New Rule 3 |
|---|---|---|---|---|
| Traffic Flow | 0.8459 | 0.9533 | 0.9791 | 0.8035 |
| Danger Index | 1.2848 | 0.1661 | 0 | 0.2773 |
| Average Traffic Speed | 3.3818 | 3.7219 | 3.7924 | 3.2021 |
| USL Effect | 0.9279 | 0.8160 | 0.9156 | 0.9611 |
| OSL Effect | 1.7781 | 1.6198 | 1.7002 | 1.8130 |
As we have pointed out before, for the danger index, the smaller it is, the safer and better the traffic rule; for the other four evaluation criteria, the larger, the better.

Interestingly, we find that the traffic flow under each rule is much larger than the arrival rate $\lambda$. At first glance this seems unreasonable, because the cars at the observation point all come from the starting point, so the traffic flow should be smaller than $\lambda$. On closer inspection, however, the phenomenon is not strange. Because the number of cars passing the observation point is counted only after the first car arrives, and it takes some time for the first car to travel from the starting point to the observation point, the observation time is less than the total time. As a consequence, the traffic flow becomes larger than the arrival rate. (The denominator of the traffic flow is the observation time, while the denominator of the arrival rate is the total time.)

# 5.3 Comparison

Although we have the exact values of the five evaluation criteria for each new rule, it is hard to compare the rules directly, because a rule may perform well in one aspect but poorly in another, and the units of the criteria differ. Another complication is the danger index: it relates negatively to the performance of a rule, while the other criteria relate positively. In order to compare the new traffic rules effectively and choose the best one, we normalize all the criteria, converting the values to an interval between 0 and 1, from worst to best. We call these normalized criteria "evaluation indexes" (EI).

The definitions of these indexes are as follows. Let $S_{ij}$ denote the simulation result of rule $i$ $(1 \leq i \leq 4)$ under evaluation criterion $j$ $(1 \leq j \leq 5)$, and let $EI_{ij}$ denote the evaluation index of rule $i$ under criterion $j$.
When $j = 1, 3, 4, 5$, the evaluation indexes are defined as

$$
EI_{ij} = \frac{S_{ij} - \min_{1 \leq k \leq 4} S_{kj}}{\max_{1 \leq k \leq 4} S_{kj} - \min_{1 \leq k \leq 4} S_{kj}}
$$

These indexes correspond to the traffic flow, the average traffic speed, the USL effect, and the OSL effect; for simplicity, we call them TFI, ATSI, USLI, and OSLI respectively. For the danger index (when $j = 2$), we call its EI the "safety index" (SI); the larger SI is, the safer the rule is:

$$
SI = EI_{i2} = \frac{1}{1 + S_{i2}}
$$

For all $1 \leq i \leq 4$, $1 \leq j \leq 5$, we have $0 \leq EI_{ij} \leq 1$.

The values of all evaluation indexes are listed below in Table 6.

We use the radar chart in Figure 17 to display our results.

The traffic rule whose polygon lies outermost is the best. From this radar chart we can see that New Rule 2 may be the best, because it performs best in three aspects out of five (TFI, SI, and ATSI). However, we need more evidence.

To further evaluate the performance of each new traffic rule, we integrate the five evaluation indexes into a comprehensive one. We realize that some of the

Table 6: Values of the five normalized evaluation indexes.
| | the old rule | New Rule 1 | New Rule 2 | New Rule 3 |
|---|---|---|---|---|
| TFI | 0.2415 | 0.8531 | 1.0000 | 0 |
| SI | 0.4377 | 0.8576 | 1.0000 | 0.7829 |
| ATSI | 0.3044 | 0.8806 | 1.0000 | 0 |
| USLI | 0.7712 | 0 | 0.6864 | 1.0000 |
| OSLI | 0.8194 | 0 | 0.4161 | 1.0000 |
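Table 6 can be reproduced from the Table 5 results with the min-max formula and the safety-index transform above (the dictionary keys are our own shorthand):

```python
S = {   # simulation results from Table 5: [old rule, New Rule 1, New Rule 2, New Rule 3]
    "TF":  [0.8459, 0.9533, 0.9791, 0.8035],
    "DI":  [1.2848, 0.1661, 0.0,    0.2773],
    "ATS": [3.3818, 3.7219, 3.7924, 3.2021],
    "USL": [0.9279, 0.8160, 0.9156, 0.9611],
    "OSL": [1.7781, 1.6198, 1.7002, 1.8130],
}

def min_max(xs):
    """Normalize to [0, 1], worst to best."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

EI = {k: min_max(v) for k, v in S.items() if k != "DI"}
EI["SI"] = [1.0 / (1.0 + di) for di in S["DI"]]   # safety index from the danger index
```

For example, `EI["TF"]` rounds to (0.2415, 0.8531, 1.0000, 0), matching the TFI row of Table 6.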
![](images/50fde487a39f02c8385b7dd3a93ab7a14cf95738ea40027983f97398d7052936.jpg)
Figure 17: A view of the evaluation indexes for each traffic rule.

evaluation indexes are more important than others, so the comprehensive index is

$$
EI_{C} = \sum_{k = 1}^{5} \omega_{k} \cdot EI_{k} \tag{5}
$$

where $\omega_{k}$ is the weight of each single evaluation index.

We use the Analytical Hierarchy Process (AHP) [6] to determine the weights. AHP is a framework for solving multi-criterion decision problems. The method relies on the preferences of the decision maker, and the extent of importance in our mind can be quantified to evaluate all the alternatives. Saaty's 1-9 scale for AHP preference [7] is presented in the following table.

According to Table 7, we can build a $5 \times 5$ reciprocal matrix to measure preference, as follows (each element $a_{ij}$ in the matrix gives the preference

Table 7: The meaning of Saaty's scale for AHP preference.
| Scale | Meaning |
|---|---|
| 1 | Requirements $i$ and $j$ have equal value. |
| 3 | Requirement $i$ has a slightly higher value than $j$. |
| 5 | Requirement $i$ has a strongly higher value than $j$. |
| 7 | Requirement $i$ has a very strongly higher value than $j$. |
| 9 | Requirement $i$ has an absolutely higher value than $j$. |
| 2, 4, 6, 8 | Intermediate scales between two adjacent judgements. |
| Reciprocals | Requirement $i$ has a lower value than $j$. |
scale between requirements $i$ and $j$):

$$
\begin{array}{c|ccccc}
 & TFI & SI & ATSI & USLI & OSLI \\ \hline
TFI & 1 & 1 & 3 & 3 & 4 \\
SI & 1 & 1 & 2 & 3 & 3 \\
ATSI & 1/3 & 1/2 & 1 & 2 & 3 \\
USLI & 1/3 & 1/3 & 1/2 & 1 & 1 \\
OSLI & 1/4 & 1/3 & 1/3 & 1 & 1
\end{array}
$$

Since the traffic flow index and the safety index are the most important for freeways, we give them special preference. After obtaining the matrix, we should test its consistency. The first step is to compute the consistency index, for which Saaty gave the following formula:

$$
CI = \frac{\lambda_{\max} - n}{n - 1}
$$

where $\lambda_{\max}$ is the principal eigenvalue and $n$ is the number of criteria. After calculation, the $CI$ of our preference matrix is 0.0223.

For the second step, we use the consistency ratio to measure the level of inconsistency. Saaty defined the consistency ratio as $CR = CI / RI$, where $RI$ is the average value of $CI$ for random matrices and $CI$ is the consistency index. Since $n = 5$, $RI = 1.12$ (we take the value from the table in [1]). The matrix is acceptable only when $CR < 0.1$. Testing our matrix, we get $CR = 0.0199 < 0.1$, so our matrix is acceptable.

Now we can input the reciprocal matrix into MATLAB to compute the weights we need; concrete computing details can be seen in [6]. Table 8 shows the values of $\omega_{k}$.

Table 8: Values of weights.
| Single Evaluation Index | TFI | SI | ATSI | USLI | OSLI |
|---|---|---|---|---|---|
| Weight | 0.3502 | 0.3001 | 0.1723 | 0.0944 | 0.0830 |
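The weights in Table 8 can be recovered from the preference matrix with plain power iteration, a sketch in place of the MATLAB computation (pure Python, no toolboxes assumed):

```python
A = [   # the 5x5 reciprocal preference matrix (TFI, SI, ATSI, USLI, OSLI)
    [1,     1,     3,     3, 4],
    [1,     1,     2,     3, 3],
    [1 / 3, 1 / 2, 1,     2, 3],
    [1 / 3, 1 / 3, 1 / 2, 1, 1],
    [1 / 4, 1 / 3, 1 / 3, 1, 1],
]
n = len(A)

# Power iteration for the principal eigenvector, normalized to sum to 1.
w = [1.0] * n
for _ in range(100):
    w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    total = sum(w)
    w = [x / total for x in w]

# Principal eigenvalue estimate and Saaty's consistency measures.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)
CR = CI / 1.12          # RI = 1.12 for n = 5
```

Running this yields weights close to Table 8, with $CI \approx 0.022$ and $CR \approx 0.02 < 0.1$, consistent with the values reported above.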
Plugging the weights (Table 8) and the values of the evaluation indexes (Table 6) for each traffic rule into Equation (5), we finally get the comprehensive evaluation index; see Table 9.

Table 9: Values of the comprehensive evaluation index.
| Traffic Rule | Old Rule | New Rule 1 | New Rule 2 | New Rule 3 |
|---|---|---|---|---|
| $EI_{C}$ | 0.4092 | 0.7078 | 0.9219 | 0.4123 |
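The comprehensive index in Table 9 is just the weighted sum of Equation (5) applied to the Table 6 rows and Table 8 weights; a minimal sketch:

```python
weights = {"TFI": 0.3502, "SI": 0.3001, "ATSI": 0.1723, "USLI": 0.0944, "OSLI": 0.0830}
EI = {   # normalized indexes from Table 6: [old rule, New Rule 1, New Rule 2, New Rule 3]
    "TFI":  [0.2415, 0.8531, 1.0, 0.0],
    "SI":   [0.4377, 0.8576, 1.0, 0.7829],
    "ATSI": [0.3044, 0.8806, 1.0, 0.0],
    "USLI": [0.7712, 0.0, 0.6864, 1.0],
    "OSLI": [0.8194, 0.0, 0.4161, 1.0],
}

# Comprehensive evaluation index EI_C for each of the four rules.
EI_C = [sum(weights[k] * EI[k][r] for k in weights) for r in range(4)]
```

Up to rounding of the inputs, this reproduces Table 9, with New Rule 2 the clear winner.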
# 5.4 Small Conclusion

The comparison results clearly show that the keep-right-except-to-pass rule is far from optimal. All the new rules are better than it on the whole, which means that a new traffic rule should be developed and applied. What's more, to our surprise, New Rule 2 performs best: it is superior to the other rules not only in three of our five individual evaluation indexes (larger traffic flow, higher safety index, higher average traffic speed), but also in the comprehensive evaluation index.

This result means that the relatively best performance is achieved when overtaking is prohibited on the freeway. We can see the tradeoff between individual and collective interest here. Drivers who overtake to seek a higher speed indeed realize their individual interest, because they may get to work or return home faster. But at the same time, the traffic flow, the average speed of all vehicles, and safety may decrease; many resources are wasted, and the collective interest is not realized. If we want to maximize the utility of the whole society, such as maximizing the traffic flow, we must implement a rule prohibiting overtaking, at the cost of individual interest.

We can also see the tradeoff among the five evaluation criteria. A traffic rule that is highly efficient under normal conditions may not perform well under extreme conditions; New Rule 2 is of this kind. When the speed limit is moderate, traffic under this rule has the largest flow and average speed and is also very safe. But unfortunately, when the speed limit is too low or too high, this rule cannot maintain its good performance. New Rule 3 is also of this kind, but faces the opposite dilemma. For the old keep-right-except-to-pass rule, there exists a tradeoff between speed and safety: if a driver chooses to overtake to seek a higher speed, he or she must bear the risk of a collision.
In the simulation process of this section, we arbitrarily set the arrival rate to $\lambda = 0.3$, so a sensitivity analysis for this parameter is necessary. Excluding extreme conditions in which the traffic is too light or too heavy, we recalculate the five evaluation criteria and indexes of each traffic rule for $\lambda = 0.2, 0.4$. The results show that New Rule 2 is still the best traffic rule.

# 6 Further Topics

Our models and simulations above are based on two assumptions: first, cars travel on the right side; second, the overtaking rule relies on human judgment. In this section, we discuss these assumptions separately and test whether our methods are applicable to systems without them.

In the first part of this section, by comparing left-hand traffic and right-hand traffic in terms of safety, we apply the best solution given in section 5 to left-hand traffic with a simple change of orientation. Then we define an intelligent driving system that takes control of the overtaking behavior and evaluate its performance by modeling and simulating the traffic system.

# 6.1 Applicability In Left-Hand Traffic Countries

The analysis above is based on right-hand traffic (RHT), in which cars keep to the right side of the road. In this section, we take left-hand traffic (LHT) into consideration and try to apply our solution to LHT countries.

Right-hand and left-hand traffic are thought to have formed for historical reasons. For example, keeping right was convenient for Americans using large freight wagons in the late 18th century, and the United States passed its first keep-right law in 1792 (more historical details in [5]). Then what is the difference between RHT and LHT? Research on this issue has pointed out that they differ in terms of safety. J. J.
Leeming found that the collision rate in LHT is lower than that in RHT [3]. Refs. [9], [10], and [11] suggest that humans are commonly right-eye dominant, which may contribute to the lower collision rate in LHT. Moreover, right-hand and right-foot dominance also play a role [5]: for example, when faced with accidents, people respond more quickly in LHT because drivers control the steering wheel with their right hand. Generally speaking, safety is the one factor that cannot be omitted from our analysis, so the following discussion focuses on this point.

For RHT, we strongly recommend New Rule 2, under which overtaking is forbidden. We can also apply this solution to LHT, as Figure 18 shows:

![](images/139e850b94f38154b705ca4940024d0b8691ba89645eb127d3d94e9bea137820.jpg)
Figure 18: RHT and LHT traffic systems.

The traffic system shown on the left is the application of New Rule 2 in LHT, while the other is the corresponding condition in RHT, for comparison. Between every

two lanes is a solid line, which prohibits cars from overtaking. As shown in the figure, the model is applicable to LHT with a simple change of orientation.

In terms of safety, both systems shown in the figure are perfect if evaluated by the safety index derived in subsection 5.3. By prohibiting overtaking, the application of New Rule 2 will significantly improve the safety performance of real-world LHT systems.

# 6.2 Intelligent Traffic

In section 3, we discussed the circumstance without an IDS, under which the old rule relies on human judgment for compliance. In this section, we assume that an "Intelligent Driving System" (IDS) takes control of vehicle transportation on freeways, and that the traffic rule is still the keep-right-except-to-pass one.
For clarity, we define the IDS as follows: first, the IDS obtains all information about the vehicles on the freeway to decide whether to overtake the car ahead; second, overtaking under the control of the IDS is absolutely safe.

As we mentioned before, when there is no IDS, a car overtakes its preceding car with probability $p(v_{i} - v_{j})$ out of concern for safety. Under the control of the IDS, however, a car can always overtake the car ahead if conditions permit. The IDS decides whether to overtake the preceding car according to the following judgments:

1. Safe distance. The IDS decides to overtake the preceding car only if the distance is within a safe range.
2. Time spent in the left lane. The IDS prohibits a driver from overtaking when he cannot find an opportunity to return to the right lane within a specific length of time, because traveling in the left lane for too long breaks the "keep right" rule.

In the following, our definitions of light and heavy traffic are the same as in section 3.

# 6.2.1 Performance Analysis

In the light-traffic situation, cars at a higher speed always overtake cars at a lower speed, because we do not have to consider the safety factor. Therefore, each car maintains the initial speed at which it enters the system, and the steady state mentioned in subsection 3.2, in which all cars in the right lane travel at the lowest speed, is never reached. So the average traffic speed under the control of the IDS should be much higher than that without the IDS.

In the heavy-traffic situation, no car overtakes its preceding car under the control of the IDS, because it cannot find an opportunity to return to the right lane, and traveling in the left lane for too long breaks the "keep right" rule. Therefore, the steady state of the system is that all cars travel in the right lane at a low speed, with no cars in the left lane.
Compared with the steady state of

subsection 3.1 in heavy traffic, we can infer that the average traffic speed under the control of the IDS should be lower than that without the IDS.

# 6.2.2 Simulation And Results

Again, we use the cellular automaton model to simulate the traffic conditions under the IDS. The results are shown in Figures 19 to 22, using the results of a system without the IDS for comparison.

![](images/23ed369173ab9d407ae74b75418862484d3a967547bd10856341291de1069561.jpg)
Figure 19: Comparison in traffic flow.

![](images/0a3c6641814493a27c20ec9fc7ea44d6a985835702a54e115801c4d0e2d42f45.jpg)
Figure 20: Comparison in ATS.

![](images/eeee614ca13a04d917bbce48f1debfcf8d9a1702e596865c734c0a405ac58b19.jpg)
Figure 21: Comparison in USL effect.

![](images/a9ba1c0e4c1b2ed0cc6510b085f5ba4e6235ed1577221d9a52f8a19adf49899d.jpg)
Figure 22: Comparison in OSL effect.

From the above figures, we can draw some conclusions. First, the average traffic speed of the system with the IDS is higher than that of the system without the IDS, but the gap narrows as the arrival rate increases. This is consistent with our analysis above: in terms of average traffic speed, the system with the IDS is much better than that without the IDS in light traffic, but worse in heavy traffic. Second, the system with the IDS has a larger traffic flow. Third, the performance of the system with the IDS is better than that without the IDS under extreme conditions. Finally, since overtaking under the control of the IDS is absolutely safe, the system with the IDS is much better in terms of safety.

# 7 Strengths and Weaknesses

# Strengths:

Our simulation results are consistent with those from our theoretical models. In other words, we can draw reliable conclusions from our models, and these conclusions can describe the traffic system on freeways to some extent.
Moreover, our simulator can be used to test different rules against our evaluation criteria and select the best one. Last but not least, since we have carefully tested the sensitivity and robustness of the simulator, the results of our models can be trusted.

# Weakness:

- The definition of the danger index is simple. It is calculated by counting the total number of lane changes and does not cover other possible factors, such as the speed difference between neighboring cars.
- The speed of each arriving car is discrete and takes only a limited number of values, which is unrealistic.
- As for the simulator, the total simulation time and the observed length of freeway are limited, so it is difficult to evaluate long-term performance.
- We only consider the four-lane freeway. Further development is expected for more complex circumstances.

# 8 Conclusion

To assess the performance of traffic rules, we set up five evaluation criteria, including both static and comparative static standards, and apply them to the keep-right-except-to-pass rule on four-lane freeways. In light traffic, we conclude that if all the cars on the freeway obey the "keep right" rule, they will mostly travel in the right lane at a relatively low speed. Consequently, the traffic flow is small and the freeway is inefficient. In heavy traffic, however, many cars in the left lane have no chance to return to their previous lane, so the "keep right" rule is broken even when drivers try to obey it. We then create a simulation model to test the theoretical models, and the simulation results verify our former conclusions well.

Since the old "keep right" rule is not optimal, we propose three new rules to compare with it, evaluating each of them with a radar chart and the AHP method. What we find is that the best traffic rule completely forbids overtaking.
The new rule performs very well in terms of traffic flow, safety, and average traffic speed, and it also has the highest comprehensive evaluation index.

Among further topics, we discuss the applicability of the rule in left-hand-traffic countries and conclude that our model applies to these countries with a simple change of orientation; few other changes are needed. Finally, when the whole traffic system is put under the control of the intelligent driving system, we find that in terms of average traffic speed, the system with the IDS is much better than that without the IDS in light traffic, but worse in heavy traffic.

# References

[1] E. H. Forman. Random Indices for Incomplete Pairwise Comparison Matrices. European Journal of Operational Research, 1990, 48, pp. 153-155.
[2] Homer J. Holland. A Stochastic Model for Multilane Traffic Flow. Transportation Science, 1967, 1(3), pp. 184-205.
[3] J. J. Leeming. *Accidental Expert*. The Alliance of British Drivers, 2003. http://www.abd.org.uk/jjleeming.htm
[4] New York State Department of Motor Vehicles. Driver's Manual, Chapter 8, Defensive Driving. 2011. http://dmv.ny.gov/dmanual/chapter08-manual.htm
[5] "Right- and left-hand traffic", Wikipedia, 2014. http://en.wikipedia.org/wiki/Right-_and_left-hand_traffic#United_States
[6] T. L. Saaty. The Analytic Hierarchy Process. McGraw-Hill, New York, 1980.
[7] T. L. Saaty, J. M. Alexander. Thinking With Models: Mathematical Models in the Physical, Biological and Social Sciences. Chapter 8. Pergamon Press, London, 1981.
[8] "Two-second rule", Wikipedia, 2014. http://en.wikipedia.org/wiki/Two-second_rule
[9] US National Library of Medicine, National Institutes of Health. Eyedness. 1976. http://www.ncbi.nlm.nih.gov/pubmed/970109?dopt=Abstract
[10] US National Library of Medicine, National Institutes of Health. Eye preference within the context of binocular functions. 2005.
http://www.ncbi.nlm.nih.gov/pubmed/15838666?dopt=Abstract
[11] US National Library of Medicine, National Institutes of Health. Ocular dominance: some family data. 1997. http://www.ncbi.nlm.nih.gov/pubmed/15513049?dopt=Abstract
[12] Zeyuan Allen Zhu, Tianyi Mao, Yichen Huang. Three Steps to Make the Traffic Circle Go Round. The UMAP Journal, 30.3 (2009), pp. 261-279.

# Team Control Number 29911

# Problem Chosen: A

# Summary

The keep-right-except-to-pass (KRETP) rule has been adopted by many countries around the world, but does this rule actually make our transportation systems more efficient? This report analyzes this rule along with several other traffic regulations.

Using a discrete cellular-automaton (CA) model and a continuum model, we simulate real-life traffic situations on freeways via the Monte Carlo method and a PDE system, respectively. Through comparison with two other traffic rules, we conclude that the KRETP rule is rather effective.

First we define three parameters---traffic flow, safety index, and average energy consumption (AEC)---to evaluate the performance of the KRETP rule under various vehicle densities. By calculating the optimal maximum velocities in light and heavy traffic, we assess the influence of under-posted and over-posted speed limits. We also argue that our model can be transferred to "left-hand" countries with a simple change of orientation.

Then we introduce two other traffic rules---the "Slow-Cars-To-Right" (SCTR) rule and the "Free Driving & Free Overtaking" (FDFO) rule.
By comparing these three rules in terms of our pre-defined parameters, we confirm the KRETP rule's superiority and provide strategic advice for future freeway construction.

Next, under the control of an intelligent system, a "median" optimization method is proposed to improve the overall quality of the freeway transportation system. According to the simulation results, our optimization method improves performance in terms of all three parameters.

Finally, we discuss several defects of our model that require further research.

Keywords: KRETP Rule, CA Model, Continuum Model, Monte Carlo Method, Traffic Flow, Safety Index, Average Energy Consumption (AEC), Optimization

# The Keep-Right-Except-To-Pass Rule

# Contents

1 Introduction
2 Assumptions
3 Modeling for Right-Most Rule

3.1 Discrete Modeling for Right-Most Rule

3.1.1 Model Establishment
3.1.2 Parameter Evaluation
3.1.3 Model Solution and Analysis

3.2 Continuum Modeling for Right-Most Rule

3.2.1 Model Establishment
3.2.2 Model Solution and Analysis

3.3 Comparison of Discrete Model and Continuum Model
3.4 Transferability Analysis of Model in "Left-Handed" Countries

4 Modeling for Alternative Freeway Traffic Rules

4.1 "Slow-Cars-To-Right" Rule
4.2 "Free Driving & Free Overtaking" Rule
4.3 Comparison of Different Traffic Rules

5 Modeling for Intelligent-System Control

5.1 "Median" Optimization
5.2 Possible Further Improvements

6 Superiority and Weakness

6.1 Superiority Analysis
6.2 Weakness Analysis
6.3 Future Research

6.3.1 Possible Optimization of Discrete Model
6.3.2 Possible Optimization of Continuum Model

7 Conclusion
8 References
9 Appendix

# 1 Introduction

Traffic rules play an essential role in a nation's transportation system.
An optimal traffic rule can dramatically enhance the capacity and efficiency of the transportation network, providing citizens with tremendous convenience. In this report, we focus mainly on one such rule, which requires automobiles to stay in the right lane unless they have to overtake.

America first enacted the "keep-right-except-to-pass" law in New York State in 1804, and many other countries followed in the ensuing decades. [1] Up to now, most countries are "right-handed" countries, with very few exceptions such as the UK and Australia. On American multilane freeways, for example, drivers are required to drive in the right-most lane unless they need to overtake, in which case they move one lane to the left and then switch back to the right-most lane. See Figure 1.

Despite the long history of this rule, there has been little scientific inquiry into its effect. In such circumstances, a systematic analysis of the rule's realistic effects---including the change in traffic flow, safety index, and energy consumption---is needed.

More specifically, we examine the following issues in our report:

- Analyze the effect of the "keep-right-except-to-pass" rule under different vehicle densities (light and heavy traffic);
- Calculate the optimized speed limit that maximizes a weighted function of the traffic flow, safety index, and average energy consumption;
- Discuss the influence of under-posted and over-posted speed limits;
- Propose other possible freeway traffic rules and compare them with the "keep-right-except-to-pass" rule;
- Discuss the transferability of our model to "left-handed" countries where driving on the left is the norm;
- Design an optimization method to enlarge traffic flow when the freeway is equipped with an intelligent system.
![](images/072894db139887b86748879aб0a21f50db4c7eb3c38756d98df91250b43a27e.jpg)
Figure 1. Illustration of "Keep-Right-Except-To-Pass" Rule

# 2 Assumptions

- Automobiles move in straight lines;
- Jams occur only because of the gradual piling up of automobiles;
- Sudden crashes that would influence the behavior of subsequent vehicles are not considered;
- The parallel movement of a vehicle into another lane takes no time;
- All drivers on the freeway abide by the traffic rules;
- All vehicles have the same length and mass;
- Freeways have no extra entrances or exits.

# 3 Modeling for Right-Most Rule

In order to analyze the effect of the "right-most" rule, we first establish a model to simulate the traffic flow on a multilane freeway. We then add driving and lane-changing rules to ensure that, in our simulation, the automobiles abide by the "right-most" rule.

# 3.1 Discrete Modeling for Right-Most Rule

Discrete models treat both time and the positions of vehicles as discrete quantities and simulate the movement of automobiles step by step. We start our simulation with a single kind of vehicle (a single maximum speed) on a double-lane freeway (the right lane is the slow lane and the left lane is the fast lane) and then extend it to two kinds of vehicles, namely fast cars and slow cars (two maximum speeds), which is closer to the realistic situation.

# 3.1.1 Model Establishment

# - Vehicles of a Single Kind

Among the various discrete models, we adopt the "Particle-Hopping" model, initially formulated by Nagel and Schreckenberg (NS), which idealizes the movement of vehicles as the discrete "hopping" of particles. Figure 2 gives a vivid illustration of our model: we treat the double-lane freeway as a two-column lattice, and each vehicle occupies exactly one lattice site.
+ +![](images/a1e4992395c895f00db17e6371db5fee4abb21ccf7f52e080ed315b695ef4492.jpg) +Figure 2. Representation of the "Particle-Hopping" Model + +We apply the Monte Carlo method to simulate the traffic flow on the double-lane freeway.[2] The quantities involved in the simulation are shown in + +Table 1. + +
| Notation | Definition |
| --- | --- |
| $X(k)$ | The position of car number $k$ |
| $V(k)$ | The velocity of car number $k$ |
| $\Delta X_p^f(k)$ | The gap in front of car number $k$ in the present lane |
| $\Delta X_p^b(k)$ | The gap behind car number $k$ in the present lane |
| $\Delta X_o^f(k)$ | The gap in front of car number $k$ in the other lane |
| $\Delta X_o^b(k)$ | The gap behind car number $k$ in the other lane |
| $V_{max}(k)$ | The maximum velocity the $k$th car can achieve on the freeway |
| $Q$ | The traffic flow (number of cars passing per lane per unit time) |
| $\alpha_{safe}$ | The safety index |
| $L$ | The length of the lane during the simulation* |
| $T$ | The duration of the simulation* |
| $P_d$ | The probability of a car decelerating* |
| $P_c$ | The probability of a car changing lanes* |
| $n$ | The density of vehicles* |
| $N$ | The number of cars during the simulation, $N = 2nL$ |
+ +(Terms marked with an asterisk $(^{*})$ require pre-evaluation before the simulation begins) + +Table 1. Definition of Notation in "Particle-Hopping" Model + +Before the simulation begins, we need to evaluate the following five parameters: number of cars $N$ , the length of lane $L$ , the duration of the simulation $T$ , the maximum velocity vehicles can achieve $V_{max}$ , the probability for a driver to decelerate during each unit time $P_d$ and the probability for a driver to change lane $P_c$ if the current situation qualifies the rule of changing lanes. + +After these parameters have been evaluated, we set the position of each car $X(n)$ randomly in the double-lane freeway with half cars on the left (fast) lane and the other half on the right (slow) lane and the velocity of each car $V(n)$ randomly between $[V_{max} / 2, V_{max}]$ . We set the entire freeway to be a loophole so that once a car reaches the last lattices it will re-enter the double-lane freeway in the first lattices. Under such conditions, the density of the vehicles remain unchanged throughout the simulation. + +After the simulation begins, at each time unit the driver will either remain in the same lane or change lane if situation permits. The rule of acceleration and deceleration is as follows: + +$$ +\begin{array}{l} (i) \text {I f} V (n) < V _ {\max }, \text {t h e n} V (n) = V (n) + 1; \\ (i i) \text {I f} r a n f < p _ {d}, t h e n V (n) = V (n) - 1; \\ \end{array} +$$ + +# Table 2. The Rule of Acceleration and Deceleration + +where $\text{ran } f$ is a random number between 0 and 1 generated at each time unit. + +The permission rule for changing lane is much more complex and comprised of several basic rules. It is derived from Wagner's according to the statistics gathered from a Germany freeway where "right-most" rule is adopted:[3] + +
| Rule safety | Rule stay except blocked (Rule #0) | Rule change when possible (Rule #1) |
| --- | --- | --- |
| (i) $\Delta X_p^f(k) > 0$, enough space in front | (i) $\Delta X_p^f(k) < V_{max} + 1$, not enough space in front | (i) $\Delta X_o^f(k) > \Delta X_p^f(k)$ (more space in the other lane) or $\Delta X_o^f(k) > V(k)$ (enough space in the other lane) |
| (ii) The nearest neighbor site in the other lane is empty | (ii) $\Delta X_o^f(k) > \Delta X_p^f(k)$, more space in the other lane | (ii) $\mathrm{ran}f < P_c$ |
| (iii) $\Delta X_o^b(k) > V_{max}$, enough space to the next car in the other lane | (iii) $\mathrm{ran}f < P_c$ |  |
+ +Table 3. The Basic Rules + +
| right → left | left → right |
| --- | --- |
| (i) Rule safety | (i) Rule safety |
| (ii) Rule #0 | (ii) Rule #1 |
+ +Our Monte Carlo simulation has three outputs: traffic flow $Q$ and safety index $\alpha_{safe}$ and the energy cost $E_0$ . For traffic flow $Q$ , we choose a fixed point (in our simulation we choose the end point of the lattices since the freeway is a loop) to count the total number of cars passing that point denoted as $N_{total}$ within the duration of the simulation $T$ , then we use the following equation to calculate $Q$ : + +$$ +Q = \frac {N _ {t o t a l}}{T}. +$$ + +As for $\alpha_{safe}$ , we make the assumption that safety index is proportionate to the reaction time of all the drivers on the freeway throughout the entire simulation. We take the proportionate coefficient to be 1 for convenience: + +$$ +\alpha_ {s a f e} = \frac {\sum_ {T} \sum_ {1} ^ {N} [ \exp (- \Delta \mathrm {X} _ {p} ^ {f} (n) / V (n)) ]}{N \cdot T}. +$$ + +In Table 5, we present the specific steps of our Monte Carlo simulation: + +Table 4. The Rule of Lane Changing corresponding to the "rightmost" rule + +
| Stage | Operation |
| --- | --- |
| Input | Length of the lane $L$; duration of the simulation $T$; vehicle density $n$; deceleration probability $P_d$; lane-changing probability $P_c$ |
| Output | Traffic flow $Q$; safety index $\alpha_{safe}$; energy cost $E_0$ |
| Step 1 | Randomly generate the initial position, speed, and maximum speed of each vehicle $k$: $X(k)$, $V(k)$, and $V_{max}(k)$ |
| Step 2 | Repeat Steps 3-16 $T$ times |
| Step 3 | Repeat Steps 4-5 for each vehicle |
| Step 4 | Apply the safety rule to vehicle $k$; skip Step 5 if vehicle $k$ does not pass the rule |
| Step 5 | Apply either lane-changing rule #0 or rule #1 to vehicle $k$, based on its lane, its current speed, and the current model, then decide whether it should change lanes |
| Step 6 | Move each vehicle to its new lane |
| Step 7 | Repeat Steps 8-16 for each vehicle $k$ |
| Step 8 | Let the expected new speed $V'(k) = V(k) + 1$ |
| Step 9 | If $\Delta X_p^f(k) < V'(k)$, let $V'(k) = \Delta X_p^f(k) - 1$ |
| Step 10 | If $V'(k) > 0$, generate a random number in $[0, 1]$; if it is smaller than $P_d$, let $V'(k) = V'(k) - 1$ |
| Step 11 | Let $X'(k) = X(k) + V'(k)$ |
| Step 13 | $\alpha_{safe} = \alpha_{safe} + \exp(-\Delta X_p^{f'}(k) / V'(k))$ |
| Step 14 | If $X'(k) \geq L$, then $X'(k) = X'(k) - L$ and $Q = Q + 1$ |
| Step 15 | If $V(k) < V'(k)$, $E_0 = E_0 + V(k) + V'(k)$ |
| Step 16 | Let $X(k) = X'(k)$, $V(k) = V'(k)$ |
| Step 17 | $E_0 = E_0 / Q$, $\alpha_{safe} = \alpha_{safe} / Q$, $Q = Q / T$ |
| Step 18 | Output and halt |
+ +Table 5. Monte Carlo Procedures + +# - Vehicles of Two Kinds (Fast Cars & Slow Cars) + +Next, we consider a more complex situation in which cars are classified into fast cars and slow cars with respective maximum speed $V_{max}^{f}$ and $V_{max}^{s}$ , with other conditions and rules unchanged. + +Next, we apply a similar Monte Carlo algorithm to simulate the traffic flow with two kinds of vehicles and output corresponding flow $Q$ and safe index $\alpha_{max}$ . + +# 3.1.2 Parameter Evaluation + +The length of the lattices $L = 2048$ +The vehicle density $n = 0.02, 0.04, 0.06, 0.08, 0.1, 0.13, 0.16, 0.2, 0.25, 0.3$ ; +The number of the total vehicles $N = 2nL$ ; +The duration of the simulation $T = 4096$ +The deceleration probability $P_{d} = 0.1$ +The change lane probability $P_{c} = 0.7$ +In the case of vehicles of single kind, set maximum velocity $V_{max} = 5$ +- In the case of vehicles of two kinds, set maximum velocity of fast car $V_{max}^{f} = 5$ , set maximum velocity for slow car $V_{max}^{s} = 3$ ; +- The ratio of fast car to slow car is 4:1, the initial number of vehicle in each lane is 1:1. + +# 3.1.3 Model Solution and Analysis + +We mainly analyze the influence of the right-most rule on three parameters: Traffic flow $Q$ , safety index $\alpha_{safe}$ and average energy consumption $E_0$ . Then we assign a relative weight to each parameter to calculate an optimal velocity during light traffic and heavy traffic. We will only present the simulation result of the two-kind vehicle model. + +# Traffic Flow + +As we have discussed before, we define the traffic flow $Q$ as the number of cars passing a fixed point per unit time, or + +$$ +Q = \frac {N _ {t o t a l}}{T} +$$ + +In Figure 3, we present the value of traffic flow $Q$ under different vehicle density $n$ . + +![](images/0b215a3e5cff9c758654f54ae304da6f1254f7e24d6fce7a5be72ec555b4f4e3.jpg) +Figure 3. 
Relationship of Traffic Flow $Q$ and Vehicle Density $n$

The pattern in Figure 3 accords qualitatively with the realistic situation. When the vehicle density on a freeway is relatively low, a slight increase in the number of vehicles---in other words, in vehicle density---causes the traffic flow to rise dramatically, because under our rules vehicles can accelerate very easily. In our simulation, the peak value of the flow $Q$ occurs at a density around 0.2. After this peak, the traffic flow tends to drop as the vehicle density continues to rise, because increasingly likely traffic jams obstruct vehicles from accelerating freely. Finally, when the density approaches 1---in other words, when the freeway is almost "full"---vehicles can barely move, and the flow approaches 0.

# Safety Index

Notice that the longer the reaction time is, the safer the situation will be. Thus, according to our definition of the safety index,

$$
\alpha_ {s a f e} = \frac {\sum_ {T} \sum_ {1} ^ {N} [ \exp (- \Delta \mathrm {X} _ {p} ^ {f} (n) / V (n)) ]}{N \cdot T},
$$

a larger value of $\alpha_{safe}$ indicates a more dangerous situation. We again plot the safety index $\alpha_{safe}$ against the vehicle density $n$. The result is shown in Figure 4.

![](images/82ee17b2dad2bfbf2e9653dfcc2e093625c5fa6fdfefb0f44a9c307e790f468e.jpg)
Figure 4. Relationship of Safety Index $\alpha_{safe}$ and Vehicle Density $n$ (lower is safer)

The pattern in Figure 4 is easy to understand. When the vehicle density is relatively small, the accident risk rises as the number of cars increases, because vehicles are likely to attain high velocities. After the peak value, the risk drops as the density rises: although the number of vehicles keeps growing, the average velocity decreases dramatically. Since most vehicles drive much more slowly, the overall situation actually becomes safer. Finally, when no vehicle can move at all (when $n = 1$), the situation is absolutely safe.
Notice that the peak vehicle densities are nearly the same for the traffic flow and the safety index, because both quantities are closely related to the average velocity.

# Average Energy Consumption

Now we define another important parameter, the average energy consumption, denoted $E_0$. We use it to measure the energy consumed through acceleration per unit of traffic flow, and we approximate that energy as proportional to the change in kinetic energy. For a particular vehicle $n$, we denote its velocities at times $t-1$ and $t$ as $V^{(t-1)}(n)$ and $V^{(t)}(n)$. Furthermore, we assume that each vehicle has unit mass. Since, under our rules, each acceleration raises the speed by exactly one unit, $V^{(t)}(n) - V^{(t-1)}(n) = 1$ whenever a vehicle accelerates, and we can derive the formula for $E_0$:

$$
\begin{array}{l} E _ {0} \propto \frac {\sum_ {t = 1} ^ {T} \sum_ {n = 1} ^ {N} \frac {1}{2} m [ (V ^ {(t)} (n)) ^ {2} - (V ^ {(t - 1)} (n)) ^ {2} ]}{Q} \\ \propto \frac {\sum_ {t = 1} ^ {T} \sum_ {n = 1} ^ {N} \frac {1}{2} m [ V ^ {(t)} (n) - V ^ {(t - 1)} (n) ] [ V ^ {(t)} (n) + V ^ {(t - 1)} (n) ]}{Q} \\ \end{array}
$$

$$
\propto \frac {\sum_ {t = 1} ^ {T} \sum_ {n = 1} ^ {N} [ V ^ {(t)} (n) + V ^ {(t - 1)} (n) ]}{Q},
$$

the sums being taken over acceleration events. For convenience, we set the proportionality coefficient to 1, so that

$$
E _ {0} = \frac {\sum_ {t = 1} ^ {T} \sum_ {n = 1} ^ {N} [ V ^ {(t)} (n) + V ^ {(t - 1)} (n) ]}{Q}.
$$

We again plot the average energy consumption against vehicle density in Figure 5.

![](images/f351bed0c400a027f9527e2689d5dfe8c3400448bffc476a0618b469d88df46.jpg)
Figure 5. Relationship of Energy Consumption $E_0$ and Vehicle Density $n$ (lower is better)

In the first half of the figure, the increase of $E_0$ can be explained by the frequent lane changing and acceleration made possible by a low vehicle density. In the latter half, although vehicles are much less likely to accelerate, the traffic flow also falls dramatically, so the average energy consumption still increases.
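The three outputs defined above ($Q$, $\alpha_{safe}$, $E_0$) can be reproduced in miniature. The following is a single-lane sketch of the NS-style update on a circular road, not the paper's full two-lane simulator; the function name, parameter defaults, and bookkeeping are our own simplifications ($\alpha_{safe}$ is averaged over $N \cdot T$ as in the formula above, and $E_0$ is divided by the number of crossings, in the spirit of Table 5):

```python
import math
import random

def ns_simulate(L=200, n=0.1, T=500, v_max=5, p_d=0.1, seed=42):
    """Minimal single-lane Nagel-Schreckenberg sketch on a circular road.

    Returns (Q, alpha_safe, E0): Q counts cars crossing the wrap-around
    point per unit time, alpha_safe averages exp(-gap/speed), and E0 sums
    v + v' over accelerations, divided by the number of crossings.
    """
    rng = random.Random(seed)
    N = max(1, int(n * L))
    pos = sorted(rng.sample(range(L), N))                # distinct cells
    vel = [rng.randint(v_max // 2, v_max) for _ in range(N)]
    crossings, alpha, energy = 0, 0.0, 0.0

    for _ in range(T):
        new_pos = []
        for k in range(N):
            gap = (pos[(k + 1) % N] - pos[k] - 1) % L    # empty cells ahead
            v = min(vel[k] + 1, v_max, gap)              # accelerate, brake to gap
            if v > 0 and rng.random() < p_d:             # random deceleration
                v -= 1
            if v > vel[k]:                               # energy of acceleration
                energy += vel[k] + v
            if v > 0:                                    # reaction-time proxy
                alpha += math.exp(-gap / v)
            x = pos[k] + v
            if x >= L:                                   # crossed the wrap point
                crossings += 1
                x -= L
            new_pos.append(x)
            vel[k] = v
        order = sorted(range(N), key=lambda k: new_pos[k])  # keep ring order
        pos = [new_pos[k] for k in order]
        vel = [vel[k] for k in order]

    Q = crossings / T
    alpha_safe = alpha / (N * T)
    E0 = energy / crossings if crossings else 0.0
    return Q, alpha_safe, E0
```

Because all gaps are computed from the previous step's positions (a parallel update), a car never advances past its predecessor's old position, so the sketch is collision-free by construction.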
# - Optimal Maximum Velocity Estimation (Weighted)

In this section, we design a weighted function $\phi$ of the three key variables---traffic flow $Q$, safety index $\alpha_{safe}$, and average energy consumption $E_0$. We then calculate the optimal speed limit that maximizes the value of $\phi$ in light and heavy traffic, respectively.

We define the weighted function as follows:

$$
\phi (Q, \alpha_ {s a f e}, E _ {0}) = \omega_ {1} \frac {Q}{1 . 2} + \omega_ {2} (1 - \frac {E _ {0}}{8 0 0}) + \omega_ {3} \sqrt [ 3 ]{1 - (\frac {\alpha_ {s a f e}}{0 . 4}) ^ {3}}
$$

$$
s. t. \quad \omega_ {1} + \omega_ {2} + \omega_ {3} = 1,
$$

where $\omega_{1},\omega_{2},\omega_{3}$ are weight coefficients. We design our function based on the following four rules:

- When $\phi$ reaches its peak value, we define the corresponding $V_{max}$ to be the optimal maximum velocity;
- Since the three variables $Q$, $\alpha_{\text{safe}}$, $E_0$ are independent of each other, $\phi$ should be expressed as a simple sum;
- $Q$ and $E_0$ should appear in $\phi$ in linear form;
- The value of $\phi$ should drop sharply when the safety index is rather high, hence $\alpha_{safe}$ should enter $\phi$ through an upward-convex function.

Now, in light traffic where $n = 0.1$ and in heavy traffic where $n = 0.2$, we can calculate the corresponding $V_{optimal}^{light}$ and $V_{optimal}^{heavy}$. We plot $\phi$ against different values of $V_{max}$ under the condition $(\omega_1 = 0.5, \omega_2 = 0.2, \omega_3 = 0.3)$ in Figure 6 and Figure 7.

![](images/560ecea17fe71307edc93358ebdfbe97cf1ebf5b48d081141b23381fae937ba3.jpg)
Figure 6. Optimal Maximum Velocity (Light Traffic)

![](images/190f158ccef107f32c659cc23337b543bd8475448c3c4442a1b583c5400734f9.jpg)
Figure 7.
Optimal Maximum Velocity (Heavy Traffic)

Considering the relative durations of light and heavy traffic over a given time period, we set the ratio to 4:1 (light : heavy) to obtain the result for a "mixed-traffic" situation. See Figure 8.

![](images/2ba9f8d7b9a785042d48b7c34987bd7f1190ef4d8dc06707cbe289b71d339ded.jpg)
Figure 8. Optimal Maximum Velocity (combined)

As shown in Figure 8, the global maximum of $\phi$ is located at $V_{max} = 5$, so the optimal maximum velocity is $V_{optimal} = 5$. In other words, when we set the speed limit at 5, we achieve a weighted optimization of traffic flow, safety index, and energy consumption.

# Analysis of Under-Posted and Over-Posted Speed Limits

Having calculated the optimal maximum velocity $V_{\text{optimal}} = 5$ above, we now discuss the influence of under-posted and over-posted speed limits in light and heavy traffic, respectively.

# Light Traffic

If the speed limit is over-posted, say $V_{max} = 7$, then traffic flow, safety index, and average energy consumption all increase. In other words, we sacrifice energy and safety for a larger traffic flow.

If the speed limit is under-posted, say $V_{max} = 3$, then traffic flow, safety index, and average energy consumption all decrease. In other words, we sacrifice traffic flow for lower energy consumption and a safer condition.

# Heavy Traffic

If the speed limit is over-posted, say $V_{max} = 7$, the traffic flow remains almost the same while energy consumption and the safety index rise. This is highly unwelcome.

If the speed limit is under-posted, say $V_{max} = 3$, the situation is similar to that of light traffic: we sacrifice traffic flow for lower energy consumption and a safer condition.

# - Visualization

To make our simulation results more intuitive, we use MATLAB to visualize our simulation.
In the figures below, each row shows the distribution of vehicles, with black dots representing occupied sites and blank space representing empty sites. Time increases from bottom to top, and vehicles move from left to right. We visualize our simulation at a low vehicle density ($n = 0.05$) and a high density ($n = 0.2$) in Figure 9 and Figure 10, respectively. The subfigure on the left represents the left lane and the subfigure on the right represents the right lane.

![](images/14859b010d7350292bae5fabcd0afa13b1ae0a6169603e45fd0021403e09d40d.jpg)
Figure 9. Visualization at Low Density

![](images/0c8ba2c3d97df8db6138b3d2e60a63c7b692d1992c6918dca414c64a823a8ce2.jpg)
Figure 10. Visualization at High Density

# 3.2 Continuum Modeling for Right-Most Rule

Apart from the discrete, microscopic cellular automaton (CA) model adopted above, in this section we establish a continuum, macroscopic model to describe the vehicle flow on the freeway.

# 3.2.1 Model Establishment

Our ultimate goal is a continuum model for the traffic flow on a double-lane freeway, and we start from the single-lane situation. According to [4], a partial differential equation (PDE) system can describe the traffic flow on a single-lane freeway. There are two variables in the equation system: the average speed of vehicles $u(x,t)$ and the density of vehicles $\rho(x,t)$.
The PDE system contains two parts:

- Continuity equation:

$$
\frac {\partial \rho}{\partial t} + \frac {\partial (\rho u)}{\partial x} = 0
$$

- Dynamic equation:

$$
\frac {\partial u}{\partial t} + u \frac {\partial u}{\partial x} = \frac {u _ {e} (\rho) - u}{T} + c _ {0} \frac {\partial u}{\partial x}
$$

The right side of the second equation contains a relaxation term $(u_{e}(\rho) - u) / T$, which reflects the process by which a driver adjusts his velocity to the equilibrium velocity $u_{e}(\rho)$ within the time interval $T$, and an anticipation term $c_{0}\partial u / \partial x$, which represents the driver's reaction to the traffic condition ahead, with $c_{0}$ the propagation speed of small disturbances.

Now we extend this model to the double-lane situation in order to accommodate the "keep-right-except-to-pass" rule, establishing the equations for the two lanes separately. To describe lane changing between the right (driving) lane and the left (overtaking) lane, we introduce two variables $S_{ij}$ and $S_{ji}$ for the lane-changing rates. In particular, $S_{ij}$ denotes the rate of changing from lane $i$ to lane $j$ $(i, j \in \{1, 2\})$. The overtaking lane is denoted by Lane 1 and the driving lane by Lane 2. Thus, the net lane change into lane $i$ is $S_{ji} - S_{ij}$.

Hence the continuity equation can be written as:

$$
\frac {\partial \rho_ {i}}{\partial t} + \frac {\partial (\rho_ {i} u _ {i})}{\partial x} = S _ {j i} - S _ {i j}
$$

For the dynamic equation, we introduce two constant parameters $r_1$, $r_2$ as the coefficients of $S_{ij}$ and $S_{ji}$, respectively.
The equation should be:

$$
\frac {\partial u _ {i}}{\partial t} + u _ {i} \frac {\partial u _ {i}}{\partial x} = \frac {u _ {e i} (\rho_ {i}) - u _ {i}}{T _ {i}} + c _ {0 i} \frac {\partial u _ {i}}{\partial x} + r _ {1} S _ {i j} - r _ {2} S _ {j i} \quad (i = 1, 2)
$$

According to [5], we assume:

$$
S _ {1 2} = a Q _ {e 1} (\bar {\rho}) \rho_ {1} (1 - \frac {\rho_ {2}}{\rho_ {m}})
$$

$$
S _ {2 1} = a (1 + b (Q _ {e 1} (\bar {\rho}) - Q _ {e 2} (\bar {\rho}))) Q _ {e 2} (\bar {\rho}) \rho_ {2} (1 - \frac {\rho_ {1}}{\rho_ {m}})
$$

where $Q_{ei}$ denotes the equilibrium flow, $Q_{ei} = \rho_i u_{ei}(\rho_i)$;

$\bar{\rho}$ denotes the average density of Lane 1 and Lane 2, $\bar{\rho} = (\rho_{1} + \rho_{2}) / 2$;

$\rho_{m}$ denotes the density in a jam;

$a$, $b$ are two constant parameters.

This assumption is reasonable. First, the expressions for $S_{12}$ and $S_{21}$ are not symmetric, because the double-lane freeway itself is asymmetric. Second, if Lane 2 is suffering from a traffic jam, then $\rho_2 = \rho_m$; and if there are no vehicles in overtaking Lane 1, then $\rho_1 = 0$. Under both circumstances $S_{12} = 0$, which implies that no car changes into the driving lane when it is jammed or when there are no vehicles in the overtaking lane.

# 3.2.2 Model Solution and Analysis

# Study of Steady State

According to the theory of PDEs, we can obtain a steady-state solution of our model by setting the derivative terms to zero.
Then we can derive the following two equations:

$$
Q _ {e 1} (\bar {\rho}) \rho_ {1} \left(1 - \frac {\rho_ {2}}{\rho_ {m}}\right) = \left(1 + b \left(Q _ {e 1} (\bar {\rho}) - Q _ {e 2} (\bar {\rho})\right)\right) Q _ {e 2} (\bar {\rho}) \rho_ {2} \left(1 - \frac {\rho_ {1}}{\rho_ {m}}\right)
$$

$$
\frac {u _ {e i} (\rho_ {i}) - u _ {i}}{T _ {i}} + c _ {0 i} \frac {\partial u _ {i}}{\partial x} + r _ {1} S _ {i j} - r _ {2} S _ {j i} = 0 \quad (i = 1, 2)
$$

Plugging in $2\bar{\rho} = \rho_{1} + \rho_{2}$, we can derive the expressions for $u_{e1}(\rho)$ and $u_{e2}(\rho)$ from the equations above, following [6]:

$$
u _ {e 1} (\rho) = u _ {f} (1 - \frac {\rho}{\rho_ {m}}) / (1 + E (\frac {\rho}{\rho_ {m}}) ^ {4})
$$

$$
u _ {e 2} (\rho) = \left\{ \begin{array}{l l} u _ {f} (1 - \frac {\rho}{\rho_ {m}}) / (1 + E (\frac {\rho}{\rho_ {m}}) ^ {4}) (\frac {1}{2} + \frac {\rho}{2 \rho_ {c}}) & (\rho < \rho_ {c}) \\ u _ {f} (1 - \frac {\rho}{\rho_ {m}}) / (1 + E (\frac {\rho}{\rho_ {m}}) ^ {4}) & (\rho \geq \rho_ {c}) \end{array} \right.
$$

where $u_{f}$ denotes the free-flow speed and $E$ is a constant parameter.

Using the relationships given above, we can derive the relationship between $u_{i}$ and $\bar{\rho}$. Then we use the definition $q_{i} = \rho_{i}u_{i}$ to express the flow on each lane in terms of the average density $\bar{\rho}$.

According to [7], we assign the following values to the parameters:

$$
T_1 = T_2 = 10\,\mathrm{s}, \quad u_f = 30\,\mathrm{m/s}, \quad a = 0.01, \quad b = 5, \quad c_{01} = c_{02} = 10\,\mathrm{m/s},
$$

$$
\rho_m = 0.14\,\mathrm{veh/m}, \quad r_1 = 150\,\mathrm{m^2/s}, \quad r_2 = 50\,\mathrm{m^2/s}, \quad \rho_c = 0.05\,\mathrm{veh/m}, \quad E = 100.
$$

Having established the traffic flow as a function of the average density on both lanes, we can plot the relationship as shown in Figure 11.

![](images/24aa81015ced97d98e7902f9c9f86bedcc2ae4f29ce3df9285727c9a415d1b4a.jpg)
Figure 11.
The Relationship of Traffic Flow $Q$ and Average Density $\bar{\rho}$

In Figure 11, we observe that all three flow curves follow a similar pattern: when the average density is relatively small, traffic flow increases monotonically as the density grows; after reaching its peak, the flow drops gradually to 0 as the density keeps rising. This pattern is consistent with the results we obtained in the discrete model. We also observe that the overtaking (fast) lane reaches its peak at a lower density than the driving (slow) lane. Owing to the difference in maximum velocity, the peak traffic flow of the fast lane is larger than that of the slow lane. In addition, the curves of the fast lane and the slow lane almost overlap when $\rho > \rho_c = 0.05$. The overlapping part of the curves indicates that the utilization of both lanes is almost the same when $\rho$ is relatively large.

# - Study of Numerical Simulation

In this section, we mainly study the density function $\rho_i(x,t)\ (i = 1,2)$ as a solution of the PDE system. In order to simulate the two-lane traffic movement, we assume that $L = 32.2\,\mathrm{km}$. The following initial perturbation of the average density around $\rho_h$, proposed in [8], can be applied to either the fast or the slow lane:

$$
\rho(x, 0) = \rho_h + \Delta\rho_h \left[ \frac{1}{\cosh^2\left(\frac{160}{L}\left(x - \frac{5L}{16}\right)\right)} - \frac{1}{4\cosh^2\left(\frac{40}{L}\left(x - \frac{11L}{32}\right)\right)} \right]
$$

Our model (the PDE system) can then be solved by the numerical scheme given in [9]. Here we provide the figures of the solution under three different conditions. (All three figures are dynamic traffic images of an asymmetric double-lane system, with curve (a) representing the fast lane and curve (b) representing the slow lane.)
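The initial density profile above can be sketched directly. In this sketch, $L$ and $\rho_h$ follow the values used in our simulations, while the perturbation amplitude $\Delta\rho_h$ is an assumed value, since the text does not fix it:

```python
import math

L = 32200.0      # freeway length in meters (32.2 km)
rho_h = 0.06     # background average density (veh/m)
d_rho = 0.01     # assumed perturbation amplitude (veh/m)

def rho0(x):
    """rho(x, 0) from [8]: a localized hump near x = 5L/16 and a
    shallower dip (scaled by 1/4) near x = 11L/32, on top of rho_h."""
    hump = 1.0 / math.cosh(160.0 / L * (x - 5.0 * L / 16.0)) ** 2
    dip = 1.0 / (4.0 * math.cosh(40.0 / L * (x - 11.0 * L / 32.0)) ** 2)
    return rho_h + d_rho * (hump - dip)

# Far from the perturbation the profile is flat at the background density,
# and the main hump sits at x = 5L/16.
assert abs(rho0(0.0) - rho_h) < 1e-6
assert rho0(5.0 * L / 16.0) > rho_h
```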
# At Low Average Density $\bar{\rho}$

According to Figure 12, any perturbation on the fast lane or the slow lane quickly dissipates. Because of the lane-changing tendency, a cluster that appears in the fast lane may cause a small hump in the slow lane.

Conversely, as shown in Figure 13, a cluster in the slow lane may quickly dissipate, because drivers trapped behind the cluster in the slow (driving) lane may choose the fast (overtaking) lane for passing.

![](images/e77743f514622d2e200385176bc26c0a5cecf84cbf13cd2b22afb7d6f752631f.jpg)

![](images/feefbb7c6d28e5a25f5f48cde8c02b81b060bb3ee943299390030cd10c8d9b12.jpg)
Figure 12. Low Density (Jam First Appeared on Fast Lane)

![](images/6a211dc590486dc6cbf8df0abba5e4d3bafe3e9dec3d4eb33beffaf3b71b01aa.jpg)

![](images/af4ca3172ec50753eca4e499954121066bd7ffed3b01d89037e78b2ea1cc60f9.jpg)
Figure 13. Low Density (Jam First Appeared on Slow Lane)

# At Normal (or Higher) Average Density $\bar{\rho}$

As the average density continues to rise, free space for vehicles decreases and it becomes less likely for the double-lane system to maintain stability. Correspondingly, serious traffic jams and "stop-and-go" waves become more probable, as shown in Figure 14.

![](images/9a41c4e246a0ba0428e4b10a883adcf3be33845bcbff0d28e3725ea384bab39b.jpg)

![](images/936b9cefb55782f5af211284fc5a9481ab0c276ac560cc483cb1ec81c1095628.jpg)

# Figure 14. Normal or Higher Density

Parameter values for the three figures:

Figure 12: $\rho_1 = 0.047\,\mathrm{veh/m}$, $\rho_2 = 0.021\,\mathrm{veh/m}$, $\rho_h = 0.06\,\mathrm{veh/m}$;
Figure 13: $\rho_1 = 0.047\,\mathrm{veh/m}$, $\rho_2 = 0.021\,\mathrm{veh/m}$, $\rho_h = 0.06\,\mathrm{veh/m}$;
Figure 14: $\rho_1 = 0.053\,\mathrm{veh/m}$, $\rho_2 = 0.053\,\mathrm{veh/m}$, $\rho_h = 0.06\,\mathrm{veh/m}$.
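The speed-density relations behind Figure 11 can be reproduced with a short sketch, using the parameter values assigned earlier in this section (the function names are ours):

```python
def u_e1(rho, u_f=30.0, rho_m=0.14, E=100.0):
    """Equilibrium speed on the overtaking lane (Lane 1)."""
    return u_f * (1.0 - rho / rho_m) / (1.0 + E * (rho / rho_m) ** 4)

def u_e2(rho, rho_c=0.05, **kw):
    """Equilibrium speed on the driving lane (Lane 2): scaled down below
    the critical density rho_c, identical to Lane 1 at and above it."""
    base = u_e1(rho, **kw)
    return base * (0.5 + rho / (2.0 * rho_c)) if rho < rho_c else base

# Lane flows q_i = rho * u_ei(rho), as plotted in Figure 11.
def q1(rho): return rho * u_e1(rho)
def q2(rho): return rho * u_e2(rho)

# Below rho_c the driving lane is slower, so its flow curve lies below;
# at and above rho_c the two curves merge, matching the overlap in Figure 11.
assert q2(0.02) < q1(0.02)
assert q2(0.05) == q1(0.05)
assert q2(0.10) == q1(0.10)
```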
# 3.3 Comparison of Discrete Model and Continuum Model

In the discrete (CA) model, we focus on the relationship between traffic flow and vehicle density and obtain the following fact: when density is relatively small, flow rises as density increases; after reaching its peak value, flow decreases gradually to 0. This coincides with the conclusion derived in the continuum model, as shown in Figure 9. More intuitively, the visualization images of both models also convey the same conclusion.

# 3.4 Transferability of the Model to Left-Hand-Traffic Countries

According to our research, the difference between the keep-left and keep-right traditions is mainly historical. In addition, our model has no partiality for either orientation; in other words, orientation is symmetric in our model. Therefore our model can be transferred to left-hand-traffic countries with a single change of orientation.

# 4 Modeling for Alternative Freeway Traffic Rules

In this section, we use a similar Monte Carlo method to simulate the traffic flow under two alternative traffic rules: the "slow-cars-to-right" rule and the "free driving & free overtaking" rule.

# 4.1 "Slow-Cars-To-Right" Rule

# Model Explanation

Under the "slow-cars-to-right" rule, slow cars are restricted to the right lane, fast cars are distributed between the left and right lanes, and no lane changing is allowed. In other words, this is a model of two disconnected single lanes. We run two simulations: first, a lane with only fast cars, for which we calculate the corresponding $Q^f$, $\alpha_{safe}^f$, $E_0^f$; then a lane with 60% fast cars, for which we calculate the corresponding $Q^s$, $\alpha_{safe}^s$, $E_0^s$. The overall fast-car percentage is then 80%, matching the previous simulations. The final result is a 1:1 weighted mixture of the two:

$$
Q = 0.5\, Q^f + 0.5\, Q^s, \qquad
\alpha_{safe} = 0.5\, \alpha_{safe}^f + 0.5\, \alpha_{safe}^s, \qquad
E_0 = 0.5\, E_0^f + 0.5\, E_0^s
$$

Parameter values for the simulation are as follows:

The length of the lattice $L = 2048$;

The vehicle density $n = 0.02, 0.04, 0.06, 0.08, 0.1, 0.13, 0.16, 0.2, 0.25, 0.3$;

The total number of vehicles $N = 2nL$;

The duration of the simulation $T = 4096$;

The deceleration probability $P_d = 0.1$;

The lane-change probability $P_c = 0.7$;

The maximum velocity for fast cars $V_{max}^f = 5$; the maximum velocity for slow cars $V_{max}^s = 3$.

# Model Solution and Analysis

Using computer simulation, we plot traffic flow $Q$, safety index $\alpha_{safe}$, and average energy consumption $E_0$ against vehicle density $n$. The results are shown in Section 4.3.

# 4.2 "Free Driving & Free Overtaking" Rule

# Model Explanation

Under the "free driving & free overtaking" rule, vehicles are free to drive and overtake with no specific requirement. The two lanes are symmetric, and the only limitation is the respective maximum velocity $V_{max}^f$ and $V_{max}^s$ for fast and slow vehicles. Parameters and their values are the same as in the "slow-cars-to-right" rule.

# Model Solution and Analysis

Again, we assess the influence of this rule by measuring the three key factors---traffic flow $Q$, safety index $\alpha_{safe}$, and average energy consumption $E_0$. A comparison of the three traffic rules is given in Section 4.3.

# 4.3 Comparison of Different Traffic Rules

As before, we focus mainly on the first half of each figure to analyze the differences among the three rules.

# Traffic Flow

We plot traffic flow $Q$ against vehicle density $n$ in Figure 15.

![](images/a267dd432dc874e209e331b4de698e1edd4484018e61c134a875f32c83348f1c.jpg)
Figure 15.
Traffic Flow under Different Traffic Rules

As shown in Figure 15, in light traffic the flows under the "keep-right-except-to-pass" rule and the "slow-cars-to-right" rule are larger, while the flow under "free driving & free overtaking" is relatively small.

In heavy traffic, the flows under the three rules tend to coincide. This is easy to understand: the more frequent jams make the advantages of the better-performing rules hard to discern.

# Safety Index

We plot safety index $\alpha_{safe}$ against vehicle density $n$ in Figure 16.

![](images/4a7ee1ad681e239cb661fe9100e0c6650728b129188785b030543d9dee233dd3.jpg)
Figure 16. Safety Index under Different Traffic Rules (lower is safer)

In light traffic, conditions under the "keep-right-except-to-pass" rule and the "slow-cars-to-right" rule are much safer than under the "free driving & free overtaking" rule.

In heavy traffic, just as with traffic flow, the safety indices under the three rules tend to coincide, because vehicles drive at much lower speeds, which makes the overall condition safer.

# Average Energy Consumption

We plot average energy consumption $E_0$ against vehicle density $n$ in Figure 17.

![](images/869efba8ee93a0c3f83719fdb4a46294713d21915bb453bff92f39b1d5674e59.jpg)
Figure 17. Average Energy Consumption under Different Traffic Rules (lower is better)

Since a large portion of energy consumption comes from traffic jams, and erratic lane changing easily results in traffic jams, the "free driving & free overtaking" rule is the most energy-costly. This trend is most evident in light traffic, when acceleration is fairly easy.

In heavy traffic, however, lane changing and acceleration become much more difficult, and most of the energy consumption comes from the accelerations and decelerations inside jams; therefore the average energy consumption under the three rules tends to be the same.
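For reference, the slow-cars-to-right results entering these comparisons come from the 1:1 lane weighting of Section 4.1, which can be sketched as (the function name is ours; it applies identically to $Q$, $\alpha_{safe}$, and $E_0$):

```python
def mix(fast_lane_value, slow_lane_value):
    """1:1 weighting of the two disconnected lanes under the
    slow-cars-to-right rule (Section 4.1)."""
    return 0.5 * fast_lane_value + 0.5 * slow_lane_value

# Fast-car fraction check: one lane of 100% fast cars and one lane of
# 60% fast cars average to the 80% mix used in the earlier simulations.
assert abs(mix(1.0, 0.6) - 0.8) < 1e-12
```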
# - Summary & Suggestions

From the figures above, we find that the "free driving & free overtaking" rule is inferior to the "slow-cars-to-right" rule and the "keep-right-except-to-pass" rule. This accords with real-life practice, because most countries adopt a combined policy of "slow-cars-to-right" and "keep-right-except-to-pass".

Meanwhile, it is unrealistic to adopt the pure "slow-cars-to-right" rule, because totally forbidding lane changing and overtaking is unreasonable. Our suggestion is to maintain the "keep-right-except-to-pass" rule and, in addition, construct separate fast and slow lanes for freeways that frequently suffer from jams and obstructions.

# 5 Modeling for Intelligent-System Control

In this section, we make the general assumption that the freeway is equipped with an intelligent system. The intelligent system knows the exact velocity and location of every vehicle on the freeway and controls them jointly. We aim to propose a new traffic rule under which the overall traffic flow of the freeway can be even larger.

# 5.1 "Median" Optimization

# Model Explanation

We now introduce a rule called the "median" rule. The basic idea is to keep vehicles with higher speeds mostly on the left lane and vehicles with lower speeds on the right lane, reducing the chance of fast cars following slow cars. The separation point is the median speed, which is easily known by a system that controls all the vehicles. The advantage of the median speed is that it adjusts automatically to the current speed distribution, directing exactly half of the cars to the left lane and half to the right. Therefore, unlike a fixed speed threshold, this rule keeps the left and right lanes balanced in both light and heavy traffic.

The rule is illustrated in Figure 18.

![](images/c86ad3c0719d0c6fe7a93d67078bd40a6191427f48cfcbb134f217db853ba023.jpg)
Figure 18.
"Median" Illustration

# Model Solution and Analysis

Under the same maximum velocities, we plot traffic flow, safety index, and average energy consumption against vehicle density; see Figures 19-21.

![](images/965a64cb446aabf5b78e2e515f93d74af13166752e82cfaf38657aa1e6022e0c.jpg)
Traffic Flow Q against Vehicle Density n

![](images/283af791a1b7e8e4d8b862cff18258c702179e4c78ec10d35882c04267aba9c1.jpg)
Figure 19. Relationship of Traffic Flow and Vehicle Density
Safety Index $\alpha_{safe}$ against Vehicle Density n
Figure 20. Relationship of Safety Index and Vehicle Density (lower is safer)

![](images/51567940fe3fcdaf9bf9d8d89ad936d22ace68d514b26e89f4c82548d89d64d1.jpg)
Average Energy Consumption $E_0$ against Vehicle Density n
Figure 21. Relationship of Average Energy Consumption and Vehicle Density (lower is better)

The figures above show that the "median" optimization does perform better in terms of traffic flow, safety, and energy consumption, even though the improvement is not dramatic. This also indicates that the current "keep-right-except-to-pass" rule is already quite mature.

# 5.2 Possible Further Improvements

The "median" rule is only a simple one; more of the advantages of an intelligent system can be exploited. In the discrete (CA) model, a driver can only adjust his speed according to the distance to the cars immediately in front of or behind him, because he knows nothing about the locations of other cars or the intentions of their drivers. Therefore, when he is in a car flow without any gaps, as shown in Figure 22, the only thing he can do is stop and wait for the car in front to move. That is why we only compare his current velocity $V$ with $\Delta X_p^f$ and $\Delta X_o^f$.

![](images/54cc511e040f4dc3030415a6169dd798fc768e378206007f1793f38d6d848b76.jpg)
Figure 22. Illustration of Online Algorithm

An intelligent system, by contrast, controls all the cars at the same time.
Under such circumstances, it can optimize traffic flow by accelerating the three cars together with the same acceleration. This approach is extremely effective in traffic jams.

# 6 Superiority and Weakness

# 6.1 Superiority Analysis

- Through reasonable discretization and the Monte Carlo method, the CA model properly simulates normal driving and overtaking on real-life double-lane freeways;
- The solution of the CA model can be presented in various forms and by various standards, and visualization makes the results even more intuitive;
- Our model is highly flexible: only minor modifications are needed to simulate different traffic rules and parameters;
- By comparing with other traffic rules, we can fully assess the influence of the keep-right-except-to-pass rule from various aspects;
- By designing algorithms for an intelligent system, our model offers reference value for future freeway construction;
- Our continuum model assigns realistic values to its parameters, confirming the results derived from the discrete model from a macroscopic perspective.

# 6.2 Weakness Analysis

- Due to the restrictions of computer simulation, some of our assumptions are highly idealized, such as the length of vehicles and the value of acceleration.
- The core of our system depends on computer simulation; therefore the calculation of key quantities such as traffic flow and the safety index is time-consuming.

# 6.3 Future Research

# 6.3.1 Possible Optimization of the Discrete Model

- If techniques permit, we would make the computer simulation as close as possible to realistic situations. One possible measure is to use 1 minute as the basic time unit and assign realistic values to the lattice length (such as $0.5\,\mathrm{km}$). Since most vehicles travel at velocities between $60~\mathrm{km/h}$ and $120~\mathrm{km/h}$, a vehicle is then expected to pass 2-4 lattices within one time unit.
- For a more accurate approximation, vehicle profiles should be taken into consideration. For example, limousines, sedans, and jeeps are usually small but relatively fast, so we can assign them a one-lattice length and a larger maximum velocity; trucks, on the other hand, could be assigned a length of multiple lattices and a smaller maximum velocity.
- Finally, we could add a more specialized factor from the transportation field---the stopping sight distance and its limit. The stopping sight distance is used to measure several secure distances in driving. We can assign an integer to the stopping sight distance, denoted $d_0$. When the front gap is larger than the stopping sight distance $(\Delta X_p^f > d_0)$, we compare $V + 1$ with $d_0$ instead of with $\Delta X_p^f$ or $\Delta X_o^f$ to determine the vehicle's velocity at the next time interval. We can further incorporate weather conditions into the evaluation of the stopping sight distance. Such a model is of great importance in deciding whether to close freeways under severe weather conditions.

# 6.3.2 Possible Optimization of the Continuum Model

- In many countries around the world, the number of freeways is increasing dramatically. We can continue to develop multi-lane (more than two lanes) models. Speed limits could also be incorporated into the continuum model.
- Construct a systematic theory to analyze the graphical form of $\rho(x,t)$ and look for accurate analytical expressions.

# 7 Conclusion

We use a discrete model and a continuum model to study the influence of the "Keep-Right-Except-To-Pass" rule on traffic flow, safety index, and average energy consumption, respectively. Furthermore, to fully determine the pros and cons of the rule, we build models to analyze two alternative traffic rules---the "Slow-Cars-To-Right" rule and the "Free Driving & Free Overtaking" rule. Finally, we propose two possible rules to optimize traffic flow when the freeway is equipped with an intelligent system.
The results of our models are as follows:

# - "Keep-Right-Except-To-Pass" Rule

According to our model, traffic flow and safety index first rise to a peak value and then decrease to zero as vehicle density increases, while average energy consumption keeps rising, though at an abating rate. Furthermore, speed limits posted below or above our calculated optimal maximum velocity have different influences in light and heavy traffic. In addition, our model can be transferred to left-hand-traffic countries with a single change of orientation.

# - Alternative Traffic Rules

The "Keep-Right-Except-To-Pass" rule and the "Slow-Cars-To-Right" rule are superior to the "Free Driving & Free Overtaking" rule; a combination of the first two rules will have strengthening effects, especially on heavy-traffic freeways.

# - Intelligent System Control

An ideal online algorithm is discussed, and a "median" optimization method is simulated and shown to be beneficial for enlarging traffic flow.

# 8 References

[1] Peilin Li, "Globalization and the 'Left-Most' Policy of Automobiles", Journal of Financial Introduction, Vol. 8, 2004.
[2] Chowdhury, D., Wolf, D. E., & Schreckenberg, M. (1997). Particle hopping models for two-lane traffic with two kinds of vehicles: Effects of lane-changing rules. Physica A: Statistical Mechanics and its Applications, 235(3), 417-439.
[3] Giordano, Frank R. A First Course in Mathematical Modeling. Cengage Learning, 2013.
[4] Haijun Huang, Tieqiao Tang, Ziyou Gao: Continuum modeling for two-lane traffic flow, Acta Mech Sinica (2006) 22: 131-137.
[5] Tang Chang-Fu, Jiang Rui, Wu Qing-Song: Extended speed gradient model for traffic flow on two-lane freeways. Chinese Physics, 2007, 16(06).
[6] Lee H Y, Lee H W and Kim D 1999 Phys. Rev. E 59 5101
[7] Nagel K and Schreckenberg M 1992 J. Phys. I 2 2221
[8] Kerner B S and Konhäuser P 1993 Phys. Rev. E 48 2335
[9] Fu C J, Wang B H, Yin C Y and Gao K 2006 Acta Phys. Sin.
55 4032

# 9 Appendix

- C++ Source Code for Monte Carlo Simulation:

```cpp
// gen.h:
#ifndef __GEN
#define __GEN
#include
#include
#include