wuxiaojun committed
Commit 13653b2 · verified · 1 Parent(s): 3fbf685

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. MCM/1995-2008/1995MCM/1995MCM.md +0 -0
  2. MCM/1995-2008/1996MCM/1996MCM.md +0 -0
  3. MCM/1995-2008/1997MCM/1997MCM.md +0 -0
  4. MCM/1995-2008/1998MCM/1998MCM.md +0 -0
  5. MCM/1995-2008/1999MCM&ICM/1999MCM&ICM.md +0 -0
  6. MCM/1995-2008/2000MCM&ICM/2000MCM&ICM.md +0 -0
  7. MCM/1995-2008/2001ICM/2001ICM.md +0 -0
  8. MCM/1995-2008/2001MCM/2001MCM.md +0 -0
  9. MCM/1995-2008/2002ICM/2002ICM.md +0 -0
  10. MCM/1995-2008/2002MCM/2002MCM.md +0 -0
  11. MCM/1995-2008/2003ICM/2003ICM.md +0 -0
  12. MCM/1995-2008/2003MCM/2003MCM.md +0 -0
  13. MCM/1995-2008/2004ICM/2004ICM.md +0 -0
  14. MCM/1995-2008/2004MCM/2004MCM.md +0 -0
  15. MCM/1995-2008/2005ICM/2005ICM.md +0 -0
  16. MCM/1995-2008/2005MCM/2005MCM.md +0 -0
  17. MCM/1995-2008/2006ICM/2006ICM.md +0 -0
  18. MCM/1995-2008/2006MCM/2006MCM.md +0 -0
  19. MCM/1995-2008/2007ICM/2007ICM.md +0 -0
  20. MCM/1995-2008/2007MCM/2007MCM.md +0 -0
  21. MCM/1995-2008/2008ICM/2008ICM.md +0 -0
  22. MCM/1995-2008/2008MCM/2008MCM.md +0 -0
  23. MCM/2010/2010MCM&ICM/2010MCM&ICM.md +0 -0
  24. MCM/2010/A/2010-MCM-A-Com/2010-MCM-A-Com.md +107 -0
  25. MCM/2010/A/6749/6749.md +1158 -0
  26. MCM/2010/A/7571/7571.md +779 -0
  27. MCM/2010/A/7586/7586.md +284 -0
  28. MCM/2010/A/7920/7920.md +528 -0
  29. MCM/2010/B/2010-MCM-B-Com/2010-MCM-B-Com.md +53 -0
  30. MCM/2010/B/2010-MCM-B-Com2/2010-MCM-B-Com2.md +83 -0
  31. MCM/2010/B/7273/7273.md +390 -0
  32. MCM/2010/B/7507/7507.md +639 -0
  33. MCM/2010/B/7947/7947.md +439 -0
  34. MCM/2010/B/8362/8362.md +558 -0
  35. MCM/2010/B/8449/8449.md +1597 -0
  36. MCM/2010/B/8479/8479.md +476 -0
  37. MCM/2010/C/2010-ICM-Com-A/2010-ICM-Com-A.md +51 -0
  38. MCM/2010/C/2010-ICM-Com-J/2010-ICM-Com-J.md +87 -0
  39. MCM/2010/C/6947/6947.md +263 -0
  40. MCM/2010/C/7812/7812.md +370 -0
  41. MCM/2010/C/8048/8048.md +333 -0
  42. MCM/2010/C/8088/8088.md +244 -0
  43. MCM/2011/2011MCM&ICM/2011MCM&ICM.md +0 -0
  44. MCM/2011/A/9159/9159.md +451 -0
  45. MCM/2011/B/10496/10496.md +552 -0
  46. MCM/2011/B/11759/11759.md +967 -0
  47. MCM/2011/B/12114/12114.md +582 -0
  48. MCM/2011/B/2011-MCM-B-Com/2011-MCM-B-Com.md +98 -0
  49. MCM/2011/B/2011-MCM-B-Com2/2011-MCM-B-Com2.md +65 -0
  50. MCM/2011/B/9440/9440.md +0 -0
MCM/1995-2008/1995MCM/1995MCM.md ADDED
The diff for this file is too large to render. See raw diff
MCM/1995-2008/1996MCM/1996MCM.md ADDED
MCM/1995-2008/1997MCM/1997MCM.md ADDED
MCM/1995-2008/1998MCM/1998MCM.md ADDED
MCM/1995-2008/1999MCM&ICM/1999MCM&ICM.md ADDED
MCM/1995-2008/2000MCM&ICM/2000MCM&ICM.md ADDED
MCM/1995-2008/2001ICM/2001ICM.md ADDED
MCM/1995-2008/2001MCM/2001MCM.md ADDED
MCM/1995-2008/2002ICM/2002ICM.md ADDED
MCM/1995-2008/2002MCM/2002MCM.md ADDED
MCM/1995-2008/2003ICM/2003ICM.md ADDED
MCM/1995-2008/2003MCM/2003MCM.md ADDED
MCM/1995-2008/2004ICM/2004ICM.md ADDED
MCM/1995-2008/2004MCM/2004MCM.md ADDED
MCM/1995-2008/2005ICM/2005ICM.md ADDED
MCM/1995-2008/2005MCM/2005MCM.md ADDED
MCM/1995-2008/2006ICM/2006ICM.md ADDED
MCM/1995-2008/2006MCM/2006MCM.md ADDED
MCM/1995-2008/2007ICM/2007ICM.md ADDED
MCM/1995-2008/2007MCM/2007MCM.md ADDED
MCM/1995-2008/2008ICM/2008ICM.md ADDED
MCM/1995-2008/2008MCM/2008MCM.md ADDED
MCM/2010/2010MCM&ICM/2010MCM&ICM.md ADDED
MCM/2010/A/2010-MCM-A-Com/2010-MCM-A-Com.md ADDED
@@ -0,0 +1,107 @@
# Judges' Commentary:

# The Outstanding Sweet Spot Papers

Michael Tortorella

Rutgers University

Piscataway, NJ

mtortore@rci.rutgers.edu

# Introduction

Apparently the march of technology in Major League Baseball (MLB) is more of a crawl. The basic tools of baseball have not changed or been substantially modified for a long time. It would seem that the business goals of MLB are being adequately met with tools that are decades—if not centuries—old.

In particular, the baseball bat is pretty much the same implement that it was when Abner Doubleday walked the earth. It is not often that a tool persists basically unchanged without some improvement being brought to bear. Some began to wonder what properties such a remarkable tool might possess.

# A Few Words About the Problem

Like most problems in the Mathematical Contest in Modeling (MCM)®, this problem was deliberately designed to be open-ended. In particular, the key phrase "sweet spot" in the statement of the problem was not defined. This was fortunate, because teams brought many definitions forward, and this produced a richer experience not only for the teams but also for the judges. Some of the useful interpretations of "sweet spot" included:

- the spot where a batted ball would travel farthest,
- the spot where the sensation of vibration in the batter's hands is minimized,
- the center of percussion,
- the location that produces the greatest batted-ball speed, and
- the location where maximum energy is transferred to the ball.
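One of these interpretations, the center of percussion, is easy to make concrete. A minimal sketch (not taken from any contest paper): it models the bat as a uniform rod pivoted at the knob, and both the rod idealization and the numbers are illustrative assumptions.

```python
def center_of_percussion(i_pivot, mass, d_cm):
    """An impact at q produces no reaction impulse at the pivot:
    q = I_pivot / (m * d_cm), measured from the pivot."""
    return i_pivot / (mass * d_cm)

# Uniform rod of length L pivoted at one end: I_pivot = m*L^2/3 and
# the center of mass sits at L/2, so q = 2L/3 regardless of mass.
L = 0.84  # m, illustrative bat length
m = 0.90  # kg, illustrative bat mass
q = center_of_percussion(m * L**2 / 3.0, m, L / 2.0)
```

For a real bat the mass distribution is far from uniform, so `i_pivot` and `d_cm` would come from measurement rather than the rod formula.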
Several other definitions or interpretations are easily found through even a cursory literature search. Teams that did not discover this were generally eliminated in triage.

This observation compels us to consider also the relationships among problem statements, the Internet, and competing teams. It is extremely difficult, if not impossible, to imagine a problem that would be suitable for the MCM and for which there has been no prior art. Truly original problems, ones at which the MCM teams are the first to have a go, must be rare. Sometimes we see a situation in which the proposed problem, while familiar in its most general form, is novel as an application to a particular situation that has received scant prior attention.

An example of this kind of problem is the Tollbooth Problem of the 2005 MCM. While it would have been nearly impossible to find prior art applied specifically to the situation presented (namely, to barrier tolls on the Garden State Parkway in New Jersey), standard methods of queueing theory and dimensional analysis could be brought to bear.

In general, the Internet provides teams with a powerful resource to help find what, if anything, has been done on a topic before. This is a recent development that was not in play even a decade ago. Teams, coaches, and judges need to find a fair way of coping with this changed situation:

- On one end of the spectrum, it is not reasonable for a team simply to copy what they find on the Internet and submit this as their solution. No one learns any modeling from this.
- At the other end of the spectrum, teams may develop entirely new models that do not resemble anything found online. While this may be desirable, it is probably unusual.

Most submitted papers will fall somewhere between these extremes. The challenge for everyone is to make the MCM a learning experience for the teams and an enriching one for the judges in the face of this new technology. A general discussion of this issue is beyond the scope of this article; suffice it to say that for this particular problem, the presence on the Internet of substantial material on solving the problem was appropriately treated by the winning teams. Teams that simply copied material from sources without adding any value of their own were not considered winning teams.

# Interpretation Is Important

As always, interpretation is a key to success in modeling problems. Teams must recognize that in addition to their usual semantic or prose usage, key words in the problem statement must also be given a mathematical meaning in the context of a model. Successful papers began by providing definitions of at least two possible interpretations of "sweet spot." Once that is accomplished, it begins to be possible to talk in quantitative terms about how to determine such a sweet spot (or spots).

# Modeling

Whatever model is chosen, it is necessary to produce an expression relating the sweet spot (SS) to physical parameters of the batter-bat-ball system. For instance, the Zhejiang University team investigated the SS as the location on the bat where the batted-ball speed is greatest upon leaving the bat. The team then developed a relationship between this definition and

- impact location,
- ball mass,
- initial ball speed,
- the moment of inertia of the bat,
- the swinging bat speed, and
- the coefficient of restitution (COR) of the ball.

This team made good use of clear illustrations to help the reader grasp the work involved.

The Huazhong University of Science and Technology team made use of a weighted average of two SS criteria and found, not surprisingly, that the resulting location of the SS is a compromise between batter comfort and batted-ball departure speed. This is a nice example of how a team amplified results available on the Internet to generate new insights. The Princeton University team defined the SS as the location on the bat that imparts maximum outgoing velocity to the batted ball.

An interesting comment on the choice of SS criteria is that most teams did not explicitly connect their choice to the strategy of the game. That is, the criteria for the SS should be related in some explicit way to the result that the batter is trying to achieve, namely, to score runs. From this point of view, criteria such as "maximum batter comfort" are perhaps secondary desirable features but are probably not the most important ones in the short term. It may be more suitable to choose criteria such as maximum batted-ball departure velocity, maximum location controllability, or something else directly related to producing runs. Most teams accepted their criteria as being implicitly connected with the results of the game, but few if any discussed this point—clearly a key point!—at all.

The Outstanding teams were able to develop clear equations, based on the dynamics of the batter-bat-ball system, for the location of the SS. Most teams followed this approach, but the Outstanding papers were especially clearly reasoned and made good use of illustrations to help clarify points for the reader.

The contest weekend is a busy weekend, but those papers that took the time to help the reader with good organization, clear writing, and attractive presentation received more favorable reviews. Of course, these desirable features cannot make up for a weak solution; but the lack of them can easily obscure a good solution and make it harder to discern. This is not a trivial concern, because triage reads are very fast, and it would be distressing if a triage judge were to pass over a worthwhile paper because its presentation made its solution quality hard to discern.

Some teams, including the Huazhong University of Science and Technology team, expressed their results very precisely (for example, that the SS is 20.15 cm from the end of the bat). This may be more than is required, partly because of the limited precision of real-world measuring instruments, but also because teams should be aware that stating a result in such a fashion compels a sensitivity analysis for that quantity. The Outstanding teams determined that even though a location for the SS could be calculated, the point of impact of the ball with the bat could vary somewhat from the SS without too much change in the value of the objective function (e.g., the batted-ball departure velocity). The Huazhong University of Science and Technology team, as well as several other teams, defined a "Sweet Zone" to capture the notions that

- different SS criteria lead to different locations on the bat, and
- most of the objective functions employed are not very sensitive to the specific location of the bat-ball impact.
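The flatness behind such a "Sweet Zone" is easy to exhibit. A minimal sketch, hedged: the objective below is a toy quadratic stand-in for batted-ball speed, and every number in it is an illustrative assumption rather than anything from the papers.

```python
def sweet_zone(objective, grid, tol=0.01):
    """Return (best_x, zone): the grid maximizer and every grid point
    whose objective value is within a fraction tol of the maximum."""
    vals = [objective(x) for x in grid]
    best = max(vals)
    zone = [x for x, v in zip(grid, vals) if v >= (1 - tol) * best]
    return grid[vals.index(best)], zone

# Toy objective: batted-ball speed (m/s) falling off quadratically
# around a nominal sweet spot 0.15 m from the barrel end.
f = lambda b: 44.0 - 800.0 * (b - 0.15) ** 2
grid = [i / 1000 for i in range(0, 301)]   # 0 to 0.30 m in 1 mm steps
best, zone = sweet_zone(f, grid)
```

Even with this sharply peaked toy objective, the set of impact points within 1% of the maximum is several centimeters wide, which is exactly the judges' point about over-precise answers.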
# Conclusion

Studying these Outstanding papers offers good lessons in preparing entries for the MCM. Here are a few:

- Make your paper easy to read. That means at the very least:
  - number the pages and the equations,
  - check your spelling and grammar,
  - provide a table of contents, and
  - double-space the text (or at least use a font size large enough for easy readability).

  All four Outstanding papers did a good job with this.

- Good organization will not make up for poor results, but poor organization can easily overwhelm good results—and make them hard to dig out. It can help to organize the paper into sections corresponding to the requirements in the problem statement and into subsections corresponding to parts of the problem. The teams from the U.S. Military Academy and Princeton University did an especially good job with this.
- Define all terms that a reader might find ambiguous. In particular, any term used in the model that also has a common prose meaning should be carefully considered.
- Complete all the requirements of the problem. If the problem statement says that certain broad topics are required, begin by making an outline based on those requirements. Typical examples are the statement and discussion of assumptions, strengths and weaknesses of the model, and sensitivity analysis.
- Read the problem statement carefully, looking for key words implying actions: "design," "analyze," "compare," and other imperative verbs. These are keys to the work that you need to do and to the sections that your paper ought to contain.
- When you do "strengths and weaknesses" or sensitivity analysis, go back to your list of assumptions and make sure that each one is addressed. This is your own built-in checklist aiding completeness; use it.
- Your summary should state the results that you obtained, not just what you did. Keeping the reader in suspense is a good technique in a novel, but it simply frustrates judges who typically read dozens of papers in a weekend. The Princeton University paper has an excellent summary: crisp, concise, and thorough.
- Use high-quality references. Papers in peer-reviewed journals, books, and government websites are preferred to individuals' websites. Note also that it is not sufficient to copy, summarize, or otherwise recast existing literature; judges want to see your ideas. It's okay to build on the literature, but there must be an obvious contribution from the team.
- Verify as much as you can. For example, the physical characteristics of baseballs and baseball bats are readily verifiable. Make whatever sanity checks are possible: Is your answer for the departing ball's speed faster than the speed of light? If so, should it be?
- Finally, an Outstanding paper usually does more than is asked. For example, the team from the U.S. Military Academy (as did many other teams that lacked other qualities needed to be Outstanding) studied two different models for the problem and compared the results from each approach; the reasonably good agreement that they obtained showed that either

  - they were on the right track; or
  - they were victims of very bad luck, in that both of the methods gave nearly the same bad answers!

# About the Author

![](images/1ef472e2dfcf7dd2841942a02b430c14b216a9967935f47f45e0152c285b085c.jpg)

Mike Tortorella is Visiting Professor at RUTCOR, the Rutgers Center for Operations Research at Rutgers, the State University of New Jersey, and Managing Director of Assured Networks, LLC. He retired from Bell Laboratories as a Distinguished Member of the Technical Staff after 26 years of service. He holds the Ph.D. degree in mathematics from Purdue University. His current interests include stochastic flow networks, network resiliency and critical infrastructure protection, and stochastic processes in reliability and performance modeling. Mike has been a judge at the MCM since 1993 and particularly enjoys the MCM problems that have a practical flavor of mathematical analysis of social policy. Mike enjoys amateur radio, playing the piano, and cycling.
MCM/2010/A/6749/6749.md ADDED
@@ -0,0 +1,1158 @@
# Summary

Aiming to determine the location of the "sweet spot" and the differing "sweet spot" effects of uncorked, corked, and metal bats, we employ methods from dynamics to build models and generate batted-ball-speed data by simulation; the results closely match actual data obtained from experiments.

Based on classical mechanics, we first develop a model describing the collision between ball and bat, from which we obtain the distribution of batted-ball speed (BBS) as a function of the impact location. We then deduce the location of the "sweet spot", where the BBS reaches its maximum. From all the cases studied, we conclude that the "sweet spot" is about $140\,\mathrm{mm}$ from the end of the bat.

Considering the more complex structure of the corked bat, we augment our basic model by building a double-spring model and adopting three empirical formulas. We can then apply the same analytical method to examine the trampoline effect. We analyze the "sweet spot" effect as a function of the geometric parameters of the corked hole and run simulations. The results, illustrated in mesh figures, demonstrate that the "sweet spot" effect of a corked bat depends significantly on the density of the stuffing material: "corking" with rubber enhances the "sweet spot" effect, while corking with cork weakens it.

Based on our models, we design a special metal bat exhibiting faster BBS and similar controllability compared with ordinary wood bats. From this special case we reach a general conclusion: metal bats tend to outperform wood bats once their technical parameters are optimized.

Moreover, on the basis of our models, we provide design tips and formulas for corked and metal bats that significantly enhance the "sweet spot" effect or make the bat easier to control. In conclusion, our model achieves our goal of being useful both for illumination and for application.
# Contents

1 Introduction
2 Assumptions
3 Symbols
4 Details of the Model
4.1 Model Overview
4.2 Fundamental Model
4.2.1 Velocity for the Departing Ball
4.2.2 Find the Location of "Sweet Spot"
4.3 Advanced Model
4.3.1 Overview of Corking
4.3.2 Double-Spring Model
4.3.3 Determining Equivalent BBCOR by DS Model
4.3.4 Evaluating the "Sweet Spot" Effect
5 Simulation and Analysis
5.1 Basic Data Used in Simulation
5.2 Solution to Problem I
5.2.1 Simulation
5.2.2 Where is the "Sweet Spot"?
5.3 Solution to Problem II
5.3.1 BBS Formula Simulation
5.3.2 "Sweet Spot" Effects of Corking
5.4 Solution to Problem III
5.4.1 An Illustration of a Typical Metal Bat
5.4.2 Predicting Behavior of Metal and Wood Bats
5.4.3 Reason for MLB's Prohibition of Metal Bats
6 Technique Tips for Bat Design
6.1 Optimum Mass for Better "Sweet Spot" Effect
6.2 Designing a Special Aluminum Bat
7 Discussion and Conclusion
7.1 Model Validation
7.2 Bending Vibration
7.3 Problems Review
7.4 Strengths
7.5 Weaknesses
8 References
Appendix I
Appendix II
Appendix III
# An Identification of "Sweet Spot"

# 1 Introduction

Baseball not only enjoys great popularity among young people but also exerts a certain fascination on physicists. Adair, Brody, Cross, Nathan, and Russell have all published a number of notable papers addressing both experimental and theoretical issues involved in the ball-bat collision, with methods ranging from classical mechanics models to finite-element simulation. The "sweet spot", the corked bat, the trampoline effect, vibration, and metal bats have all been studied to varying extents with different approaches. However, that research focuses on explaining the physical phenomena occurring before, during, and after the ball-bat collision, so its results may not be directly appropriate for the proposed problems. We need to develop a model that neglects the transient process of the collision, possesses a concise and clear physical meaning, and is problem-oriented.

The three proposed problems are:

- Problem I: Explain why the "sweet spot" is not at the end of the bat.
- Problem II: Confirm the fact that "corking" enhances the "sweet spot" effect and explain why MLB prohibits "corking".
- Problem III: Predict different behavior for wood and metal bats and explain why MLB prohibits metal bats.

The bat-ball collision is complicated by the nonlinear compression behavior of the ball and the vibration behavior of the bat. With the help of the current literature, we first neglect the influence of certain factors, where this can be reasonably justified, and build a classical mechanics model to find the location of the "sweet spot", at which maximum power is transferred to the ball in the collision. We obtain the "sweet spot" location by solving the dynamic equations, while examining a number of more specific problems and cases. All parameters in the resulting expression have clear physical meanings and can be measured easily. We even aim for an expression involving only basic arithmetic operations, so that it can guide baseball practice.

As for corked bats and metal bats, their differences in structure and material may result in better or worse performance compared with ordinary wood bats. We therefore augment the basic model by examining the physical meanings of its parameters in depth and modifying them to fully describe corked and metal bats. On the basis of our model's results, we provide an explanation for MLB's banning of corked bats and metal bats.

# 2 Assumptions

1. No consideration is given to failure of the bat. Our model is developed under the precondition that both the uncorked and the corked bat remain intact.
2. The loss of energy caused by bending vibration is neglected (except in the Bending Vibration section).
3. The uncorked bat is uniform in density and symmetric about its axis.
4. The stuffing of the corked bat shares its axis with the corked bat.

# 3 Symbols

| Symbol | Meaning |
| --- | --- |
| COR | the coefficient of restitution |
| BBCOR | the coefficient of restitution of the bat-ball system |
| CM | the center of mass |
| MOI | the moment of inertia with respect to the CM |
| BBS, $v_f$ | the batted-ball speed |
| $J$, $J_{CM}$ | MOI |
| $e^*$ | the equivalent BBCOR |
| $b_{\text{sweet spot}}$ | the location of the "sweet spot" |
| $V_i$, $\omega_i$ | the swinging speed and angular speed |
| $z_{cm}$, $a$ | the location of the CM from the knob |
# 4 Details of the Model

# 4.1 Model Overview

To find the "sweet spot", we propose a classical mechanics model that disregards bending vibration. We derive an expression for the batted-ball speed (BBS) as a function of (1) impact location, (2) ball mass, (3) initial ball speed, (4) the moment of inertia of the bat, (5) the swinging bat speed, and (6) the coefficient of restitution (COR). We choose a standard wood bat and employ an analytical method to obtain the "sweet spot" location, letting the initial ball speed and swinging bat speed vary over meaningful, practical ranges. The "sweet spot" lies at a point about $15\,\mathrm{cm}$ from the end of the bat.

Then we build a double-spring (DS) model to modify the BBCOR by elaborating the collision with a corked bat. We combine the two models to calculate the BBS, considering the effects of corking on three parameters. We show that a typical cork "corking" weakens the "sweet spot" effect, whereas a typical rubber "corking" enhances it.

To predict and compare the different behavior of wood and metal bats, we first design a special aluminum bat with the same mass, outline shape, mass center, and moment of inertia as the standard wood bat. We demonstrate that this special metal bat has a faster $BBS$ and the same controllability. From this special case we reach a general conclusion: by optimizing technical parameters, metal bats tend to have faster $BBS$ with the same controllability. Finally, we list some possible negative effects posed by metal bats and explain MLB's prohibition of them.

# 4.2 Fundamental Model

![](images/33a2de83665a7cee9c9fdee76f36209a288e422b76466cc8bead9e9c58a5a5a7.jpg)
Figure 1. Illustration of theoretical derivation
Based on the Momentum Theorem, we obtain

$$
I = M\left(V_f - V_i\right) + m\left(v_f \cos\phi - v_i \cos\theta\right) \tag{1}
$$

in which

$I$ is the linear impulse in the $x$ direction that the batter communicates to the ball-bat system during the period of contact, $\mathrm{kg \cdot m/s}$;

$M$ is the mass of the bat, $\mathrm{kg}$;

$V_f$ is the velocity of the mass center of the bat at contact after the collision (positive or negative), $\mathrm{m/s}$;

$V_i$ is the velocity of the mass center of the bat at contact before the collision (positive or negative), $\mathrm{m/s}$;

$m$ is the mass of the ball, $\mathrm{kg}$;

$v_f$ is the velocity of the mass center of the ball at contact after the collision (positive or negative), $\mathrm{m/s}$;

$v_i$ is the velocity of the mass center of the ball at contact before the collision (positive or negative), $\mathrm{m/s}$;

$\phi, \theta$ are the angles of the directions of the ball as illustrated in Figure 1.

Based on the Angular Momentum Theorem, we obtain

$$
L = m b\left(v_f \cos\phi - v_i \cos\theta\right) + J\left(\omega_f - \omega_i\right) \tag{2}
$$

in which

$L$ is the angular impulse about the direction of $\omega$ that the batter communicates to the ball-bat system during the period of contact, $\mathrm{kg \cdot m^2/s}$;

$b$ is the distance from the hit spot to the center of mass of the bat, $\mathrm{m}$;

$J$ is the moment of inertia of the bat about its center of mass, $\mathrm{kg \cdot m^2}$;

$\omega_f$ is the angular velocity of the bat at contact after the collision, with respect to a vertical axis of rotation passing through its mass center, $\mathrm{rad/s}$;

$\omega_i$ is the angular velocity of the bat at contact before the collision, with respect to a vertical axis of rotation passing through its mass center, $\mathrm{rad/s}$.

By the definition of the coefficient of restitution, we obtain

$$
e\left[\left(V_i + \omega_i b\right) - v_i \cos\theta\right] = v_f \cos\phi - \left(V_f + \omega_f b\right) \tag{3}
$$

in which $e$ is the coefficient of restitution of the bat-ball system (BBCOR; in this model, dependent only on the materials).
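For a ball striking a hard, immovable surface ($V_i = V_f = \omega_i = \omega_f = 0$), Eq. 3 collapses to the familiar ratio of rebound speed to incoming speed, which a drop test estimates via $e = \sqrt{h_{\mathrm{rebound}}/h_{\mathrm{drop}}}$. A minimal sketch; the heights below are illustrative, not measured data:

```python
import math

def cor_from_drop(h_drop, h_rebound):
    """Estimate the COR against a rigid surface: impact speeds scale
    as sqrt(height), so e = v_out / v_in = sqrt(h_rebound / h_drop)."""
    return math.sqrt(h_rebound / h_drop)

# Illustrative drop test: released from 1.0 m, rebounding to 0.25 m.
e_est = cor_from_drop(1.0, 0.25)
```

For a real bat-ball collision the effective $e$ also depends on impact speed; treating it as a material constant is exactly the simplification this model makes.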
These three equations are sufficient to determine the three unknowns $V_f$, $\omega_f$, and $v_f$. In particular, we find the $x$ component of the velocity of the departing ball:

$$
v_f \cos\phi = \frac{(1 + e)\left(V_i + \omega_i b\right) + v_i \cos\theta \left(\frac{m}{M} + \frac{m b^2}{J} - e\right) + \frac{I}{M} + \frac{b L}{J}}{1 + \frac{m}{M} + \frac{m b^2}{J}} \tag{4}
$$
167
# 4.2.1 Velocity for the Departing Ball

# Case I: eliminating the impulses $I, L$
With a hard ball (high $COR$), the duration of contact is short, and the impulses $I$ and $L$ are therefore small. Once the collision has been initiated, there is little more for the batter to do [P. Kirkpatrick 1963]. This is also supported by Howard Brody's experiment, in which he observed the vibrations of a hand-held baseball bat [1989]. According to that experiment, the hand-held bat behaves as if it were a free body; in other words, the impulses $I$ and $L$ are nearly 0, which holds for both hardball and softball (more details in Appendix II).
# Case II: eliminating the linear velocity $\omega_{i} b$

When the collision takes place near the mass center of the bat, as it very frequently does, all terms containing $b$ may be deleted [P. Kirkpatrick 1963].

Therefore, in many cases it is quite reasonable to regard the impulses $I, L$ and the term $\omega_{i} b$ as zero. Simplified by these approximations, Eq. 4 reduces to
$$
v_{f} = \frac{(1 + e)(V_{i} + \omega_{i} b) + v_{i} \cos \theta \left(\frac{m}{M} + \frac{m b^{2}}{J} - e\right)}{\left(1 + \frac{m}{M} + \frac{m b^{2}}{J}\right) \cos \phi} \tag{5}
$$
To make the expressions simple and easy to understand, we introduce new symbols:

recoil factor $r = m/M + mb^2/J$ (6)

collision efficiency $q = (e - r)/(1 + r)$ (7)

pitch speed (positive) $v_{ball} = -v_{i}$

bat speed (positive) $v_{bat} = V_{i} + \omega_{i} b$

batted ball speed $BBS = v_{f}$
Then Eq. 5 becomes the following two expressions, both of which will be used in the later analysis:

$$
v_{f} = \frac{(1 + e)(V_{i} + \omega_{i} b) + v_{i} \cos \theta (r - e)}{(1 + r) \cos \phi} \tag{8}
$$

$$
BBS = \frac{q v_{ball} \cos \theta + (1 + q) v_{bat}}{\cos \phi} \tag{9}
$$
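The two expressions above can be checked numerically against each other. The sketch below evaluates Eq. 8 and Eq. 9 with the Table 2 parameters; the BBCOR value $e = 0.5$ and the impact point $b = 0.14\,m$ are illustrative assumptions, not measured values.

```python
import math

# Bat and ball parameters (Table 2 of this paper); e and b are assumptions.
m, M, J = 0.145, 0.885, 0.045   # ball mass (kg), bat mass (kg), bat MOI about CM (kg*m^2)
V_i, w_i = 24.0, 51.0           # bat CM speed (m/s), bat angular speed (rad/s)
v_i = -40.0                     # ball speed at contact (m/s, negative: toward the bat)
e = 0.5                         # assumed ball-bat COR
b = 0.14                        # assumed impact point, distance from bat CM (m)
theta = phi = 0.0               # head-on collision

# Eq. 8: batted ball speed directly from the collision solution
r = m/M + m*b**2/J                               # recoil factor, Eq. 6
v_f = ((1+e)*(V_i + w_i*b) + v_i*math.cos(theta)*(r-e)) / ((1+r)*math.cos(phi))

# Eq. 9: the same quantity via the collision efficiency q
q = (e - r) / (1 + r)                            # Eq. 7
v_ball, v_bat = -v_i, V_i + w_i*b
BBS = (q*v_ball*math.cos(theta) + (1+q)*v_bat) / math.cos(phi)

print(v_f, BBS)
```

Both routes give the same value (about 47 m/s for these inputs), as expected from the algebra relating $q$ to $e$ and $r$.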
# 4.2.2 Finding the Location of the "Sweet Spot"

Since the directions of the ball's velocity have little effect on the "sweet spot", we look for an expression for the "sweet spot" location only in the simple situation where $\theta = 0, \phi = 0$. If $\theta \neq 0$ or $\phi \neq 0$, the expression for the "sweet spot" location can be deduced in the same way, and the result shows no significant difference.
Substituting $m/M + mb^2/J$ for $r$ in Eq. 8 gives

$$
v_{f} = \frac{(1 + e)(V_{i} + \omega_{i} b) + v_{i} \left(\frac{m}{M} + \frac{m b^{2}}{J} - e\right)}{1 + \frac{m}{M} + \frac{m b^{2}}{J}} \tag{10}
$$
Note that when $\omega_{i} = 0$ (the bat has no initial rotational energy), this expression reduces to

$$
v_{f} = \frac{(1 + e) V_{i} + v_{i} \left(\frac{m}{M} + \frac{m b^{2}}{J} - e\right)}{1 + \frac{m}{M} + \frac{m b^{2}}{J}} \tag{11}
$$
It is clear that in this case $v_{f}$ has a maximum at $b = 0$, which indicates that the "sweet spot" is at the center of mass (CM) when the bat is stationary before the collision.

To obtain the value of $b$ at which $v_{f}$ reaches its maximum in the general case (the bat is not initially stationary), we differentiate the expression for $v_{f}$ with respect to $b$, set the result to zero, and get
$$
\omega_{i} b^{2} - 2 (v_{i} - V_{i}) b - \frac{\omega_{i} (M + m) J}{m M} = 0. \tag{12}
$$

This equation can be solved for $b$:

$$
b = \frac{v_{i} - V_{i}}{\omega_{i}} \pm \sqrt{\left(\frac{v_{i} - V_{i}}{\omega_{i}}\right)^{2} + \frac{J (m + M)}{m M}}. \tag{13}
$$

The "+" sign is used here because the value of $\frac{v_i - V_i}{\omega_i}$ is negative, and thus we get

$$
b = \frac{v_{i} - V_{i}}{\omega_{i}} + \sqrt{\left(\frac{v_{i} - V_{i}}{\omega_{i}}\right)^{2} + \frac{J (m + M)}{m M}}. \tag{14}
$$
This expression clearly shows that this point is not the COP location, since the value of $b$ depends on the ball and bat velocities and on the properties of the bat. In particular, the values of $m, M$ and $J$ can be obtained experimentally, while $v_{i}$, $V_{i}$ and $\omega_{i}$ are determined respectively by the pitcher throwing the ball and the hitter swinging the bat.

If $\left[\frac{J(m + M)}{mM}\right] / \left(\frac{v_i - V_i}{\omega_i}\right)^2$ is less than one, the square root can be expanded, and the result is

$$
b \cong \frac{(z + 1) \omega_{i} k^{2}}{2 \left(V_{i} - v_{i}\right)} \tag{15}
$$

where $z = M/m$, $k = \sqrt{J/M}$.
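A minimal numerical sketch of Eq. 14 and of the approximation Eq. 15, using the Table 2 parameters of this paper (the speeds are the quoted contest averages):

```python
import math

# Table 2 parameters: ball/bat masses, bat MOI, and contest-average speeds.
m, M, J = 0.145, 0.885, 0.045      # kg, kg, kg*m^2
v_i, V_i, w_i = -40.0, 24.0, 51.0  # ball speed, bat CM speed (m/s), bat angular speed (rad/s)

x = (v_i - V_i) / w_i              # this ratio is negative, so the "+" root is taken
c = J * (m + M) / (m * M)

# Eq. 14: exact "sweet spot" distance from the bat's center of mass
b_exact = x + math.sqrt(x**2 + c)

# Eq. 15: binomial approximation, with z = M/m and k^2 = J/M
z, k2 = M / m, J / M
b_approx = (z + 1) * w_i * k2 / (2 * (V_i - v_i))

print(b_exact, b_approx)
```

For these inputs the exact root gives $b \approx 0.137\,m$ from the CM, and the expansion overestimates it only slightly, since the expanded term is about 0.23 here rather than vanishingly small.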
# 4.3 Advanced Model

# 4.3.1 Overview of Corking

A corked bat is one that has had a cylinder hollowed out of the middle of its barrel end. The drilled hole may be about 1 inch in diameter and 6 to 10 inches deep; the removed material can be replaced with cork, rubber or Styrofoam, and the hole is finally capped [Nathan 2003]. Since only a "one piece of solid wood" bat is permitted in the MLB [official rules], an ordinary corked wood bat is considered in the following parts.
With the corked structure, not only is the bat lighter, but its center of mass, or balance point, moves closer to the hands. In technical physics language, the moment of inertia (MOI) of the bat about the CM is reduced, which makes the bat easier to control. Moreover, since the thickness of the bat's shell decreases after corking, the shell may be compressed during the collision with the ball and spring back, much like a trampoline, resulting in much less loss of energy than would be the case if the ball hit a completely rigid surface. In other words, the equivalent BBCOR (in the model above) is increased.

In summary, the physics of the corked-bat and ball collision can be fully described by the parameters of the model above; however, in contrast with the original (uncorked) bat, the corked bat exhibits different values of specific parameters, including the $MOI$, $M_{bat}$ and $COR$. By examining both the structural characteristics of the corked bat and the new physical meanings of the key parameters in the first model, we augment our first model by deriving the equivalent $BBCOR$ (related to the structure of the corked bat) and the varying values of the parameters.
# 4.3.2 Double-Spring Model

During the collision between the hollowed bat and the ball, the "spring" in the bat is excited (the shell of the bat is compressed), so that the trampoline effect can be observed on a small scale. Because of the hoop structure, a radial standing wave and the aforementioned bending wave add up to a resultant vibration.

![](images/12ba2875de051de39d0c8b7e57b8b4eadd9d4e325dea262cd391a8cad29dea96.jpg)
Figure 2. Longitudinal sections and cross sections for hoop modes of a standard bat [Russell, 2004]
To investigate the radial vibration without developing a novel vibration model, we consider the most common elastic element in physics, the spring, and use an uncorked bat with a spring attached to simulate the corked bat. It is worth mentioning that the first model incorporates the effects of elastic collision and energy loss entirely in the equivalent BBCOR; the advanced double-spring model, however, illuminates which new physical parameters make up the equivalent BBCOR: they are $k_{ball}, k_{bat}, e_{ball}, e_{bat}$.

Figure 3 shows the scheme of the model during the collision, where

$k_{ball}$ is the stiffness of the ball;

$k_{bat}$ is the stiffness of the bat;

$e_{ball}$ is the coefficient of restitution (COR) of an object of the ball's material bouncing off a stationary, completely elastic object (dependent only on the material);

$e_{bat}$ is the COR of an object of the bat's material bouncing off a stationary, completely elastic object (dependent only on the material).

![](images/aac75038959ed15094d36664d06902f025af3f02746df914e26958eff2233849.jpg)
Figure 3. Scheme of the double-spring model
# 4.3.3 Determining the Equivalent BBCOR with the DS Model

In a reference frame where the center of mass of the system ($CM_{BSB}$, consisting of ball, spring and bat) remains at rest, the collision can be divided into the following four stages:

i) The ball and bat (each with its spring) approach each other;
ii) The two springs contact and compress until the velocities (in the $CM_{BSB}$ reference frame) of bat and ball become zero;
iii) The ball and bat are accelerated by the springs respectively. In stages ii and iii, energy is lost through the interaction between the bat (ball) and its spring;
iv) The two springs are no longer in contact, and the ball and bat separate from each other.
By the definition of the coefficient of restitution (the ratio of the relative velocities before and after the collision) and its physical meaning in energy form, we obtain the equivalent BBCOR (with the trampoline effect and material effects included) in the home reference frame:

$$
e^{*} = \frac{v_{f} - V_{f}}{v_{i} - V_{i}} = \sqrt{\frac{E_{f}}{E_{i}}}. \tag{16}
$$

Considering the conservation of momentum and the process of the collision, the equivalent BBCOR in the Fundamental Model is as follows:

$$
e^{*2} = \frac{k_{bat}}{k_{ball} + k_{bat}} e_{ball}^{2} + \frac{k_{ball}}{k_{ball} + k_{bat}} e_{bat}^{2}, \tag{17}
$$
where

$e^{*2}$ is the fraction of energy restored to the ball and bat (as kinetic energy) after the collision;

$\frac{k_{bat}}{k_{ball} + k_{bat}}$ is the fraction of the initial energy stored in the ball;

$e_{ball}^{2}$ is the fraction of the stored energy returned as kinetic energy of the ball;

$\frac{k_{ball}}{k_{ball} + k_{bat}}$ is the fraction of the initial energy stored in the bat;

$e_{bat}^{2}$ is the fraction of the stored energy returned as kinetic energy of the bat.

(Note: for the mathematical deduction, see Appendix I.)
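Eq. 17 can be exercised directly. The sketch below uses illustrative stiffness and COR values (assumptions, not measured data) and checks the two limiting cases: a bat much stiffer than the ball returns $e_{ball}$, while a bat much softer than the ball approaches $e_{bat}$ (the trampoline limit).

```python
# Equivalent BBCOR from Eq. 17 of this paper; sample inputs are illustrative.
def equivalent_bbcor(k_ball, k_bat, e_ball, e_bat):
    total = k_ball + k_bat
    # Energy stored in each "spring" splits in inverse proportion to stiffness.
    e2 = (k_bat / total) * e_ball**2 + (k_ball / total) * e_bat**2
    return e2 ** 0.5

# Limit checks: a rigid bat (k_bat huge) returns the ball's own COR,
# and a very soft bat (k_bat small) approaches e_bat.
print(equivalent_bbcor(1.0, 1e9, 0.5, 0.98))   # close to e_ball
print(equivalent_bbcor(1e9, 1.0, 0.5, 0.98))   # close to e_bat
print(equivalent_bbcor(1.0, 1.0, 0.5, 0.98))   # somewhere in between
```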
# 4.3.4 Evaluating the "Sweet Spot" Effect

# Parameters of the bat: analysis

# Parameters affecting BBS

$\succ$ Angular and linear velocity $\omega_{i}, V_{i}$

According to Eq. 4, when $\omega_{i}$ and $V_{i}$ increase, the $BBS$ increases. Considering the limits of an athlete's physical strength, $\omega_{i}$ is constrained by $J$ and $V_{i}$ by $M$.

$\succ$ Recoil factor $r$

According to Eq. 4, an increase of $r$ leads to a lower BBS.
$\succ$ Moment of inertia and mass $J, M$

According to Eq. 4, when $J$ and $M$ increase, $r$ decreases, which results in a higher BBS. However, the increases of $J$ and $M$ respectively lead to decreases of $\omega_{i}$ and $V_{i}$, which make the BBS decrease.

$\succ$ Location of the "Sweet Spot" $b$

According to Eq. 4, we can obtain the location of the "Sweet Spot" $b_{\text{sweet spot}}$, and use $b_{\text{sweet spot}}$ to calculate the maximum BBS.

$\succ$ Equivalent BBCOR $e^{*}$

According to Eq. 4, an increase of $e^{*}$ leads to a higher BBS. Since $e_{bat}$ and $e_{ball}$ are determined by the materials, $e^{*}$ is determined only by $k_{ball}/k_{bat}$.
For clarity, we list the above analysis results in Table 1.

Table 1. The interrelationships between parameters and their effects on BBS

<table><tr><td>EFFECTS</td><td>ωi</td><td>Vi</td><td>r</td><td>e*</td><td>BBS</td></tr><tr><td>ωi ↑</td><td></td><td></td><td></td><td></td><td>◎</td></tr><tr><td>Vi ↑</td><td></td><td></td><td></td><td></td><td>◎</td></tr><tr><td>r ↑</td><td></td><td></td><td></td><td></td><td>◎</td></tr><tr><td>J ↑</td><td>↓</td><td></td><td>↓</td><td></td><td>◎ +◎ = ?</td></tr><tr><td>M ↑</td><td></td><td>↓</td><td>↓</td><td></td><td>◎ +◎ = ?</td></tr><tr><td>kball/kbat ↑</td><td></td><td></td><td></td><td>↑</td><td>◎</td></tr></table>
The independent parameters of the corked bat include only $M, J, e_{ball}, e_{bat}, k_{ball}, k_{bat}$, all of which are easily obtained:

$M, J$ can be measured easily with basic experimental instruments;

$e_{ball}, e_{bat}$ can be looked up in reference books such as the Mechanical Design Handbook;

$k_{ball}, k_{bat}$ can be obtained directly through a stiffness-measurement experiment, instead of through other complex models.
# Parameters affecting Easy Control

A lighter weight (smaller $M$) and a smaller swing weight (smaller $J$) also lead to better bat control [Nathan, 2004], which benefits a contact-type hitter, who is just trying to meet the ball squarely rather than achieve the highest batted ball speed. The batter can accelerate the bat to high speed more quickly with a corked bat, allowing the batter to react to the pitch more quickly, wait longer before committing to the swing, and change it more easily in mid-swing.
# Parameters Estimation

Swinging Angular Speed Estimated from $J$

As shown above, the angular velocity $\omega_{i}$ and the moment of inertia $J$ have coupled effects on the BBS. In order to determine the effect of an increment of $J$ on the BBS, we need an empirical equation describing the relationship between $\omega_{i}$ and $J$. We adopt the formula in Daniel A. Russell's paper [Russell, 2007]. Based on the analysis of the bat swing speed data from the Crisco-Greenwald field study and the data fitting done by Alan Nathan, the empirical estimate is

$$
\omega_{i} = 45.3 \left(\frac{J_{knob}}{16000}\right)^{-0.30769} \tag{18}
$$

If we know the location of the center of mass, we can use the parallel axis theorem to calculate $J_{knob}$; the conversion is roughly $J_{knob} = J + Ma^2$, with units of $oz \cdot in^2$.
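A small sketch of Eq. 18 combined with the parallel-axis conversion. The sample bat numbers (MOI, mass and knob-to-CM distance, in the oz·in² unit system the fit uses) are illustrative assumptions, chosen only to show that a larger swing weight yields a slower swing.

```python
# Empirical swing-speed estimate (Eq. 18, after Russell/Nathan): angular speed
# falls as the swing weight J_knob (in oz*in^2) grows. Sample values are assumed.
def swing_omega(J, M, a):
    """J: bat MOI about the CM (oz*in^2); M: mass (oz); a: knob-to-CM distance (in)."""
    J_knob = J + M * a**2          # parallel axis theorem, about the knob
    return 45.3 * (J_knob / 16000.0) ** -0.30769

# A heavier swing weight should give a slower swing:
w_light = swing_omega(2500.0, 31.0, 22.0)
w_heavy = swing_omega(3500.0, 31.0, 22.0)
print(w_light, w_heavy)
```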
Swinging Speed Estimated from $M$

Like $\omega_{i}$ and $J$, $V_{i}$ and $M$ have an analogous relationship and coupled effects on the BBS, so an empirical equation relating $V_{i}$ and $M$ is needed. We adopt the model of A. Terry Bahill and Miguel Morna Freitas [1995]. Based on the data for Leah, a member of the University of Arizona NCAA National Champion softball team, Bahill and Freitas fit the relationship between the bat mass ($M$, in oz) and the bat swing speed ($V_{i}$, in mph), obtaining the following formula:

$$
(M + 70.4)\left(V_{i} + 5.4\right) = 6032. \tag{19}
$$
![](images/8dc01896ac57f8f5428191917c39b74b3889fe9750599fb36f078bc0f40a147b.jpg)
Figure 4. The data used for the fitted formula, cited from Bahill and Freitas's paper
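Eq. 19 is easy to evaluate directly. For a 31 oz bat it predicts a swing speed of about 54 mph, consistent with the average $V_i$ of 24 m/s listed in Table 2, and a lighter bat swings faster:

```python
# Bat swing speed vs. bat mass from the Bahill-Freitas fit (Eq. 19):
# (M + 70.4)(V_i + 5.4) = 6032, with M in oz and V_i in mph.
def swing_speed_mph(mass_oz):
    return 6032.0 / (mass_oz + 70.4) - 5.4

v_31 = swing_speed_mph(31.0)   # the 31 oz wood bat of Table 2
v_28 = swing_speed_mph(28.0)   # a lighter (e.g. corked) bat swings faster
print(v_31, v_28)
```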
# > Stiffness of the Hoop Spring Estimated from the Thickness ($t$)

The observed fundamental hoop vibration mode, which accounts for the majority of the vibrational energy and is responsible for the trampoline effect, has a frequency of about $1\,\mathrm{kHz}$. This means that when the ball leaves the bat, about 1 ms after it first touches the bat [Brody, 1985], the fundamental hoop vibration mode has not yet been set up. In other words, at the moment the ball exits, it has "seen" neither the knob nor the tip of the bat. This fact indicates that we can treat the bat as a hoop spring.

The hoop stiffness constant $k_{bat}$ can be estimated from an empirical relationship:

$$
k_{bat} \propto \left(\frac{t}{R}\right)^{3}, \tag{20}
$$

where

$t$ is the thickness of the shell, $m$;

$R$ is the radius of the bat cross section, $m$.
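Since Eq. 20 only fixes a scaling law, the proportionality constant in the sketch below is arbitrary, and the shell dimensions are assumptions; the point is simply the cubic dependence: halving the shell thickness cuts the hoop stiffness by a factor of eight.

```python
# Cubic scaling of hoop stiffness with relative shell thickness (Eq. 20).
# k0 is an arbitrary proportionality constant; dimensions (m) are assumed.
def hoop_stiffness(t, R, k0=1.0):
    return k0 * (t / R) ** 3

k_full = hoop_stiffness(t=0.030, R=0.033)   # nearly solid wall
k_thin = hoop_stiffness(t=0.015, R=0.033)   # wall thinned by corking
print(k_thin / k_full)                      # halving t divides the stiffness by 8
```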
# Methods for Evaluating "Sweet Spot" Effects

# Overview of corking effects

# I) Effects of corking on the parameters

> Since the hollowed wood bat is filled with cork/rubber, which has a smaller/larger density, the bat has a smaller/larger $M$ and $J$ (in comparison with the uncorked bat), which leads to a larger/smaller $r$ (recoil factor) and a higher/lower $V_{i}$ and $\omega_{i}$.

> Since the shell has been thinned, the trampoline effect of the bat will be more evident than in the uncorked bat. Enlarging the trampoline effect significantly increases the $BBS$.
# II) Evaluating "Sweet Spot" Effects

# Maximum of $BBS$

Since the $BBS$ is a function of the location of the hitting point, reaching its maximum at the "Sweet Spot", we calculate the maximum $BBS$ to estimate the "Sweet Spot" effect of corked bats. Because corking has both positive and negative effects on the maximum $BBS$, a quantitative method is needed.

# $\succ$ Easy Control

Since both $M$ and $J$ decrease, we can be sure that the corked bat is easier to control.
# Methods for quantitative evaluation

![](images/e60674d82b7b2070542a070a0a1544dd76e6b043f8b665d192e1c0a9c6de66f3.jpg)
Figure 5. A typical example of a corked bat

# i) Neglect the significance of the trampoline effect

A rough estimate shows that the effect of the hoop mode of the corked wood bat can be neglected (see the details of the deduction in Appendix II). This estimate is also supported by the work of Alan M. Nathan [2003], who gives quantitative results through experiments. He pointed out that there is nearly no trampoline effect from the hollowed-out wood bat or the cork filler, because "it requires much greater force to compress such a bat than it does to compress an aluminum bat".
# ii) Calculating the MOI and mass of the corked bat ($J_{\text{corked}}, M_{\text{corked}}$)

Based on the definition of the moment of inertia and the parallel axis theorem, we obtain the MOI of the corked bat as follows:

$$
J_{corked} = J^{*} + \left[ M + \frac{\pi}{4} (\rho - \rho_{0}) d^{2} h \right] \left(x_{CM} - \frac{L}{2}\right)^{2}, \tag{21}
$$

where

$$
J^{*} = J_{0} + \frac{\pi}{48} (\rho - \rho_{0}) d^{2} h \left(\frac{3 d^{2}}{4} + h^{2}\right) + \frac{\pi}{4} (\rho - \rho_{0}) d^{2} h \left(L - a - \frac{h}{2}\right)^{2},
$$

$$
x_{CM} = \frac{M (L - a) + \frac{\pi}{8} (\rho - \rho_{0}) d^{2} h^{2}}{M + \frac{\pi}{4} (\rho - \rho_{0}) d^{2} h},
$$

$J_0$ is the MOI with respect to the CM of the uncorked bat.

And the mass of the corked bat is

$$
M_{corked} = \frac{\pi}{4} \rho_{0} D^{2} L + \frac{\pi}{4} (\rho - \rho_{0}) d^{2} h. \tag{22}
$$
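Eq. 21 and Eq. 22 can be transcribed directly into code. The barrel diameter $D$, hole geometry and densities below are illustrative assumptions; with cork lighter than ash, the corked bat comes out lighter than the uncorked one, as expected.

```python
import math

# Direct transcription of Eq. 21 and Eq. 22; geometry and densities are assumed.
def corked_bat(M, J0, L, a, D, d, h, rho, rho0):
    dm = math.pi / 4 * (rho - rho0) * d**2 * h        # mass change from the filled hole
    x_cm = (M * (L - a) + math.pi / 8 * (rho - rho0) * d**2 * h**2) / (M + dm)
    J_star = (J0
              + math.pi / 48 * (rho - rho0) * d**2 * h * (3 * d**2 / 4 + h**2)
              + dm * (L - a - h / 2)**2)
    J_corked = J_star + (M + dm) * (x_cm - L / 2)**2  # Eq. 21
    M_corked = math.pi / 4 * rho0 * D**2 * L + dm     # Eq. 22
    return M_corked, J_corked

# Cork (~450 kg/m^3) is lighter than ash (~650 kg/m^3), so corking removes mass.
M0, _ = corked_bat(M=0.885, J0=0.045, L=0.84, a=0.564, D=0.067, d=0.0, h=0.0,
                   rho=450.0, rho0=650.0)
Mc, _ = corked_bat(M=0.885, J0=0.045, L=0.84, a=0.564, D=0.067, d=0.025, h=0.2,
                   rho=450.0, rho0=650.0)
print(M0 - Mc)   # mass removed by the cork-filled hole, in kg
```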
iii) Measure the structure of the corked bat and obtain the values of $D, L, d, h$. Set

$\rho_0 =$ the density of wood,

$\rho =$ the density of the stuffed material (cork or rubber),

and calculate $M_{uncorked}, J_{uncorked}, M_{corked}, J_{corked}$.

Based on Eq. 4 and Eq. 14, we obtain the maximum $BBS$ of the corked and uncorked bats, and therefore determine whether "corking" a bat enhances the "sweet spot" effect. (Calculation details are given in Simulation and Results.)
# 5 Simulation and Analysis

# 5.1 Basic Data Used in the Simulation

The uncorked wood bat is the one used by Cross in his extensive set of measurements, a 33 in/31 oz Louisville Slugger Model R161 [Cross 1998]. The detailed data are shown in Table 2.

Table 2. Data used for the simulation

<table><tr><td>Property</td><td>Description</td><td>Value</td></tr><tr><td>L</td><td>The length of the bat</td><td>0.84m</td></tr><tr><td>M</td><td>The mass of the bat</td><td>0.885kg</td></tr><tr><td>Jcm</td><td>The MOI of the bat about its CM</td><td>0.045kg·m2</td></tr><tr><td>a</td><td>The distance from the handle to the center of mass</td><td>0.564m</td></tr><tr><td>m</td><td>The mass of the ball</td><td>0.145kg</td></tr><tr><td>average of Vi</td><td>The average Vi measured in a contest</td><td>24m/s</td></tr><tr><td>average of ωi</td><td>The average ωi measured in a contest</td><td>51rad/s</td></tr><tr><td>average of vi</td><td>The average vi measured in a contest</td><td>40m/s</td></tr></table>
# 5.2 Solution to Problem I

# 5.2.1 Simulation

As shown above, the "sweet spot" location is determined by several parameters. The values of $m, M$ and $J$ are constant properties of the baseball and the bat, while the values of $v_{i}$, $V_{i}$ and $\omega$ are determined by the pitcher and the batter. In other words, $v_{i}$, $V_{i}$ and $\omega$ vary over certain ranges, which influences the "sweet spot" location. The properties of the bat used in the simulation are listed in Table 2. Here, we perform a series of calculations based on the variation of $v_{i}$, $V_{i}$ and $\omega$ to investigate the value of $b$.
# The effect of ball speed $v_{i}$

We analyze the effect of the ball speed $v_{i}$ on the "sweet spot" location $b_{\text{sweet spot}}$ in Figure 6 by assigning $v_{i}$ a series of values (31 m/s ~ 50 m/s) and calculating $b_{\text{sweet spot}}$; this range of $v_{i}$ is chosen for practical reasons. The calculation shows that the value of $b_{\text{sweet spot}}$ is lowered by increasing the ball speed $v_{i}$. However, the "sweet spot" location remains at a distance of about 0.15 m from the tip.

![](images/ecc431477c8a2b8949c67305d233a52378c4fe19d8ef874c501e5a82a562f21b.jpg)
Figure 6. The plot of the "sweet spot" location $b_{\text{sweet spot}}$ as the ball speed $v_i$ ranges from $31\mathrm{m/s}$ to $50\mathrm{m/s}$. The impact occurs on the standard wood bat, which has a CM speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51\mathrm{s}^{-1}$.
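The sweep behind Figure 6 can be reproduced in a few lines; this sketch recomputes the Eq. 14 location over the same pitch-speed range and confirms the monotonic decrease described above.

```python
import math

# Sweep the pitch speed and recompute the sweet-spot location of Eq. 14.
# Bat parameters are from Table 2; the v_i range matches Figure 6.
m, M, J = 0.145, 0.885, 0.045
V_i, w_i = 24.0, 51.0
c = J * (m + M) / (m * M)

def sweet_spot(v_i):
    x = (v_i - V_i) / w_i
    return x + math.sqrt(x**2 + c)

locations = [sweet_spot(-v) for v in range(31, 51)]   # v_i is negative toward the bat
print(locations[0], locations[-1])                    # b shrinks as the pitch gets faster
```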
# The effect of swinging speed $V_{i}$

We analyze the effect of the swinging speed $V_{i}$ on the "sweet spot" location $b_{\text{sweet spot}}$ in Figure 7 by assigning $V_{i}$ a series of values (19 m/s ~ 29 m/s) and calculating $b_{\text{sweet spot}}$; considering the practical situation, we vary $V_{i}$ from 19 m/s to 29 m/s. The calculation shows that the value of $b_{\text{sweet spot}}$ is lowered by increasing the swing speed $V_{i}$, and this effect is smaller than that of the ball speed. As before, the "sweet spot" location remains at a distance of about 0.15 m from the tip.

![](images/82dd103bf5f369ab71739fb47c75a406a61f7692fb9f234e235a1aa13c60444b.jpg)
Figure 7. The plot of the "sweet spot" location $b_{\text{sweet spot}}$ as the swinging speed $V_i$ ranges from $19\mathrm{m/s}$ to $29\mathrm{m/s}$. The impact occurs on the standard wood bat, which has a CM speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51\mathrm{s}^{-1}$.
# The effect of bat rotational speed $\omega$

We analyze the effect of the bat rotational speed $\omega$ on the "sweet spot" location $b_{\text{sweet spot}}$ in Figure 8 by assigning $\omega$ a series of values ($43\,s^{-1} \sim 61\,s^{-1}$) and calculating $b_{\text{sweet spot}}$; considering the practical situation, we vary $\omega$ from $43\,s^{-1}$ to $61\,s^{-1}$. The calculation shows that the value of $b_{\text{sweet spot}}$ is raised by increasing the rotational speed $\omega$. Similarly, the "sweet spot" location remains at a distance of about $0.15\,\text{m}$ from the tip.

![](images/c506c09ace8fc1c8799432c7e832c833c02545e35103733c1154e3ed017dfb87.jpg)
Figure 8. The plot of the "sweet spot" as the swinging angular velocity $\omega$ ranges from $43~s^{-1}$ to $61~s^{-1}$. The impact occurs on the standard wood bat, which has a CM speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51~\mathrm{s}^{-1}$.
# The effect of impact location on the value of $v_{f}$ (BBS)

Figures 6, 7 and 8 clearly illustrate that the optimum impact location is still far from the tip of the bat, although the value of $b_{\text{sweet spot}}$ (the "sweet spot") varies with the values of $v_i$, $V_i$ and $\omega$. As the expression for $v_f$ suggests, the impact location influences the value of $v_f$. It is natural to ask how big this influence is; for example, what is the exit velocity of the ball if the collision takes place at locations other than the "sweet spot"? Here, we analyze the effect of the impact location on the batted ball speed $v_f$, and present the results in Figure 9 by varying the impact location from 0.1 m (from the knob) to 0.84 m (the tip of the bat). The calculation shows that the value of $v_f$ follows a parabola-like curve and attains its maximum at the "sweet spot" rather than at the tip. This provides direct evidence that the "sweet spot" is not at the tip of the bat.
![](images/49d6be6a37d7f5ae37eb28b45a204802ed38437bff4b8f4bdd82cba183ed33e3.jpg)
Figure 9. The plot of $v_{f}$ (BBS) as a function of impact location for an impact of a $40\mathrm{m/s}$ ball on the standard wood bat, which has a CM speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51~\mathrm{s}^{-1}$.
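The parabola-like curve of Figure 9 can be reproduced from Eq. 10. The sketch below scans the impact location along the bat (measured from the knob, with the CM at $a = 0.564\,m$) under an assumed BBCOR of $e = 0.5$; the maximum lands near 0.70 m from the knob, well short of the tip.

```python
# Batted ball speed (Eq. 10) as a function of impact location along the bat.
# Table 2 parameters; e = 0.5 is an assumed BBCOR. Location measured from the knob.
m, M, J, a = 0.145, 0.885, 0.045, 0.564
V_i, w_i, v_i, e = 24.0, 51.0, -40.0, 0.5

def bbs(loc):
    b = loc - a                                   # signed distance from the bat's CM
    r = m / M + m * b**2 / J
    return ((1 + e) * (V_i + w_i * b) + v_i * (r - e)) / (1 + r)

locs = [0.10 + 0.74 * k / 200 for k in range(201)]    # 0.10 m ... 0.84 m
speeds = [bbs(x) for x in locs]
best = locs[speeds.index(max(speeds))]
print(best)   # interior maximum: the sweet spot is not at the tip
```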
# 5.2.2 Where is the "Sweet Spot"?

The simulation demonstrates that the "sweet spot" is about $15\,\mathrm{cm}$ away from the end of the bat when the parameters $v_{i}, V_{i}$ and $\omega$ vary over their practical and meaningful ranges. So we draw the following conclusion from the above analysis:

The "sweet spot" is not at the end of the bat.
# 5.3 Solution to Problem II

# 5.3.1 BBS Formula Simulation

# The Effect of the Ratio of Stiffness $(k_{ball}/k_{bat})$ on the Maximum BBS

Based on Table 1, we know that an increase of $k_{ball}/k_{bat}$ leads to a bigger equivalent BBCOR and therefore to a higher maximum BBS. To make the analysis quantitative, we first determine the quantitative relationship between $k_{ball}/k_{bat}$ and the equivalent BBCOR based on Eq. 17, as shown in Figure 10. Secondly, we deduce the relationship between $k_{ball}/k_{bat}$ and the maximum BBS using Eq. 4, as illustrated in Figure 11.

From the semi-log figure of $\frac{k_{ball}}{k_{bat}} \sim v_{f,max}$ (Figure 11), we can clearly see that the increase of the maximum $BBS$ becomes quite slow once $k_{ball}/k_{bat}$ is bigger than 1. This indicates that raising $k_{ball}/k_{bat}$ beyond one yields little additional trampoline effect to enhance the $BBS$.
![](images/aaf1e5495b75e073ef521258c718a679b13b73958e00e30e1b395cf78c4b33d7.jpg)
Figure 10. The equivalent BBCOR as a function of the ratio of stiffness, with $e_{bat} = 1$.

![](images/973367b22b69fc5fd8fa0a70106d388fbdeba244028a594b4fca7fc2f5c58347.jpg)
Figure 11. The maximum $BBS$ ($v_{f,max}$) as a function of the equivalent BBCOR, which is defined by $k_{ball}/k_{bat}$. The bat is corked with $e_{bat} = 0.98$; its CM speed is $24\mathrm{m/s}$ and its angular speed is $51\mathrm{rad/s}$.
# The Effect of the Moment of Inertia $(J)$ on the Maximum BBS

From Table 1, qualitative analysis based on the $BBS$ equation alone cannot determine whether the effect of an increase of the MOI on the $BBS$ is positive or negative. Here we import the data to carry out a precise quantitative analysis and then determine the specific relationship between them.

The simulation procedure is as follows:

![](images/eed4c74d5124a039176ad3d471b8b3543181c0bc6544eb4b1e62f7b00d480ee4.jpg)

From Figure 14, we can readily see that an increase of $J$ (limited to the possible range of variation) leads to a higher maximum $BBS$, which is quite useful for comparing the "Sweet Spot" effect of the corked and uncorked bats and for designing a better corked bat.
![](images/942cefea37e276c0e62333b0a489de24b8f1f97bce6c3e75850622281f026d5e.jpg)
Figure 12. The relationship between $\omega_{i}$ and $J$, illustrated by the empirical equation.

![](images/53e7471f77797f60064b7c3adb6762585027a8d45861a2e1d4b7006a77f40696.jpg)
Figure 13. The location of the "sweet spot" as determined by the moment of inertia, with $V_{i} = 24\,m/s$, $v_{i} = -40\,m/s$, $M = 0.885\,kg$.

![](images/85c95efc5c43ba9232fb4f360ce669989fa8be6aafba89754a11ba610781faf2.jpg)
Figure 14. The relationship between the MOI and the maximum BBS, based on $\frac{k_{ball}}{k_{bat}} = 0.1$.
# The Effect of Mass $(M)$ on the Maximum BBS

Similar to the effect of the MOI, the effect of mass on the BBS cannot be determined by qualitative analysis alone. We carry out the quantitative analysis by the same procedure as for the MOI, only replacing $\omega_{i}$ with $V_{i}$ and $J$ with $M$.

From Figure 17, we obtain the similar result that an increase of $M$ (limited to the possible range of variation) leads to a higher maximum BBS. Moreover, Figure 17 also shows that the rate of increase of the maximum BBS slows as the mass grows.

In particular, we plot the swinging bat speed $V_{i}$ and the maximum BBS $v_{f}$ with respect to the mass $M$ in the same figure, as Figure 18 shows. Compared with Figure 4, which is drawn from experimental data [Bahill and Freitas, 1995], the data obtained from the calculation with our model closely match the experimental data, which supports the correctness of our model.
![](images/970666e45f04ed0c2cfb4e37482286f38f4e3effcc5a37f2bd7dedcd7aa3d083.jpg)
Figure 15. The plot of the swinging bat speed $V_{i}$ as a function of the corked bat mass $M$.

![](images/2c62c8b9001d5fb8deb98285399e44c20f48e948317641c014b32e2847798a98.jpg)
Figure 16. The plot of the "sweet spot" location $b_{\text{sweet spot}}$ as a function of the corked bat mass $M$.

![](images/b38a3598a8f14de12dbe40bc921728f98f0f47d9abd2198759f13be4f449dd61.jpg)
Figure 17. The maximum $BBS$ $v_{f,max}$ as a function of the corked bat mass $M$.

![](images/02c09a32408ea50b68be321bc73721db1c76999299011eccd97a9091977396fe.jpg)
Figure 18. The plot of the batted-ball speed $v_{f}$ and the swinging bat speed $V_{i}$ as functions of the corked bat mass $M$. This figure, derived from the calculation with our model, closely matches the experimental data.
# 5.3.2 "Sweet Spot" Effects of Corking

# ■ Simulating details

The corked bat can be fully described by the shape of the hollowed cylinder (the "corked hole", described mathematically by $d, h$) and the density of the stuffed material (cork or rubber in this problem). Based on Eq. 21 and Eq. 22, we are able to calculate the mass and moment of inertia of a specific corked bat, and then obtain the maximum BBS, which is the key indicator for evaluating the "Sweet Spot" effect.

In Figure 19, we set the density of the stuffed material to $\rho_{cork} = 450\,kg/m^3$ to evaluate the "Sweet Spot" effect of corked bats with different "corked holes", whose depth and diameter lie within the practically possible ranges ($h\in [0,0.2]\,m$, $d\in [0,0.05]\,m$) [Russell, 2004].

In Figure 21, the density of the stuffed material is $\rho_{rubber} = 1100\,kg/m^3$, to evaluate the "Sweet Spot" effect of corked bats over the same ranges of $d$ and $h$.
+
594
+ ![](images/1ca648cec25140e6dba848a9db211172a245627ca112183a6f79ee53a46d8b06.jpg)
595
+ Figure 19. Wireframe mesh plot. Based on the structure of the corked bat (depth and radius of the corked hole) and the density of the stuffed cork $\rho_{cork} = 450kg / m^3$ , we determine $J,M$ and then obtain the maximum BBS. Notes: $v_{i} = -40m / s$ , $L = 840mm$ , $a = 564mm$ , $m = 0.145kg$ .
596
+
597
+ ![](images/4f0c2fb8b6d8fe76710e8fd2e9e5eac9095aa36cd971bed89823b90ef6d3669e.jpg)
598
+ Figure 20. The same data in contour style.
599
+
600
+ ![](images/3255ac8c4291deb0d9e4599f67cc1a0e83d979d4fcc3cb670d8280c2b350071b.jpg)
601
+ Figure 21. Based on the structure of the corked bat (depth and radius of the corked hole) and the density of the stuffed rubber $\rho_{\text{rubber}} = 1100kg/m^3$ , we determine $J, M$ and then obtain the maximum BBS. Notes: $v_i = -40m/s$ , $L = 840mm$ , $a = 564mm$ , $m = 0.145kg$ .
602
+
603
+ ![](images/7f699d16dbbdc10ef7cbf19588754717dc3fb7ae56aab123b017a20dd0068c62.jpg)
604
+ Figure 22. The same data in contour style.
605
+
606
+ # Results
607
+
608
+ # I. Does corking enhance the "Sweet Spot" effect?
609
+
610
+ # - The corked bat with cork stuffed $(\rho_{cork} < \rho_{ash})$
611
+
612
+ In Figure 19 or Figure 20, considering all possible values of $d$ and $h$ , the maximum $BBS(v_{f,max})$ of the cork-stuffed bat reaches its maximum at the point $d = 0, h = 0$ . In other words, the maximum $BBS$ of every possible cork-stuffed bat is lower than that of the uncorked one. However, since both $M$ and $J$ decrease, the corked bat is easier to control than before.
613
+
614
+ Therefore, if the "Sweet Spot" effect concerns only the maximum BBS (i.e. the maximum power transferred to the ball), we can conclude that "corking" a bat with stuffed cork reduces the "Sweet Spot" effect, no matter what shape of "Corked Hole" is adopted.
615
+
616
+ # - The corked bat with rubber stuffed $(\rho_{rubber} > \rho_{ash})$
617
+
618
+ In Figure 21 or Figure 22, considering all possible values of $d$ and $h$ , the maximum $BBS(v_{f,max})$ of the rubber-stuffed bat reaches its minimum at the point $d = 0, h = 0$ . In other words, the maximum $BBS$ of every possible rubber-stuffed bat is higher than that of the uncorked one. However, since both $M$ and $J$ increase, the corked bat becomes more difficult to control than before.
621
+
622
+ Therefore, if the "Sweet Spot" effect concerns only the maximum $BBS$ (i.e. the maximum power transferred to the ball), we can conclude that "corking" a bat with stuffed rubber enhances the "Sweet Spot" effect, no matter what shape of "Corked Hole" is adopted.
623
+
624
+ # II. Why MLB prohibits "corking"
625
+
626
+ As discussed above, corking the bat with a high-density stuffing significantly enhances the "Sweet Spot" effect, which leads to a higher batted-ball speed. Corking the bat with a low-density stuffing, on the other hand, leads to better control, which may suit some athletes, such as contact hitters.
627
+
628
+ If Major League Baseball (MLB) did not prohibit "corking", athletes could always improve their scores by using higher-quality "corking" materials and more sophisticated corked-bat designs. The contest would then be not only a contest of the athletes' baseball skills but also of bat quality, in other words, of technology and money, which would certainly lead to inequality. To protect the principle of fairness in sport, the MLB has to ban the "corking" technique.
629
+
630
+ # 5.3 Solution to Problem III
631
+
632
+ Metal bats (usually aluminum) have gained great popularity for their wider "sweet spot" range, greater power, better feel, and higher performance compared with wood bats (usually ash). What factors contribute to the metal bat's better performance? Why does MLB prohibit metal (aluminum) bats? Herein, we answer these questions based on conclusions drawn from our previous models.
633
+
634
+ # 5.3.1 An Illustration of a Typical Metal Bat
635
+
636
+ Figure 23 is an illustration of a typical metal bat. Generally speaking, the outer shapes of a metal bat and a wood bat are much alike. However, a large amount of metal is hollowed out to give the bat an appropriate mass. The shell of the bat is about $0.24\mathrm{cm}$ thick, about $1/7$ the thickness of a corked wood bat [Nathan, 2004]. It must be mentioned that the shell structure leads to the trampoline effect, which enhances the performance of metal bats.
637
+
638
+ ![](images/5c49936f5d47c53130bc1cf640f3dfb9a85dd761682e40e4d379684d76f04f64.jpg)
639
+ Figure 23. Illustration of a typical metal bat; metal bats are often designed with a special cavity to enhance performance. [Nguyen, 2004]
640
+
641
+ # 5.3.2 Predicting Behavior of Metal and Wood Bats
642
+
643
+ # ■ Review of established models
644
+
645
+ We have developed two models to investigate how structural properties ( $J_{cm}$ , $M$ , $z_{cm}$ , corking) influence the performance of a bat. We depict the logical relationships underlying our foregoing models in Figure 24 to facilitate the prediction. In this figure, controllability and $BBS$ are introduced to describe the behavior of a bat. Figure 24 clearly shows that both metal and wood bats can be characterized by the parameters $J_{cm}$ , $M$ , $z_{cm}$ and outer shape, which affect the bats' controllability and $BBS$ . Apart from that, metal bats generally exhibit the trampoline effect, which increases $BBS$ as proved in our DS model. To predict and compare the behavior of the two kinds of bats, we first start with a special case.
646
+
647
+ ![](images/ca379db98950aef97166317c0cc447d21e7b8b328207623383ae7f9489b741a3.jpg)
648
+ Figure 24. Illustration of useful conclusions drawn from the foregoing models. Both metal and wood bats can be characterized by the parameters $J_{cm}$ , $M$ , $z_{cm}$ and outer shape, which affect the bats' controllability and $BBS$ ; metal bats have the trampoline effect, which increases $BBS$ .
649
+
650
+ # A special case: the same $J_{cm}$ , $M$ , $z_{cm}$ , and outer shape
651
+
652
+ # - Can such a special metal bat exist?
653
+
654
+ Before the comparison and prediction, one may doubt whether a special metal bat satisfying these conditions can exist. We perform the calculation and obtain the design parameters and graph of such a bat, as listed below.
655
+
656
+ Table 3. The design parameters of the special aluminum bat. The solid wood bat is the one used by Cross in his measurements. It is a 33 in/31 oz Louisville Slugger Model R161 [Cross, 1998]; and the relevant properties are also listed
657
+
658
+ <table><tr><td></td><td>Solid Wood Bat</td><td>Aluminum Bat</td><td>same or not</td></tr><tr><td>M</td><td>0.885 kg</td><td>0.885 kg</td><td>yes</td></tr><tr><td>z_cm</td><td>0.564 m</td><td>0.564 m</td><td>yes</td></tr><tr><td>L</td><td>0.840 m</td><td>0.840 m</td><td>yes</td></tr><tr><td>J_cm</td><td>0.045 kg·m²</td><td>0.045 kg·m²</td><td>yes</td></tr><tr><td>density</td><td>670 kg/m³</td><td>2700 kg/m³</td><td>no</td></tr><tr><td>Outline Shape</td><td colspan="2">Totally the same</td><td>yes</td></tr><tr><td>Trampoline Effect</td><td>No</td><td>Yes</td><td>no</td></tr></table>
659
+
660
+ The technique for constructing such a bat is described in the Technique Tips section of this paper.
661
+
662
+ ![](images/b82cc199b2b4f79861b2044ab2fe0b4d2172811c54995efe203361a79f1bc817.jpg)
663
+ Figure 25. Illustration of the special metal bat design.
664
+
665
+ # - Prediction and Comparison of behavior for wood and metal bats
666
+
667
+ As Figure 24 demonstrates, such a special metal bat has the same controllability as the wood bat concerned. However, this special metal bat has a higher BBS owing to the trampoline effect. In short, our model predicts that a metal bat will outperform a wood bat with the same $J_{cm}$ , $M$ , $z_{cm}$ , $COR$ and outer shape.
668
+
669
+ A quantitative analysis of the trampoline effect is given in Appendix II of this paper. We can see that the $BBS$ could theoretically rise from $45\mathrm{m / s}$ to about $75\mathrm{m / s}$ .
670
+
671
+ # General cases
672
+
673
+ From the special case, we can conclude that in common situations metal bats possess both higher BBS and better controllability. The reason is that a better metal bat can be manufactured through the following steps, starting from the special bat design:
674
+
675
+ 1) Reduce $J_{cm}$ and $M$ by removing a certain amount of metal, thus increasing the controllability.
676
+ 2) Streamline the inner cavity to enhance the strength of the bat.
677
+
678
+ After these two steps, the newly made bat still has a strong trampoline effect while its controllability is improved.
679
+
680
+ # An experimental example
681
+
682
+ We present another experimental example, in Table 4, which verifies our argument that the aluminum bat exhibits both higher BBS and better controllability.
683
+
684
+ Table 4. Comparison of wood and metal bats, including the Crisco-Greenwald Cage Study [Greenwald, 2001; Crisco, 2002] experimental results and our model calculations.
685
+
686
+ <table><tr><td>Bat</td><td>Length (in)</td><td>Weight (oz)</td><td>Zcm (in)</td><td>Jcm (oz·in²)</td><td>Swing Speed (mph)</td><td>BBS (mph)</td></tr><tr><td>Wood bat</td><td>34</td><td>30.9</td><td>23</td><td>11516</td><td>67.9</td><td>98.6</td></tr><tr><td>Metal bat</td><td>33</td><td>29.2</td><td>20.7</td><td>9282</td><td>70.9</td><td>106.5</td></tr></table>
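For readers working in SI units, the Table 4 figures can be converted with a small helper. Only the conversion factors below are assumed; the data themselves come from the cited cage study:

```python
# Unit conversions for the Table 4 cage-study figures (US customary -> SI).
OZ_TO_KG = 0.028349523
IN_TO_M = 0.0254
MPH_TO_MS = 0.44704

def oz_in2_to_kg_m2(j):
    """Convert a moment of inertia from oz*in^2 to kg*m^2."""
    return j * OZ_TO_KG * IN_TO_M ** 2

def mph_to_ms(v):
    """Convert a speed from mph to m/s."""
    return v * MPH_TO_MS

print(round(oz_in2_to_kg_m2(11516), 3))  # wood-bat J_cm  -> ~0.211 kg*m^2
print(round(oz_in2_to_kg_m2(9282), 3))   # metal-bat J_cm -> ~0.170 kg*m^2
print(round(mph_to_ms(98.6), 1))         # wood-bat BBS   -> ~44.1 m/s
```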
688
+
689
+ # 5.3.3 Reason for MLB's Prohibition of Metal Bats
690
+
691
+ Our analysis above points out that metal bats generally have higher BBS and better controllability than wood bats. This is also supported by the Crisco and Greenwald Batting Cage Study [Greenwald, 2001; Crisco, 2002], which shows that the average batted-ball speed for wood bats is around 98.6 mph, while the average batted-ball speed for metal bats lies between 100.3 mph and 106.5 mph. The faster-travelling ball has two negative effects on the athletes.
692
+
693
+ - The faster the ball travels, the less time athletes, such as the pitcher, have to react, making injuries more likely. It is generally believed that the pitcher's reaction time drops from 0.4s to 0.3s when the ball is hit by a metal bat [Russell, 2008].
694
+ - The faster the ball travels, the more severe injuries it tends to cause.
695
+
696
+ Apart from safety concerns, we derive from our model that
697
+
698
+ - the advent of the metal bat will weaken the fairness of the game;
699
+ - powerful bats make athletes dependent upon the tool, not the game itself.
700
+
701
+ From the above analysis, we conclude that MLB prohibits metal bats out of concern for safety and for the nature of the sport.
702
+
703
+ # 6 Technique Tips for Bat Design
704
+
705
+ One important application of our model is to guide the design of a better bat.
706
+
707
+ # 6.1 Optimum Mass for Better "Sweet Spot" Effect
708
+
709
+ In order to simplify the estimation, we first simplify Eq.4. When the collision takes place near the mass center of the bat, as it very frequently does, all terms containing $b$ may be dropped [P. Kirkpatrick 1963]. Therefore, with $r = m / M$ , Eq.4 becomes
710
+
711
+ $$
712
+ v _ {f} = \frac {(1 + e) (V _ {i} + \omega_ {i} b) + | v _ {i} | \cos \theta (r - e)}{(1 + r) \cos \phi}. \tag {23}
713
+ $$
714
+
715
+ It shows that the batted-ball speed (BBS) $v_{f}$ depends on the ratio $r = m / M$ . According to the equation, the value of $v_{f}$ is raised if we merely increase $M$ and keep all the other parameters unchanged. However, as is often the case, the swing speed $V_{i}$ decreases as the bat mass $M$ becomes larger, which in turn lowers $v_{f}$ . One can therefore anticipate that an optimum bat mass exists.
716
+
717
+ We estimate the best bat mass on the assumption that "that bat is best which requires the least energy input to impart a given velocity to the ball" [P. Kirkpatrick 1963]. Rearrangement of Eq.4 gives
718
+
719
+ $$
720
+ V _ {f} = \frac {(1 + r) v _ {f} \cos \phi + (e - r) v \cos \theta}{1 + e}. \tag {24}
721
+ $$
722
+
723
+ The kinetic energy of the bat (neglecting $\omega_{i}$ ) is
724
+
725
+ $$
726
+ W = \frac {1}{2} M V _ {f} ^ {2} = \frac {M}{2} \left[ \frac {(1 + r) v _ {f} c o s \phi + (e - r) v c o s \theta}{1 + e} \right] ^ {2} \tag {25}
727
+ $$
728
+
729
+ Differentiating $W$ with respect to $r$ and setting the derivative to zero, we find that $W$ is minimized when
730
+
731
+ $$
732
+ r = \frac {v _ {f} \cos \phi + e v \cos \theta}{v _ {f} \cos \phi - v \cos \theta} \cong \frac {v _ {f} + e v}{v _ {f} - v} \tag {26}
733
+ $$
734
+
735
+ If $v_{f} = -v_{i}$ , then
736
+
737
+ $$
738
+ r = \frac {1 - e}{2}. \tag {27}
739
+ $$
740
+
741
+ For example, when $e^* = 0.5$ , then $r = 0.25$ , $M = 4m$ .
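The optimum condition can be verified numerically: minimizing $W$ in Eq.25 over $r$ reproduces Eq.26, and the special case $v_f = -v_i$ gives $r = (1-e)/2$. A minimal sketch, assuming a head-on collision ($\cos\theta = \cos\phi = 1$) and illustrative speeds:

```python
# Numerical check of the optimum bat-ball mass ratio (Eqs. 25-27).
# Assumptions: head-on collision (cos(theta) = cos(phi) = 1), illustrative
# speeds v_f = 30 m/s and v = -30 m/s (so v_f = -v_i), ball mass m = 0.145 kg.
m, e = 0.145, 0.5
v_f, v = 30.0, -30.0

def W(r):
    """Bat kinetic energy required (Eq. 25), with M = m / r."""
    V_f = ((1 + r) * v_f + (e - r) * v) / (1 + e)
    return 0.5 * (m / r) * V_f ** 2

# Coarse grid search for the minimizing mass ratio r = m / M.
rs = [i / 10000.0 for i in range(1, 10000)]
r_opt = min(rs, key=W)

r_closed_form = (v_f + e * v) / (v_f - v)   # Eq. 26
print(r_opt, r_closed_form, (1 - e) / 2)    # all ~0.25, so M = m / r = 4m
```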
742
+
743
+ # 6.2 Designing a Special Aluminum Bat
744
+
745
+ This part primarily serves as the guidance for the case study in Problem III.
746
+
747
+ The solid wood bat we choose is the one used by Cross in his extensive set of measurements. It is a 33 in/31 oz Louisville Slugger Model R161 [Cross, 1998]; and the relevant properties are listed in Table 5. The parameters of special aluminum bat we design are also listed in Table 5.
748
+
749
+ Table 5. A Comparison between Solid Wood Bat & Aluminum Bat (customized)
750
+
751
+ <table><tr><td></td><td>Solid Wood Bat</td><td>Aluminum Bat</td><td>same or not</td></tr><tr><td>M</td><td>0.885 kg</td><td>0.885 kg</td><td>yes</td></tr><tr><td>z_cm</td><td>0.564 m</td><td>0.564 m</td><td>yes</td></tr><tr><td>L</td><td>0.840 m</td><td>0.840 m</td><td>yes</td></tr><tr><td>J_cm</td><td>0.045 kg·m²</td><td>0.045 kg·m²</td><td>yes</td></tr><tr><td>density</td><td>670 kg/m³</td><td>2700 kg/m³</td><td>no</td></tr><tr><td>Outline Shape</td><td colspan="2">Totally the same</td><td>yes</td></tr><tr><td>Trampoline Effect</td><td>No</td><td>Yes</td><td>no</td></tr></table>
752
+
753
+ To customize such a bat, we need to reconfigure the mass distribution transversely. One of the possibilities is shown in Figure 26.
754
+
755
+ ![](images/6ecc23fcdd102e8e79f39ffa85f1b31f1a13124223cfc2909b0552a8e22e15b1.jpg)
756
+ Figure 26. Cross section of a typical aluminum-bat inner cavity and the corresponding estimated mass distribution function.
757
+
758
+ According to Table 5 and Figure 26, we derive three constraint equations, assuming that the bat is a body of revolution.
759
+
760
+ - Equating the mass of the wood bat and the customized bat gives
761
+
762
+ $$
763
+ M ^ {w} = \int_ {0} ^ {L} d m (z), \tag {28}
764
+ $$
765
+
766
+ where $M^w$ is the mass of the wood bat, $kg$ ;
767
+
768
+ $m(z)$ is the mass distribution function of the customized bat;
769
+
770
+ $L$ is the length of the wood bat, $m$ .
771
+
772
+ - Equating the mass center of the wood bat and the customized bat gives
773
+
774
+ $$
775
+ z _ {c m} ^ {w} = \frac {\int_ {0} ^ {L} z \cdot d m (z)}{M ^ {w}}, \tag {29}
776
+ $$
777
+
778
+ where $z_{cm}^{w}$ is the mass center of the wood bat, $m$ .
779
+
780
+ - Equating the moment of inertia of the wood bat and the customized bat with respect to the mass center gives
783
+
784
+ $$
785
+ J _ {c m} ^ {w} = \int_ {0} ^ {L} (z - z _ {c m}) ^ {2} \cdot d m (z), \tag {30}
786
+ $$
787
+
788
+ where $J_{cm}^{w}$ is the moment of inertia of the wood with respect to the mass center, $kg \cdot m^2$ .
789
+
790
+ Any mass distribution function $m(z)$ satisfying the above three equations can be used to customize an aluminum bat.
791
+
792
+ In fact, there exists an infinite number of $m(z)$ functions, yet many of them are complicated or technical impractical. To facilitate the control test and analysis, we adopt a simple and special $m(z)$ as demonstrated by Figure 27.
793
+
794
+ - Derivation of special case
795
+
796
+ ![](images/a2c15d87db28025a3fc6dcb89d4532859a99441beae00c7034cc2fb03286db2f.jpg)
797
+ Figure 27. Cross section of this special and simple case of aluminum bat (geometrical parameters unknown); Shadowed part and white part are distinguished to help the according theoretical derivation of mass distribution $\mathrm{m(z)}$ and the inner shape of bat.
798
+
799
+ We let $M_{a}$ be the mass of the removed part and $M_{b}$ be the mass of the remaining part. Because the outline shapes of the two bats are the same, the two bats would have the same volume if the removed part $M_{a}$ were stuffed back, so we have
800
+
801
+ $$
802
+ \frac {M _ {a} + M _ {b}}{M ^ {w}} = \frac {\rho_ {A l}}{\rho_ {w o o d}}, \tag {31}
803
+ $$
804
+
805
+ where
806
+
807
+ $M^w$ is the mass of the wood bat, $kg$ ;
808
+
809
+ $\rho_{Al}$ is the density of aluminum, $kg / m^3$
810
+
811
+ $\rho_{wood}$ is the density of wood, $kg / m^3$
812
+
813
+ Similarly, from the mass center equation and the moment of inertia equation, we also have
814
+
815
+ $$
816
+ M _ {a} \cdot z _ {c m, a} + M _ {b} \cdot z _ {c m, b} = (M _ {a} + M _ {b}) z _ {c m} ^ {w} \tag {32}
817
+ $$
818
+
819
+ $$
820
+ \frac {J _ {c m} ^ {a} + J _ {c m} ^ {b}}{J _ {c m} ^ {w}} = \frac {\rho_ {A l}}{\rho_ {w o o d}} \tag {33}
821
+ $$
822
+
823
+ Using the three constraint equations, we can obtain the mechanical parameters of the removed part.
824
+
825
+ $$
826
+ M _ {b} = M ^ {w} \;\&\; \mathrm{Eq.}(31) \quad \Rightarrow \quad M _ {a} = \left(\frac {\rho_ {Al}}{\rho_ {wood}} - 1\right) M ^ {w} \tag {34}
827
+ $$
828
+
829
+ $$
830
+ z _ {cm, b} = z _ {cm} ^ {w} \;\&\; \mathrm{Eq.}(32) \quad \Rightarrow \quad z _ {cm, a} = z _ {cm} ^ {w} \tag {35}
831
+ $$
832
+
833
+ $$
834
+ J _ {cm} ^ {b} = J _ {cm} ^ {w} \;\&\; \mathrm{Eq.}(33) \quad \Rightarrow \quad J _ {cm} ^ {a} = \left(\frac {\rho_ {Al}}{\rho_ {wood}} - 1\right) J _ {cm} ^ {w} \tag {36}
835
+ $$
836
+
837
+ Through calculation we obtain a diagram of the demo bat in Figure 25.
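Plugging the Table 5 values into the constraint equations fixes the removed part numerically. A minimal sketch, assuming the requirement is that the remaining aluminum shell matches the wood bat's mass, mass center, and moment of inertia:

```python
# Numeric evaluation of the removed-part parameters for the customized
# aluminum bat. Inputs from Table 5: M^w = 0.885 kg, J^w_cm = 0.045 kg*m^2,
# z^w_cm = 0.564 m, rho_Al = 2700 kg/m^3, rho_wood = 670 kg/m^3.
M_w, J_w, z_w = 0.885, 0.045, 0.564
ratio = 2700.0 / 670.0   # rho_Al / rho_wood

M_b = M_w                   # remaining shell keeps the wood bat's mass
M_a = (ratio - 1.0) * M_w   # mass of the removed part
z_a = z_w                   # removed part centered on the wood bat's CM
J_a = (ratio - 1.0) * J_w   # moment of inertia of the removed part

print(round(M_a, 3), round(J_a, 4))  # ~2.681 kg removed, ~0.1363 kg*m^2
```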
838
+
839
+ # 7 Discussion and Conclusion
840
+
841
+ We develop two models to elaborate the ball-bat collision. The models' simulations match the empirical data well and explain several observed phenomena. It may seem that some physical issues, including the vibration of the bat and the nonlinear compression behavior of the ball, have been neglected in our paper. However, that is not the case. We validate our concise model in the following part.
842
+
843
+ # 7.1 Model Validation
844
+
845
+ A literature review indicates that some researchers have developed more complicated models (see Figure 28) to capture dynamic and vibrational features such as bending vibration and hoop vibration. Of the four models listed in Figure 28, the latter two, #3 and #4, do not help answer the proposed problems. The energy stored in the wave travelling along the bat after the collision is part of the energy loss accounted for by the modified parameter BBCOR in the first two models.
846
+
847
+ Such a technique is quite useful in practice, because $BBCOR$ can be determined experimentally.
848
+
849
+ ![](images/e930e5ec31d931edb3c354497cebbe7b53ec0386748a986c1462b0b0fdace53a.jpg)
850
+ Figure 28. The evolution of collision models
851
+
852
+ # 7.2 Bending Vibration
853
+
854
+ In the fundamental model, we treat the bat as a rigid body and neglect the potential vibration excited by the collision. Actually, during and after the collision, the baseball bat may exhibit several flexural bending modes of vibration, and the energy stored in the vibrational motion depends on the impact location. It is possible that the "sweet spot" obtained in the fundamental model will shift if intense vibrations consuming a large amount of energy are excited. Therefore it is worthwhile to enhance the fundamental model by taking into account the influence of the potential vibration on the performance of the bat.
855
+
856
+ ![](images/d239c35b53e9a688f1772f32cf6cff6e280c817558f107b10931249d2c83a04c.jpg)
857
+ Figure 29. Illustration of bending vibration of the bat in different modes or frequencies [Russell,2000]
858
+
859
+ It is natural that the existence of the bending vibration will give rise to the change of $v_{f}$ , which is determined by the properties of the bat and the baseball and the characteristics of the collision. According to the research work of Alan M. Nathan [2000], the expression of $v_{f}$ can be modified by replacing $e$ with $e_{eff}$ , as stated below:
860
+
861
+ $$
862
+ v _ {f} = \left[ \frac {e _ {eff} - r}{1 + r} \right] v _ {ball} + \left[ \frac {e _ {eff} + 1}{1 + r} \right] v _ {bat}. \tag {37}
863
+ $$
864
+
865
+ In this equation, $e_{eff}$ is an effective coefficient of restitution for the collision of the ball with a flexible bat; it contains all the dynamical information about the collision and has the desired property that it reduces to $e$ in the limit that vibrations are neglected [Alan M. Nathan 2000]. It has been shown in Nathan's [2000] work that $e_{eff}$ depends strongly on the impact location yet weakly on the impact speed.
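A quick numeric evaluation of the flexible-bat expression may help fix ideas; the parameter values below are illustrative assumptions, not measured data:

```python
# Illustrative evaluation of the flexible-bat batted-ball-speed expression
# (Eq. 37). Assumed values: e_eff = 0.5, mass ratio r = 0.25, ball speed
# 40 m/s and bat impact-point speed 30 m/s (speeds toward each other).
def batted_ball_speed(e_eff, r, v_ball, v_bat):
    return ((e_eff - r) / (1 + r)) * v_ball + ((e_eff + 1) / (1 + r)) * v_bat

print(batted_ball_speed(0.5, 0.25, 40.0, 30.0))  # -> 44.0 m/s
```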
866
+
867
+ Nathan investigated the baseball-bat collision from the perspective of vibration; two important results from his work are cited below:
868
+
869
+ - the vibration energy fraction as a function of impact location
870
+
871
+ ![](images/60e89d7839fa164132088f87a9e3b02f23c52971c4ddabad2d0fff8d9796c92a.jpg)
872
+ Figure 30. The distribution of vibration energy for an impact of a 90-mph ball on the standard wood bat, which has a CM speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51\mathrm{s}^{-1}$ .
873
+
874
+ - the value of $e_{eff}$ as a function of impact location
875
+
876
+ ![](images/a551dc2fcbc6d2b7f2fee60c42cfc09263b46d0be7a4e716f53705035508f917.jpg)
877
+ Figure 31. The plot of $\mathbf{e}_{\mathrm{eff}}$ for an impact of a 90-mph ball on the standard wood bat, which has a CM
878
+
879
+ speed of $54\mathrm{mph}$ and a rotational speed about the CM of $51~\mathrm{s}^{-1}$ .
880
+
881
+ From the two figures, we can conclude that in the region 68-74 cm the vibration energy is small and $e_{eff}$ is relatively large. This conclusion also supports the reasonableness of the fundamental model, which does not consider the possible bending-vibration effect, since the vibration energy is only about $5\%$ of the total energy when the collision occurs at the "sweet spot". It is also quite clear that the "sweet spot" is not at the bat end.
882
+
883
+ # 7.3 Problems Review
884
+
885
+ The three proposed problems have been well answered in our model simulation and results part.
886
+
887
+ - Where is the "sweet spot"?
888
+
889
+ We find that the "sweet spot" is located about $15\mathrm{cm}$ from the tip end, which supports the empirical finding.
890
+
891
+ - How about the "corking" effects?
892
+
893
+ We find that the "corking" effect mainly depends on the shape of the hollowed cylinder and the density of the filling material. Corking with rubber (density higher than ash's) enhances the "sweet spot" effect, while corking with cork (density lower than ash's) weakens it. For the same corking material, the enhancing/weakening effect depends on the shape of the hollowed cylinder. This model explains the penalties MLB gave to some players who used specially corked bats.
894
+
895
+ - How about metal bats?
896
+
897
+ We find that a well-designed metal bat possesses both higher BBS and better controllability than a wood bat (usually ash). Our analysis shows why MLB prohibits metal bats.
898
+
899
+ # 7.4 Strengths
900
+
901
+ - Our concise model is in good agreement with the experimental data; that is to say, the model is practical to some extent, and its structure is clear.
902
+ - We have analyzed the performance of a bat thoroughly, treating controllability and the "sweet spot" effect separately, so the analysis of the calculated results is closer to reality.
903
+ - Our model reveals the interrelationship of geometrical attributes, mechanical attributes, and performance. This can guide bat design while giving us an in-depth understanding of the ball-bat interaction.
904
+
905
+ # 7.5 Weaknesses
906
+
907
+ - We have not modeled the transverse wave and hoop vibration in detail; the error introduced is about $5\%$ (see the foregoing bending-mode part for details).
908
+ - We have used three empirical formulas and each has its own limitation and error.
909
+
910
+ # 8 References
911
+
912
+ Brody H., The "sweet spot" of a baseball bat, Am. J. Phys. 54, 640-643 (1986)
913
+ Brody H., Models of baseball bats, Am. J. Phys. 58, 756-758 (1990)
914
+ Bahill T., Freitas M.M., Two Methods for Recommending Bat Weights, Annals of Biomedical Engineering, 23(4), 436-444 (1995)
915
+ Cross R., The "sweet spot" of a baseball bat, Am. J. Phys. 66, 772-779(1998)
916
+ Crisco J.J., Batting performance of wood and metal baseball bats, Med. Sci. Sports Exerc., 34(10), 1675-1684 (2002)
917
+ Greenwald R.M, Differences in Batted Ball Speed with Wood and Aluminum Baseball Bats: A Batting Cage Study," J. Appl. Biomech., 17, 241-252 (2001).
918
+ Kirkpatrick P., Batting the ball, Am. J. Phys. 31, 606-613 (1963)
919
+ Nathan A. M., Dynamics of the baseball-bat collision, Am. J. Phys. 68, 980-990 (2000)
920
+ Nathan A. M., Some Remarks on Corked Bats, Dec. 1, 2004
921
+ http://www.npl.illinois.edu/~a-nathan/pob/corked-bat-remarks.doc
922
+ Russell D. A., Hoop frequency as a predictor of performance for softball bats, Engineering of Sport, Vol. 2, pp. 641-647 (International Sports Engineering Association, 2004).
923
+ Russell D. A., Should metal baseball bats be banned because they are inherently Dangerous? (Apr. 9, 2008)
924
+ http://paws.kettering.edu/~drussell/bats-new/ban-safety.html
925
+ Nguyen T.V., Reinforced-layer metal composite bat, patent number US6808464 B1, patent date Oct. 26, 2004
928
+
929
+ The Baseball-Bat Collision Lecture 7
930
+
931
+ http://www.slidesworld.com/slideshow.aspx/The-Baseball-Bat-Collision-Lecture-7-ppt-2153198
932
+
933
+ The Baseball-Bat Collision Lecture 8,
934
+
935
+ http://www.docstoc.com/docs/622036/The-Baseball-Bat-Collision-II-Lecture-8
936
+
937
+ The Baseball-Bat Collision Lecture 9
938
+
939
+ http://www.docstoc.com/docs/5690996/The-Baseball-Bat-Collision-III-Lecture-9
940
+
941
+ Official rules of MLB,
942
+
943
+ http://mlb.mlb.com/mlb/official_info/official_rules/batter_6.jsp
944
+
945
+ # Appendix I
946
+
947
+ # Mathematical Process for deducing equivalent BBCOR
948
+
949
+ In a reference frame where the center of mass of the system ( $CM_{BSB}$ , consisting of the ball, springs and bat) remains at rest, we define the new symbols
950
+
951
+ $$
952
+ v _ {i} ^ {c} = v _ {i} - v _ {c}, \qquad v _ {f} ^ {c} = v _ {f} - v _ {c},
953
+ $$
954
+
955
+ $$
956
+ V _ {i} ^ {c} = V _ {i} - v _ {c}, \qquad V _ {f} ^ {c} = V _ {f} - v _ {c},
957
+ $$
958
+
959
+ in which the velocity of the center of mass of the system is
960
+
961
+ $$
962
+ v _ {c} = \frac {m v _ {i} + M V _ {i}}{m + M}.
963
+ $$
964
+
965
+ The collision in $CM_{BSB}$ reference frame can be divided into the following four procedures:
966
+
967
+ i) The ball and bat (each with its spring) approach each other;
968
+ ii) The two springs come into contact and compress until the velocities (in the $CM_{BSB}$ reference frame) of the bat and ball reach zero;
969
+ iii) The ball and bat are accelerated by their springs. In procedures ii and iii, energy is lost through the interaction between the bat (ball) and its spring.
970
+ iv) The two springs are no longer in contact, and the ball and bat separate.
971
+
972
+ By the definition of the coefficient of restitution (the ratio of the difference in velocities after the collision to that before it), we obtain the equivalent BBCOR (with the trampoline effect and material effect included) in the home (laboratory) reference frame
973
+
974
+ $$
975
+ e ^ {*} = \frac {v _ {f} - V _ {f}}{v _ {i} - V _ {i}}
976
+ $$
977
+
978
+ Considering the conservation of momentum, we obtain
979
+
980
+ $$
981
+ m v _ {i} ^ {c} + M V _ {i} ^ {c} = m v _ {f} ^ {c} + M V _ {f} ^ {c} = 0 \Rightarrow V _ {f} ^ {c} = - \frac {m}{M} v _ {f} ^ {c}, V _ {i} ^ {c} = - \frac {m}{M} v _ {i} ^ {c}.
982
+ $$
983
+
984
+ Then the equivalent BBCOR becomes
985
+
986
+ $$
987
+ e ^ {*} = \frac {v _ {f} - V _ {f}}{v _ {i} - V _ {i}} = \frac {v _ {f} ^ {c} - V _ {f} ^ {c}}{v _ {i} ^ {c} - V _ {i} ^ {c}} = \frac {v _ {f} ^ {c} + \frac {m}{M} v _ {f} ^ {c}}{v _ {i} ^ {c} + \frac {m}{M} v _ {i} ^ {c}} = \frac {v _ {f} ^ {c}}{v _ {i} ^ {c}}.
988
+ $$
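This collapse of the equivalent BBCOR to the CM-frame speed ratio can be verified numerically; the masses and speeds below are illustrative:

```python
# Numeric check that, in the CM frame, momentum conservation collapses the
# equivalent BBCOR to e* = v_f^c / v_i^c. Illustrative values: m = 0.145 kg,
# M = 0.885 kg, v_i = -40 m/s, V_i = 15 m/s, assumed e* = 0.5.
m, M = 0.145, 0.885
v_i, V_i = -40.0, 15.0
e_star = 0.5

v_c = (m * v_i + M * V_i) / (m + M)   # CM velocity
v_i_c = v_i - v_c
v_f_c = e_star * v_i_c                # definition e* = v_f^c / v_i^c
V_f_c = -(m / M) * v_f_c              # momentum conservation in the CM frame

v_f, V_f = v_f_c + v_c, V_f_c + v_c   # back to the lab (home) frame
print((v_f - V_f) / (v_i - V_i))      # recovers e* = 0.5
```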
989
+
990
+ We may also understand the physical meaning of $BBCOR$ from an energy perspective as follows:
993
+
994
+ $$
995
+ \because E _ {i} = \frac {1}{2} m (v _ {i} ^ {c}) ^ {2} + \frac {1}{2} M (V _ {i} ^ {c}) ^ {2} = \frac {1}{2} m (v _ {i} ^ {c}) ^ {2} (1 + \frac {m}{M})
996
+ $$
997
+
998
+ $$
999
+ E _ {f} = \frac {1}{2} m \left(v _ {f} ^ {c}\right) ^ {2} + \frac {1}{2} M \left(V _ {f} ^ {c}\right) ^ {2} = \frac {1}{2} m \left(v _ {f} ^ {c}\right) ^ {2} (1 + \frac {m}{M})
1000
+ $$
1001
+
1002
+ $\therefore e^{*2} = \frac{E_f}{E_i}$ means that $BBCOR^2$ is the fraction of energy retained in the system after the collision.
1003
+
1004
+ The total energy in the BSB system at contact before the collision is $E_{i}$ . If no energy is lost when the kinetic energy transfers to the springs, the total energy fully converts to the potential energy of the springs at the end of procedure ii.
1005
+
1006
+ Since the total momentum remains zero at all times, the point where the two springs contact remains at rest in the BSB reference frame.
1007
+
1008
+ Therefore, if no loss of energy happens,
1009
+
1010
+ $$
1011
+ \left\{ \begin{array}{l} k _ {b a l l} \Delta x _ {b a l l} = k _ {b a t} \Delta x _ {b a t} \\ \Delta x _ {b a l l} + \Delta x _ {b a t} = \Delta x \end{array} \right. \Rightarrow \left\{ \begin{array}{l} \Delta x _ {b a l l} = \frac {k _ {b a t}}{k _ {b a l l} + k _ {b a t}} \Delta x \\ \Delta x _ {b a t} = \frac {k _ {b a l l}}{k _ {b a l l} + k _ {b a t}} \Delta x \end{array} \right.
1012
+ $$
1013
+
1014
+ the energies stored in the springs attached to the ball and the bat are
1015
+
1016
+ $$
1017
+ E _ {i} ^ {b a l l} = \frac {1}{2} k _ {b a l l} \Delta x _ {b a l l} ^ {2} = \frac {1}{2} k _ {b a t} \Delta x _ {b a l l} ^ {2} \left(\frac {k _ {b a l l} k _ {b a t}}{(k _ {b a l l} + k _ {b a t}) ^ {2}}\right)
1018
+ $$
1019
+
1020
+ $$
1021
+ E _ {i} ^ {b a t} = \frac {1}{2} k _ {b a t} \Delta x _ {b a t} ^ {2} = \frac {1}{2} k _ {b a l l} \Delta x _ {b a t} ^ {2} \left(\frac {k _ {b a l l} k _ {b a t}}{(k _ {b a l l} + k _ {b a t}) ^ {2}}\right)
1022
+ $$
1023
+
1024
+ Considering the conservation of total energy, $E_{i}^{ball} + E_{i}^{bat} = E$ , and the physical meaning of the BBCOR from the energy perspective, $e^2 = E_f / E$ , we obtain
1025
+
1026
+ $$
1027
+ E _ {f} ^ {b a l l} = E _ {i} ^ {b a l l} e _ {b a l l} ^ {2} = \frac {k _ {b a t}}{k _ {b a l l} + k _ {b a t}} e _ {b a l l} ^ {2} E
1028
+ $$
1029
+
1030
+ $$
1031
+ E _ {f} ^ {b a t} = E _ {i} ^ {b a t} e _ {b a t} ^ {2} = \frac {k _ {b a l l}}{k _ {b a l l} + k _ {b a t}} e _ {b a t} ^ {2} E
1032
+ $$
1033
+
1034
+ Therefore, in the collision of the ball and the corked bat, the part that differs from Eq. 16 is as follows:
1035
+
1036
+ $$
1037
+ e ^ {* 2} = \frac {E _ {f}}{E _ {i}} = \frac {E _ {f} ^ {b a l l} + E _ {f} ^ {b a t}}{E _ {i}} = \frac {k _ {b a t}}{k _ {b a l l} + k _ {b a t}} e _ {b a l l} ^ {2} + \frac {k _ {b a l l}}{k _ {b a l l} + k _ {b a t}} e _ {b a t} ^ {2}.
1038
+ $$
1039
+
1040
+ where
+
+ - $e^{*2}$ : the fraction of energy restored to the ball and bat as kinetic energy after the collision;
+ - $\frac{k_{bat}}{k_{ball} + k_{bat}}$ : the fraction of the initial energy stored in the ball;
+ - $e_{ball}^{2}$ : the fraction of the stored energy returned to the kinetic energy of the ball;
+ - $\frac{k_{ball}}{k_{ball} + k_{bat}}$ : the fraction of the initial energy stored in the bat;
+ - $e_{bat}^{2}$ : the fraction of the stored energy returned to the kinetic energy of the bat.
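As a numerical illustration of this stiffness-weighted average (all stiffness and restitution values below are hypothetical, chosen only to show the trend):

```python
# Equivalent BBCOR (squared) of a ball-bat collision modeled as two
# springs in series: a stiffness-weighted average of the two restitutions.
# All numbers are illustrative assumptions, not measured values.

def equivalent_bbcor_sq(k_ball, k_bat, e_ball, e_bat):
    w_ball = k_bat / (k_ball + k_bat)  # fraction of initial energy stored in the ball
    w_bat = k_ball / (k_ball + k_bat)  # fraction of initial energy stored in the bat
    return w_ball * e_ball**2 + w_bat * e_bat**2

# Very stiff bat: almost all energy is stored in the (lossy) ball.
e_sq_stiff = equivalent_bbcor_sq(k_ball=1.0, k_bat=99.0, e_ball=0.5, e_bat=1.0)
# Compliant shell: the bat stores more energy and returns it efficiently.
e_sq_soft = equivalent_bbcor_sq(k_ball=1.0, k_bat=4.0, e_ball=0.5, e_bat=1.0)
print(e_sq_stiff, e_sq_soft)
```

Lowering the bat stiffness raises $e^{*2}$, which is exactly the trampoline effect examined in Appendix II.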
1071
+
1072
+ # Appendix II
1073
+
1074
+ # The Stiffness of the Corked Bat's Shell
1075
+
1076
+ The trampoline effect can be observed in hollow bats [Russell, 2004]. We have augmented our model from the fundamental model to the DS model, from which we know how the relative stiffness between ball and bat, " $k_{ball} / k_{bat}$ ", influences the values of $BBS$ and $v_f$ . Going a step further, we now discuss to what extent hollowing affects " $k_{ball} / k_{bat}$ ", and hence the equivalent BBCOR $e^*$ as well as the "sweet spot" effect.
1077
+
1078
+ # - Hoop Spring
1079
+
1080
+ The observed fundamental hoop vibration mode, which accounts for the majority of the vibrational energy and is responsible for the trampoline effect, has a frequency of about $1\mathrm{kHz}$ . This means that by the time the ball leaves the bat, about 1 ms after first contact [H. Brody, 1985], the fundamental hoop vibration mode has not yet been set up. In other words, at the moment the ball exits, it has "seen" neither the knob nor the tip of the bat. This indicates that we can view the bat locally as a hoop spring.
1081
+
1082
+ The hoop stiffness constant $k_{bat}$ can be estimated from an empirical relationship
1083
+
1084
+ $$
1085
+ k_{bat} \propto \left(\frac{t}{R}\right)^{3},
1086
+ $$
1087
+
1088
+ where
+
+ $t$ is the thickness of the shell;
+
+ $R$ is the radius of the bat cross section.
1093
+
1094
+ # - Calculation of an Instance
1095
+
1096
+ A typical wood bat has a cross-section diameter of 2.3 in, and the removed cylinder has a diameter of 1 in; accordingly, the ratio of the original (solid) stiffness to the corked stiffness can be estimated by:
1097
+
1098
+ $$
+ \frac{k_{origin}^{W}}{k_{corked}^{W}} = \left(\frac{t_{origin}^{W}}{t_{corked}^{W}}\right)^{3} = \left(\frac{2.3}{1.3}\right)^{3} \approx 5.5
+ $$
1105
+
1106
+ A typical aluminum bat can be hollowed to an extreme degree; in practice, the remaining hoop thickness of an aluminum bat can be 1/7 of that of the wood one [Nathan, 2004]; accordingly, the corresponding stiffness ratio can be estimated by:
1107
+
1108
+ $$
+ \frac{k_{origin}^{M}}{k_{hollowed}^{M}} = \left(\frac{t_{origin}^{M}}{t_{hollowed}^{M}}\right)^{3} = \left(\frac{2.3}{1.3} \times 7\right)^{3} \approx 1886.5
+ $$
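Both ratios follow from the cubic law $k_{bat} \propto (t/R)^{3}$ alone; a quick check (the quoted 1886.5 equals the rounded wood ratio 5.5 multiplied by $7^{3} = 343$):

```python
# Hoop-stiffness ratios from the cubic law k ∝ (t/R)^3: with the same outer
# radius, a stiffness ratio reduces to a thickness ratio cubed.
wood_ratio = (2.3 / 1.3) ** 3             # solid wood vs. corked wood shell
alu_ratio = round(wood_ratio, 1) * 7**3   # aluminum shell ~1/7 the wood-shell thickness
print(round(wood_ratio, 1), alu_ratio)
```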
1115
+
1116
+ ![](images/7ae692462f9d31ae80461e39fdea5d72d6ddde0befacba3e9195a88c4cee8272.jpg)
1117
+
1118
+ Based on the figure above, we can see that the trampoline effect of a thin metal-shell bat is quite significant, while that of a corked wood bat is not.
1119
+
1120
+ # Appendix III
1121
+
1122
+ Codes for MATLAB
+
+ % Exit-velocity surface over drilled-hole depth h and radius R, for two filler densities (rho1: cork-like, rho2: rubber-like).
+ [h,R] = meshgrid(0:0.0005:0.2);
+ R = R/10;
+ d = R*2; L = .84; a = .564; M = .885; rho1 = 450; rho2 = 1100;
+ % Moment of inertia for each filler density
+ J1 = 0.045+pi/48.*(rho1-670).*(d.^2.*h).*(.75*d.^2+h.^2)+pi/4.*(rho1-670).*d.^2.*h.*(L-a-h/2).^2+(M+pi/4.*(rho1-670).*d.^2.*h).*((M.*(L-a)+pi/8.*(rho1-670).*d.^2.*h.^2)./(M+pi/4.*(rho1-670).*d.^2.*h)-L+a).^2;
+ J2 = 0.045+pi/48.*(rho2-670).*(d.^2.*h).*(.75*d.^2+h.^2)+pi/4.*(rho2-670).*d.^2.*h.*(L-a-h/2).^2+(M+pi/4.*(rho2-670).*d.^2.*h).*((M.*(L-a)+pi/8.*(rho2-670).*d.^2.*h.^2)./(M+pi/4.*(rho2-670).*d.^2.*h)-L+a).^2;
+ % Bat mass after drilling and refilling
+ M1 = 0.885+(rho1-670)*3.14.*h.*R.^2;
+ M2 = 0.885+(rho2-670)*3.14.*h.*R.^2;
+ % Empirical swing-speed fits
+ vi1 = (6032./(M1./0.02835+70.4)-5.4).*0.447;
+ vi2 = (6032./(M2./0.02835+70.4)-5.4).*0.447;
+ JJ1 = (J1+0.885*0.564).*54674.7;
+ JJ2 = (J2+0.885*0.564).*54674.7;
+ w1 = 45.3.*(JJ1/16000).^(-0.30769);
+ w2 = 45.3.*(JJ2/16000).^(-0.30769);
+ b1 = -(vi1+40)./w1+(((vi1+40)./w1).^2+J1.*(0.145+M1)./(0.145.*M1)).^0.5;
+ b2 = -(vi2+40)./w2+(((vi2+40)./w2).^2+J2.*(0.145+M2)./(0.145.*M2)).^0.5;
+ vmax1 = ((1+0.564).*(vi1+w1.*b1)-40.*(0.145./M1+0.145.*b1.^2./J1-0.564))./(1+0.145./M1+0.145.*b1.^2./J1);
+ vmax2 = ((1+0.564).*(vi2+w2.*b2)-40.*(0.145./M2+0.145.*b2.^2./J2-0.564))./(1+0.145./M2+0.145.*b2.^2./J2);
+ % drawing 1
+ figure(1); meshc(h,R,vmax1);
+ figure(2);
+ % ma = max(max(vmax1)); mi = min(min(vmax1));
+ C1 = contour(h,R,vmax1,10); clabel(C1);
+ % drawing 2
+ figure(3); meshc(h,R,vmax2);
+ figure(4);
+ % ma = max(max(vmax2)); mi = min(min(vmax2));
+ C2 = contour(h,R,vmax2,10); clabel(C2);
MCM/2010/A/7571/7571.md ADDED
@@ -0,0 +1,779 @@
1
+ # An optimal model of "Sweet Spot" effect
2
+
3
+ Summary
4
+
5
+ Various definitions of the Sweet Spot have been given by former researchers. We take into consideration both the exit velocity and the comfort degree. Hence, the "Sweet Spot" in our model is defined as the optimal hitting location, resulting in a high exit velocity while reducing the impact force on the hands to the lowest degree. A concept of the Sweet Zone is also defined in our model for deeper study of the performance of a bat.
6
+
7
+ Based on the acceleration theorem and the moment of momentum theorem, our optimal model of the Sweet Spot is established. The Sweet Spot is found to be $18.21\mathrm{cm}$ from the end of the bat, with a resulting exit velocity of $3915.8\mathrm{cm/s}$ . In contrast, the exit velocity at the end of the bat is $3129.6\mathrm{cm/s}$ , which confirms that the Sweet Spot is not located at the end of the bat. The analysis of $\lambda$ (preference coefficient) shows that the location of the Sweet Spot is insensitive to the value of $\lambda$ , proving the stability of our model. Sensitivity analysis of the mass of the bat and the swing speed suggests that a greater bat mass and a higher swing speed lead to a higher exit velocity of the batted ball.
8
+
9
+ Then, our model is augmented to evaluate the performance of a corked bat, reaching the conclusion that the exit velocity of a corked bat is lower than that of a normal bat. Furthermore, the influence that the length of the cork has on the Sweet Spot effect is discussed. Based on model analysis, we explain why corked bats are banned in most games.
10
+
11
+ To compare the properties of an aluminum bat with those of a wooden one, our optimal model of the Sweet Spot is extended, showing that the Sweet Spot is $20.15\mathrm{cm}$ from the end of the bat, where the exit velocity is $4258.2\mathrm{cm/s}$ . Compared with a wooden bat, an aluminum bat shows obvious superiority, including a higher exit velocity, a wider Sweet Zone and a closer distance to the pivot. In the sensitivity analysis, the wall thickness is discussed in detail. Based on our model, we analyze the reason why aluminum bats are banned.
12
+
13
+ Furthermore, the strengths and weaknesses of our model are given, and further discussion is offered.
14
+
15
+ Lastly, we summarize all the conclusions in order to give batters reasonable suggestions from different aspects.
16
+
17
+ # Contents
18
+
19
+ 1. Introduction
+
+ 1.1. Definition of a sweet spot
+ 1.2. Summary of Our Approach
+
+ 2. Assumption and definition
+
+ 2.1. General assumptions
+ 2.2. Definitions
+
+ 3. Where is the Sweet Spot?
+
+ 3.1. Assumptions
+ 3.2. Model establishment
+
+ 3.2.1. Objective formula
+ 3.2.2. Comfort index function $E_{batter}$ [2]
+ 3.2.3. Exit velocity index function $E_{ball}$
+
+ 3.3. Parameter determination
+
+ 3.3.1. Parameters in the objective function
+ 3.3.2. Parameters in the comfort index function
+ 3.3.3. Parameters in batted ball velocity index function
+
+ 3.4. Solution and Analysis
+ 3.5. Sensitivity analysis
+
+ 3.5.1. Analysis of $\lambda$ (Preference coefficient)
+ 3.5.2. Analysis of the mass of a bat
+ 3.5.3. Analysis of the swing speed
+
+ 4. The behavior of a "corked" bat
+
+ 4.1. Assumption
+ 4.2. Simplified model of the shape of a corked bat
+
+ 4.2.1. Mass of the bat
+ 4.2.2. The center of mass
+ 4.2.3. Moment of inertia
+ 4.2.4. Swing speed
+
+ 4.3. Parameter determination
+
+ 4.3.1. Mass of the bat
+ 4.3.2. Length from the center of mass to the pivot
+ 4.3.3. Moment of inertia to the center of mass $(J_{o})$
+ 4.3.4. Swing speed
+
+ 4.4. Solution and analysis
+
+ 4.4.1. Influence on the location of the Sweet Spot
+ 4.4.2. Influence on the exit speed
+ 4.4.3. Influence on the comfort index
+ 4.4.4. Influence on the Sweet Zone
+
+ 4.5. Why is corked bat forbidden?
+
+ 5. Aluminum vs. wood
+
+ 5.1. Parameter determination
+ 5.2. Solution and analysis
+ 5.3. Performance comparison between wooden bat and aluminum bat
+
+ 5.3.1. Exit velocity
+ 5.3.2. Sweet Zone
+ 5.3.3. The sensitivity analysis of the wall thickness (t)
+
+ 5.4. Why aluminum bat is banned?
+ 6. Superiority and weakness
+ 6.1. Superiorities of our model
+ 7. Further discussion
+ 7.1. Weaknesses of our model
+ 8. Suggestions given to batters
+ 9. References
95
+
96
+ # 1. Introduction
97
+
98
+ # 1.1. Definition of a sweet spot
99
+
100
+ There are many definitions of the "Sweet Spot" on a baseball bat. The reason there is no uniform definition is that research on the "Sweet Spot" is not only conducted with different methods and theories, but also considered from different aspects. In most earlier research, the "Sweet Spot" referred to the center of percussion, studied using the momentum theorem. The nodes of the first or second modes of vibration have become a popular definition recently, since collision and vibration theories were introduced. Some define a "Sweet Spot" in order to get the greatest exit velocity of the batted ball, while others aim at the comfort of batters.
101
+
102
+ In our model, we take into consideration both the exit velocity and the comfort degree. So, the "Sweet Spot" in our model is defined as the optimal hitting location, resulting in a high exit velocity while reducing the impact force on the hands to the lowest degree.
103
+
104
+ # 2. Assumption and definition
105
+
106
+ # 2.1. General assumptions:
107
+
108
+ - Units: length is expressed in cm, time in s and mass in g.
+ - The mass of a bat is evenly distributed.
+ - During the collision procedure, the bat does not break.
+ - The rotation of the ball is not taken into consideration.
+ - Only the factor of material is taken into consideration.
113
+
114
+ # 2.2. Definitions:
115
+
116
+ $e$ is the coefficient of restitution, defined as the ratio of the relative velocity after the collision to that before the collision.
117
+
118
+ exit velocity is the speed at which a ball moves away after impacting the bat.
119
+
120
+ # 3. Where is the Sweet Spot?
121
+
122
+ # 3.1. Assumptions:
123
+
124
+ In this section, only a wooden bat is considered.
125
+
126
+ # 3.2. Model establishment
127
+
128
+ # 3.2.1. Objective formula
129
+
130
+ As mentioned in the introduction, a "Sweet Spot" is defined as the optimal hitting location, resulting in a high exit velocity while reducing the impact force on the hands to the lowest degree.
131
+
132
+ Based on the consideration of both factors, we get the objective
133
+
134
+ $$
135
+ f = \lambda E_{batter} + (1 - \lambda) E_{ball}
136
+ $$
137
+
138
+ where
+
+ $\lambda$ is the preference coefficient: it denotes how the evaluator weighs each of the factors taken into consideration (e.g. $\lambda = 0$ denotes that only the highest exit velocity is pursued, ignoring the comfort degree for batters);
+
+ $E_{batter}$ and $E_{ball}$ are respectively the comfort index function and the exit velocity index function, which are established below.
143
+
144
+ # 3.2.2. Comfort index function $E_{batter}$ [2]
145
+
146
+ ![](images/cb7e8f946a67130cd5a96cfe2ea8bb8e10070a31cff660909c4c98a288d47e3e.jpg)
147
+ Figure 1 the pivot point[1]
148
+
149
+ Point $o$ is the point (actually a zone) where a batter holds the bat.
150
+
151
+ When colliding with the fast-flying ball, the bat is impacted by a force $F$ , which indirectly exerts a force on the batter's hands. The force analysis is as follows (see Figure 2):
152
+
153
+ ![](images/889d6bb1b9ef12c144037534e8c87d99bab31e371c3b487191009be1c54be788.jpg)
154
+ Figure 2
155
+
156
+ ![](images/eafce285fe93f761a039ed2099645e9aa5a401ebcf9a5db9e2760256ff1387fe.jpg)
157
+
158
+ According to the theory of mechanics, force $F$ is equivalent to a moment (equal to $Fc$ ) and a force (equal to $F$ ) applied at the mass center of the bat (see Figure 2).
161
+
162
+ The acceleration at point $o$ caused by the impulsive force:
163
+
164
+ $$
165
+ a_{1} = \frac{F}{M_{bat}}
166
+ $$
167
+
168
+ The angular acceleration of the bat caused by the equivalent moment equals $\frac{Fc}{J_{cm}}$ .
169
+
170
+ Then, the acceleration at point $o$ caused by the equivalent moment equals:
171
+
172
+ $$
173
+ a_{2} = \frac{F c d}{J_{cm}}
174
+ $$
175
+
176
+ ![](images/7ab09d32516720f7f6ed1e0bf3bf757367b0218a51184ab9f4c71dd70410fc3d.jpg)
177
+ Figure 3 speed before and after collision
178
+
179
+ ![](images/1a82a103630bd261a95ea1bcf4e66ee69c5d28e19988408aba0a9a2981ee4e0f.jpg)
180
+
181
+ Since the acceleration at point $O$ is a composition of the accelerations caused by both the impulsive force and the equivalent moment, $a$ can be expressed as:
182
+
183
+ $$
184
+ a = a_{1} - a_{2} = \frac{F}{M_{bat}} - \frac{F c d}{J_{cm}}
185
+ $$
186
+
187
+ When the impulsive force strikes a certain point of the bat, the force applied on the hands will be zero, since the acceleration at point $O$ will be zero.
188
+
189
+ So, the acceleration of point $o$ ranges within $\left[0, \frac{F}{M_{bat}}\right]$ , from which we define the comfort index:
190
+
191
+ $$
192
+ E_{batter} = 1 - 0.3123 \left(\frac{M_{bat} d}{J_{cm}} c - 1\right)^{2}
193
+ $$
194
+
195
+ - Clearly, the value of $E_{batter}$ ranges from 0 to 1.
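With the wooden-bat parameters determined in Section 3.3 ($M_{bat} = 883.92$ g, $d = 44.89$ cm, $J_{cm} = 331590 \, \mathrm{g \cdot cm^2}$), the constant 0.3123 appears to normalize the index so that $E_{batter} = 0$ for an impact at the bat's tip; a quick check (variable names are ours):

```python
# Comfort index E_batter = 1 - 0.3123*(M*d/J_cm * c - 1)^2, where c is the
# distance (cm) from the impact point to the bat's center of mass.
M_bat, d, J_cm = 883.92, 44.89, 331590.0  # Section 3.3 values

def e_batter(c):
    return 1 - 0.3123 * (M_bat * d / J_cm * c - 1) ** 2

c_tip = 68.2 - d             # impact at the tip of the bat
c_zero = J_cm / (M_bat * d)  # impact point where the hands feel zero force
print(e_batter(c_tip), e_batter(c_zero), d + c_zero)
```

The zero-force point lies at $b = d + c \approx 53.25$ cm from the pivot, matching the most comfortable row ($E_{batter} = 1$) of Table 2.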
196
+
197
+ # 3.2.3. Exit velocity index function $E_{ball}$ :
198
+
199
+ It is an index representing the velocity at which the ball moves away after the collision.
200
+
201
+ According to the moment of momentum theorem, we can get the following expression:
202
+
203
+ $$
204
+ J_{o} w_{1} - M_{ball} v_{1} b = J_{o} w_{2} + M_{ball} v_{2} b \tag{1.1}
205
+ $$
206
+
207
+ e is the coefficient of restitution, which is defined as the ratio of the relative velocity after collision to that before the collision.
208
+
209
+ $$
210
+ e = \frac{v_{2} - w_{2} b}{v_{1} + w_{1} b} \tag{1.2}
211
+ $$
212
+
213
+ Combining (1.1) and (1.2), $v_{2}$ can be expressed as:
214
+
215
+ $$
216
+ v_{2} = \frac{e \left(v_{1} + w_{1} b\right) J_{o} - v_{1} M_{ball} b^{2} + w_{1} b J_{o}}{J_{o} + M_{ball} b^{2}}
217
+ $$
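This closed form follows from eliminating $w_2$ between (1.1) and (1.2); a numerical consistency check (all numbers are arbitrary test values, not model parameters):

```python
# Verify that the closed form for v2 satisfies both the moment-of-momentum
# balance (1.1) and the restitution definition (1.2).
Jo, M, v1, w1, b, e = 2.0e6, 145.0, 3000.0, 40.0, 50.0, 0.55

v2 = (e*(v1 + w1*b)*Jo - v1*M*b**2 + w1*b*Jo) / (Jo + M*b**2)
w2 = (v2 - e*(v1 + w1*b)) / b   # bat angular speed after impact, from (1.2)
lhs = Jo*w1 - M*v1*b            # moment of momentum before impact
rhs = Jo*w2 + M*v2*b            # moment of momentum after impact
print(lhs, rhs)
```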
218
+
219
+ According to Alan M. Nathan's research, the value of $e$ is related to the ratio of the force constant of the bat ( $k_{bat}$ ) to that of the ball ( $k_{ball}$ ). This ratio varies with the impact location on the bat.
220
+
221
+ Hence, $e$ is a function of the distance from the hands-holding point to the impact point, and the expression above can be rewritten as [3]:
222
+
223
+ $$
224
+ v_{2} = \frac{e(b) \left(v_{1} + w_{1} b\right) J_{o} - v_{1} M_{ball} b^{2} + w_{1} b J_{o}}{J_{o} + M_{ball} b^{2}} \tag{1.3}
225
+ $$
226
+
227
+ According to (1.3), we can calculate the maximum velocity and the minimum velocity of the ball moving away and get the velocity index expression as follows:
228
+
229
+ $$
230
+ E_{ball} = \frac{v_{2} - v_{\mathrm{min}}}{v_{\mathrm{max}} - v_{\mathrm{min}}}
231
+ $$
232
+
233
+ which ranges from 0 to 1.
234
+
235
+ # 3.3. Parameter determination
236
+
237
+ # 3.3.1. Parameters in the objective function
238
+
239
+ $\lambda$ represents how much emphasis we lay on the two factors (the comfort degree for a batter and the batted ball velocity). Here, we attach the same importance to both of them, so $\lambda = 0.5$ .
240
+
241
+ # 3.3.2. Parameters in the comfort index function
242
+
243
+ We simplify the shape of a bat as two coaxial cylinders with different diameters (see [4][5]):
244
+
245
+ - Length of the bat: $L = 85 \, \text{cm}$ ;
+ - Radius of the thin part: $r = 1.25 \, \text{cm}$ ;
+ - Radius of the fat part: $R = 3.5 \, \text{cm}$ ;
+ - Length of the thin part: $L_{1} = 53.55 \, \text{cm}$ ;
+ - Length of the fat part: $L_{2} = 31.45 \, \text{cm}$ ;
+ - Density of wood: $\rho = 0.6 \, \text{g/cm}^{3}$ ;
+ - Length from the pivot to the end of the thin part of the bat: $16.8 \, \text{cm}$ .
253
+
254
+ ![](images/80c420a8076453bc3a40b0780ef07339c23757fa06a5f48c0707940cb5849ebc.jpg)
255
+ Figure 4 simplified model of a bat
256
+
257
+ The mass of the bat can be calculated as:
258
+
259
+ $$
260
+ M_{bat} = \pi r^{2} L_{1} \rho + \pi R^{2} L_{2} \rho = 883.92 \, \mathrm{g}
261
+ $$
262
+
263
+ Length from the center of mass to the pivot (which lies 16.8 cm from the end of the thin part):
264
+
265
+ $$
266
+ L_{cm} = \frac{\pi r^{2} L_{1} \rho \cdot \frac{L_{1}}{2} + \pi R^{2} L_{2} \rho \cdot \left(\frac{L_{2}}{2} + L_{1}\right)}{\pi r^{2} L_{1} \rho + \pi R^{2} L_{2} \rho} - 16.8 = 44.89 \, \mathrm{cm}
267
+ $$
268
+
269
+ The moment of inertia to the center of mass(CM) :
270
+
271
+ $$
272
+ \begin{array}{l} J_{cm} = \pi r^{2} \rho L_{1} \cdot \frac{L_{1}^{2}}{12} + \pi r^{2} \rho L_{1} \left(L_{cm} + 16.8 - \frac{L_{1}}{2}\right)^{2} + \pi R^{2} \rho L_{2} \cdot \frac{L_{2}^{2}}{12} + \pi R^{2} \rho L_{2} \left(L - L_{cm} - 16.8 - \frac{L_{2}}{2}\right)^{2} \\ = 331590 \, \mathrm{g \cdot cm^{2}} \end{array}
273
+ $$
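These three parameters can be reproduced directly from the two-cylinder geometry (a sketch; note that the distances in the parallel-axis terms are measured from the knob end of the bat, i.e. $L_{cm} + 16.8$ cm):

```python
import math

# Two-coaxial-cylinder wooden bat, Section 3.3.2 (lengths in cm, mass in g).
L, r, R = 85.0, 1.25, 3.5
L1, L2, rho = 53.55, 31.45, 0.6

m1 = math.pi * r**2 * L1 * rho   # mass of the thin cylinder
m2 = math.pi * R**2 * L2 * rho   # mass of the fat cylinder
M_bat = m1 + m2

cm_from_end = (m1 * L1/2 + m2 * (L1 + L2/2)) / M_bat
L_cm = cm_from_end - 16.8        # center of mass to pivot (pivot 16.8 cm from the end)

# Moment of inertia about the center of mass (parallel-axis theorem).
J_cm = (m1 * L1**2 / 12 + m1 * (cm_from_end - L1/2)**2
        + m2 * L2**2 / 12 + m2 * (L - cm_from_end - L2/2)**2)
print(round(M_bat, 2), round(L_cm, 2), round(J_cm))
```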
274
+
275
+ # 3.3.3. Parameters in batted ball velocity index function
276
+
277
+ - According to published research, the initial velocity of the ball (before collision) equals:
+
+ $$
+ v_{1} = 70 \, \mathrm{mph} = 112.7 \, \mathrm{km/h} = 3130.56 \, \mathrm{cm/s} \quad [6]
+ $$
282
+
283
+ The mass of a ball: $M_{\text {ball}} = 145 \, \text{g}$ [7]
284
+
285
+ - According to the empirical formula,
286
+
287
+ $$
288
+ v = -0.6625 M_{bat} + 3354 = 2768.39 \, \mathrm{cm/s}, \quad \text{[8]} \tag{1.4}
289
+ $$
290
+
291
+ So, taking the swing speed of (1.4) as the speed of the bat tip, the angular speed is $w_{1} = \frac{v}{L - 16.8} = \frac{2768.39}{68.2} = 40.59 \, \mathrm{rad/s}$ .
292
+
293
+ The moment of inertia to the pivot point :
294
+
295
+ $$
296
+ J_{o} = J_{cm} + M_{bat} d^{2} = 2112800 \, \mathrm{g \cdot cm^{2}}, \quad d = 44.89 \, \mathrm{cm}
297
+ $$
298
+
299
+ The value of e(coefficient of restitution)
300
+
301
+ According to the research done by Alan M. Nathan, the value of $e$ varies with the impact location along the length of the bat. A set of data is shown as follows:
302
+
303
+ (Nathan_Trampoline-ISEA2004)
304
+
305
+ Table 1
306
+
307
+ <table><tr><td>68.2-b (cm)</td><td>12.7</td><td>15.24</td><td>17.78</td><td>20.32</td><td>22.84</td><td>25.4</td></tr><tr><td>e</td><td>0.44</td><td>0.551</td><td>0.589</td><td>0.593</td><td>0.568</td><td>0.525</td></tr></table>
308
+
309
+ We use SPSS software to fit this set of data, getting the expression of e:
310
+
311
+ $$
312
+ e = \left\{ \begin{array}{ll} -0.5294 + 0.113(68.2 - b) - 0.0028(68.2 - b)^{2} & \text{when } e > 0.4 \\ 0.4 & \text{when } e \leq 0.4 \end{array} \right.
313
+ $$
314
+
315
+ where $R^{2} = 0.962$ , $S = 0.007$ and $F = 38.01$ , showing a high fitting degree.
322
+
323
+ (Considering that all given values of $e$ are greater than 0.4, we assume $e = 0.4$ whenever the fitted value satisfies $e \leq 0.4$ .)
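With every parameter fixed, Eq. (1.3) together with the fitted $e(b)$ determines the exit velocity at each impact location $b$ (cm from the pivot); a sketch that scans the curve (variable names are ours):

```python
# Exit velocity v2(b) from Eq. (1.3) with the fitted restitution e(b).
J_o = 2112800.0   # g*cm^2, moment of inertia about the pivot
M_ball = 145.0    # g
v1 = 3130.56      # cm/s, incoming pitch speed
w1 = 40.59        # rad/s, bat angular speed before impact

def e_fit(b):
    s = 68.2 - b  # distance from the bat tip to the impact point
    return max(0.4, -0.5294 + 0.113*s - 0.0028*s**2)

def v2(b):
    return (e_fit(b)*(v1 + w1*b)*J_o - v1*M_ball*b**2 + w1*b*J_o) / (J_o + M_ball*b**2)

bs = [30 + 0.01*i for i in range(3821)]  # scan b over [30, 68.2]
b_best = max(bs, key=v2)
print(round(b_best, 2), round(v2(b_best), 1), round(v2(68.2), 1))
```

The scan reproduces the values reported in Section 3.4: a maximum of roughly 3939 cm/s near $b \approx 48.7$ cm, but only about 3130 cm/s at the end of the bat.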
328
+
329
+ # 3.4. Solution and Analysis
330
+
331
+ Using the Matlab software we calculate the comfort index, the velocity index and the value of our objective function, shown in Figure 5 and Table 2.
+
+ ![](images/188990876ff6dcc261acbc1ca51db689f492325e6eaeac064ef51f6f0849e879.jpg)
+ Figure 5
+
+ Table 2
338
+
339
+ <table><tr><td>b (cm)</td><td>EBatter</td><td>EBall</td><td>v2 (cm/s)</td><td>f</td></tr><tr><td>53.25</td><td>1</td><td>0.7693</td><td>3665.7</td><td>0.8874</td></tr><tr><td>48.66</td><td>0.9059</td><td>1</td><td>3938.7</td><td>0.9530</td></tr><tr><td>49.99</td><td>0.9526</td><td>0.9809</td><td>3915.8</td><td>0.9666</td></tr><tr><td>68.2</td><td>0</td><td>0.3163</td><td>3129.6</td><td>0.1582</td></tr></table>
340
+
341
+ # Conclusion:
342
+
343
+ - The most comfortable hitting point is not the maximum-exit-velocity point, with a difference of $6.93\%$ from the maximum speed.
344
+ - Conversely, at the maximum-exit-velocity point, the comfort index equals 0.91, leading to a less comfortable feeling in the hands. So, batters have to sacrifice some comfort to pursue the highest exit velocity.
345
+ - From the curve of the objective function, we can get the optimal solution of the objective formula: The sweet spot is located at the point $18.21\mathrm{cm}$ apart from the end of a bat. Hitting at this point, batters can obtain a very high speed $(3915.8\mathrm{cm/s}$ compared with the maximum value $3938.7\mathrm{cm/s}$ ), and enjoy a high degree of comfort (0.95) at the same time.
346
+ - In contrast, hitting at the end of the bat is clearly not a wise decision, since it brings a relatively low exit velocity (3129.6 cm/s compared with the maximum value of 3938.7 cm/s) and a comfort degree of 0, which does harm to batters' muscles and ligaments and sometimes even breaks the bat itself. The contrast between hitting at the Sweet Spot and at the end of the bat is clearly shown in Table 3.
347
+ - Thus, it has been proved that the sweet spot, hitting which will result in high exit velocity and high degree of comfort, is located 18.21cm apart from the end of a bat.
348
+
349
+ Table 3
350
+
351
+ <table><tr><td></td><td>Comfort index</td><td>Exit velocity(cm/s)</td><td>Value of objective formula</td></tr><tr><td>Sweet point</td><td>0.95</td><td>3915.8</td><td>0.9666</td></tr><tr><td>End of a bat</td><td>0</td><td>3129.6</td><td>0.3163</td></tr></table>
352
+
353
+ # 3.5. Sensitivity analysis
354
+
355
+ # 3.5.1. Analysis of $\lambda$ (Preference coefficient)
356
+
357
+ In the former analysis, we assumed $\lambda = 0.5$ (i.e. we attach the same importance to both comfort and velocity). Here, we vary the value of $\lambda$ from 0.1 to 0.9, getting nine curves that show the relation between the value of the objective formula and the impact location (shown in Figure 6).
364
+
365
+ ![](images/7c6f7ecc6b9dd2ab690d5dcd90e42745f444444f1302d254d96a6d9bfc25353d.jpg)
366
+ Figure 6
367
+
368
+ From Figure 6, we find that the sweet spots for each value of $\lambda$ are relatively concentrated.
369
+
370
+ Figure 7 is used to show the changes of sweet spots' location when the value of $\lambda$ varies.
371
+
372
+ ![](images/bce5135e0ad0d519d232868f6173fb81967488de7dbbc014780b0e390bbe2bef.jpg)
373
+ Figure 7
374
+
375
+ - Locations of Sweet Spots vary within [48.5, 52.5], which is a zone with a length of $4 \mathrm{~cm}$ , showing that our model is stable.
376
+ - With the increase of the value of $\lambda$ , the location of the Sweet Spot moves closer to the end of a bat. Thus, in order to obtain higher exit speed of the batted ball, a batter is advised to hit the ball closer to hands within the zone [48.5, 52.5], though he will consequently feel harder impact on his hands.
377
+
378
+ # 3.5.2. Analysis of the mass of a bat
379
+
380
+ # 3.5.2.1. Influence on the location of Sweet Spot
381
+
382
+ In former analysis, we assume the mass of bat as 883.92g. Here, we change the value of the mass of a bat to see how the location of Sweet Spot changes. (shown in Figure 8)
383
+
384
+ ![](images/27925fb0759047ce1a2e38ed0db9566dd4da33aac0c93fb8c9b7f2ba7846bb81.jpg)
385
+ Figure 8
386
+
387
+ From Figure 8, basic conclusions can be given:
388
+
389
+ - The greater the mass of a bat, the closer the Sweet Spot is located to the end of the bat.
390
+ - This influence is so slight that it can be ignored. Therefore, we can safely say that the location of the Sweet Spot stays stable no matter how much the bat weighs.
391
+
392
+ # 3.5.2.2. Influence on the exit velocity
393
+
394
+ ![](images/249b86e69b94031e5464eb1fce40121cdd22002a9e035c5f8d7d672caf235b34.jpg)
395
+ Figure 9
396
+
397
+ Table 4
398
+
399
+ <table><tr><td>Change of the mass of a bat(in ratio)</td><td>-0.1</td><td>-0.05</td><td>0</td><td>0.05</td><td>0.1</td></tr><tr><td>Location of Sweet Spot</td><td>49.91</td><td>49.95</td><td>49.99</td><td>50.03</td><td>50.06</td></tr><tr><td>Exit velocity</td><td>3805.9</td><td>3863.3</td><td>3915.8</td><td>3963.9</td><td>4008.5</td></tr><tr><td>Changing ratio of exit velocity</td><td>-2.81%</td><td>-1.34%</td><td>0</td><td>1.23%</td><td>2.37%</td></tr></table>
400
+
401
+ From Figure 9 and Table 4, we can get the basic conclusions as follows:
402
+
403
+ - The heavier a bat is, the higher the exit velocity it produces. This conclusion matches our expectation: as the mass of the bat increases, the moment of inertia increases, which means more energy can be transferred to the ball, and consequently the ball moves away at a higher speed.
404
+ - Hence, in order to obtain higher exit velocity, a batter should choose heavier bat within his own capability, since a heavier bat obviously has higher demand on muscle strength and endurance.
405
+
406
+ # 3.5.3. Analysis of the swing speed
407
+
408
+ In former analysis we identify the swing speed by the empirical formula (1.4). Here, we change the value of swing speed by $\pm 10\%$ in order to research the influence of swing speed.
409
+
410
+ The influence on the exit velocity: using our model, we calculate the exit velocity curves corresponding to different values of the swing speed (shown in Figure 10).
411
+
412
+ ![](images/02c852198382cf13dba1b8c7d9f1ac0a3715055c2d99c21fe9f21202925b7625.jpg)
413
+ Figure 10
414
+
415
+ We can see that the overall exit velocity changes with the change of swing speed while the shape of the curves stays stable. The relation between swing speed and exit speed is more clearly shown in Table 5
416
+
417
+ Table 5
418
+
419
+ <table><tr><td>The change of swing speed(in ratio)</td><td>Exit speed (cm/s )</td><td>variance of exit speed(in ratio)</td></tr><tr><td>-0.1</td><td>3665.5</td><td>-6.94%</td></tr><tr><td>0</td><td>3938.7</td><td>0</td></tr><tr><td>0.1</td><td>4212.4</td><td>+6.95%</td></tr></table>
420
+
421
+ - From Table 5, we can safely say that a higher swing speed results in a higher exit speed.
+ - The location of the Sweet Spot stays stable when the swing speed varies.
+ - Hence, batters are advised to improve their muscle strength in order to achieve a higher swing speed and consequently a higher exit speed of the ball.
424
+
425
+ # 4. The behavior of a "corked" bat
426
+
427
+ Historically, players using corked bats have performed impressively, even hitting home runs with them. People became curious about the properties of the corked bat, and it has become a controversial issue: which indeed performs better, the corked bat or the uncorked one?
428
+
429
+ In this section, we augment our model to solve this problem and further discussion is given based on model analysis. We analyze the influence the cork length lays on a bat's performance and we conclude why the corked bat is banned.
430
+
431
+ ![](images/9fbd997899ecf87d74f2f648f2baa6f3e7cdb4395887ae0026eaadd1d3e018f7.jpg)
432
+ Figure 11
433
+
434
+ # 4.1. Assumption
435
+
436
+ - "replacing a wood cap" is not taken into consideration
437
+
438
+ # 4.2. Simplified model of the shape of a corked bat
439
+
440
+ We continue to use the two-coaxial-cylinder model and simulate the cork as a cylinder with lower density (see Figure 12).
441
+
442
+ ![](images/89db5b18507224fe1c1e0c3b82cb0dc22e3a4c58e2b0ee0da5299d221013e24f.jpg)
443
+ Figure 12 simplified model of a corked bat
444
+
445
+ Basic parameters can be expressed as follows:
446
+
447
+ # 4.2.1. Mass of the bat
448
+
449
+ Typically, producers drill a cylinder about 2.54 cm in diameter and 25.4 cm deep and fill the hole with rubber or cork, which reduces the mass of the bat by 42.5 g.[9]
450
+
451
+ Since a 25.4 cm long cork reduces the mass by 42.5 g, the mass is reduced by $1.67x$ grams when the length of the cork equals $x$ cm and the diameter is kept at 2.54 cm.
452
+
453
+ Then, the mass of the corked bat decreases to:
454
+
455
+ $$
456
+ M_{\text{corked}} = 883.92 - 1.67x
457
+ $$
458
+
459
+ # 4.2.2. The center of mass:
460
+
461
+ The re-distribution of mass will lead to the displacement of the center of mass of a bat.
462
+
463
+ The length from the pivot point to the center of mass of a corked bat equals:
464
+
465
+ $$
466
+ d_{\text{corked}} = \frac{M_{\text{normal}} d_{\text{normal}} - 1.67x \left(68.2 - \frac{x}{2}\right)}{M_{\text{corked}}}
467
+ $$
468
+
469
+ # Where
470
+
471
+ $x$ is the length of the cork.
472
+
473
+ $M_{\text{normal}}$ is the mass of a normal (uncorked) bat.
474
+
475
+ $d_{\text{normal}}$ is the length from the pivot point to the center of mass of a normal bat.
476
+
477
+ $M_{\text{corked}}$ is the mass of a corked bat.
478
+
479
+ # 4.2.3. Moment of inertia
480
+
481
+ The displacement of the center of mass and the redistribution of mass inevitably lead to a decrease in the moment of inertia, which can be expressed as follows:
482
+
483
+ The moment of inertia to the center of mass:
484
+
485
+ $$
486
+ J_{cm-corked} = J_{cm-normal} - 883.92 \left(d_{normal} - d_{corked}\right)^{2} - \frac{1}{12} \times 1.67x \cdot x^{2} - 1.67x \left(68.2 - d_{normal} - 0.5x\right)^{2}
487
+ $$
488
+
489
+ The moment of inertia to the pivot point:
490
+
491
+ $$
492
+ J_{o-corked} = J_{o-normal} - \frac{1}{12} \times 1.67x \cdot x^{2} - 1.67x \left(68.2 - 0.5x\right)^{2}
493
+ $$
494
+
495
+ # 4.2.4. Swing speed:
496
+
497
+ Being lighter and having a smaller moment of inertia, a corked bat is easier to swing. The swing speed therefore gets higher, which tends to increase the batted-ball velocity.
498
+
499
+ The swing speed can be expressed according to the empirical formula[10]:
500
+
501
+ $$
502
+ \frac{v}{44.72} = -\frac{M_{\text{corked}}}{67.5} + 75
503
+ $$
504
+
505
+ # Where
506
+
507
+ $v$ is the swing speed (the speed at the end of the swung bat)
508
+
509
+ $M_{\text{corked}}$ is the mass of a corked bat.
510
+
511
+ The angular speed of the bat equals:
512
+
513
+ $$
514
+ w_{1} = \frac{v}{68.2}
515
+ $$
516
+
517
+ # 4.3. Parameter determination
518
+
519
+ In order to study the influence that the length of the cork has on the bat's properties, we vary the value of $x$ and observe how the Sweet Spot behavior changes.
520
+
521
+ When $x = 0, 5, 10, 15, 20\ \mathrm{cm}$, we calculate the following parameters:
522
+
523
+ # 4.3.1. Mass of the bat:
524
+
525
+ Table 6
526
+
527
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>Mass of bat (g)</td><td>883.92</td><td>875.57</td><td>867.22</td><td>858.87</td><td>850.52</td></tr></table>
528
+
529
+ # 4.3.2. Length from the center of mass to the pivot
530
+
531
+ Table 7
532
+
533
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>Length from center of mass to the pivot (cm)</td><td>44.89</td><td>44.69</td><td>44.54</td><td>44.43</td><td>44.37</td></tr></table>
534
+
535
+ # 4.3.3. Moment of inertia to the center of mass $\left(J_{cm}\right)$
536
+
537
+ Table 8
538
+
539
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>J_cm (g·cm²)</td><td>331590</td><td>327920</td><td>325740</td><td>324670</td><td>324320</td></tr></table>
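Tables 6-8 can be reproduced from the formulas of Sections 4.2.1-4.2.3. A sketch in plain Python, taking the x = 0 column (883.92 g, 44.89 cm, 331590 g·cm²) as the uncorked baseline:

```python
M_NORMAL = 883.92       # g, mass of the uncorked bat
D_NORMAL = 44.89        # cm, pivot-to-center-of-mass distance
J_CM_NORMAL = 331590.0  # g*cm^2, moment of inertia about the center of mass

def corked(x):
    """Mass, center-of-mass distance and J_cm for a cork of length x (cm)."""
    m = M_NORMAL - 1.67 * x
    d = (M_NORMAL * D_NORMAL - 1.67 * x * (68.2 - x / 2)) / m
    j = (J_CM_NORMAL
         - M_NORMAL * (D_NORMAL - d) ** 2
         - 1.67 * x * x ** 2 / 12
         - 1.67 * x * (68.2 - D_NORMAL - 0.5 * x) ** 2)
    return m, d, j

for x in (0, 5, 10, 15, 20):
    m, d, j = corked(x)
    print(f"x={x:2d} cm: M={m:.2f} g, d={d:.2f} cm, J_cm={j:.0f} g*cm^2")
```

The printed values agree with Tables 6-8 to within the tables' rounding.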
540
+
541
+ # 4.3.4. Swing speed
542
+
543
+ Table 9
544
+
545
+ <table><tr><td>Length of cork</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>Swing speed</td><td>40.59</td><td>40.67</td><td>40.75</td><td>40.84</td><td>40.92</td></tr></table>
546
+
547
+ # 4.4. Solution and analysis
548
+
549
+ We substitute the parameters in our basic model with all the above calculated values, getting solutions as follows:
550
+
551
+ # 4.4.1. Influence on the location of the Sweet Spot
552
+
553
+ Table 10
554
+
555
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>b (cm)</td><td>49.99</td><td>49.89</td><td>49.81</td><td>49.76</td><td>49.73</td></tr></table>
556
+
557
+ ![](images/244ef96c8f36e29fe2d95207b3459ab2eb2552522ba471fc7b3de9c5b86e4635.jpg)
558
+ Figure 13
559
+
560
+ From Table 10 and Figure 13, we find that as the length of the cork increases, the location of the Sweet Spot moves slightly towards the pivot: the distance from the Sweet Spot to the pivot is $49.99\ \mathrm{cm}$ for an uncorked bat, versus $49.73\ \mathrm{cm}$ for a 20 cm cork, only $2.6\ \mathrm{mm}$ shorter.
561
+
562
+ # 4.4.2. Influence on the exit speed
563
+
564
+ Table 11
565
+
566
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>v2 (cm/s)</td><td>3915.8</td><td>3906.7</td><td>3889.2</td><td>3892.7</td><td>3887.8</td></tr></table>
567
+
568
+ ![](images/062521878abd8eaa8c75f0ea33ca8fcd6fa1ef9569ef989f944e35f3f5856122.jpg)
569
+ Figure 14
570
+
571
+ - Broadly, the longer the cork is, the lower the exit speed becomes.
+ - The exit speed of an uncorked bat is $3915.8\ \mathrm{cm/s}$, while that of a bat with a 20 cm cork is $3887.8\ \mathrm{cm/s}$, a decrease of $0.72\%$.
+ - The result matches our expectation: being corked, a bat gets lighter and its moment of inertia gets smaller (4.47%), which consequently reduces the energy available to transfer to the ball. That is why the batted ball leaves at a lower speed when the bat is corked.
574
+
575
+ # 4.4.3. Influence on the comfort index
576
+
577
+ Table 12
578
+
579
+ <table><tr><td>Length of cork (cm)</td><td>0</td><td>5</td><td>10</td><td>15</td><td>20</td></tr><tr><td>The most comfortable hitting point (cm)</td><td>53.25</td><td>53.07</td><td>52.97</td><td>52.94</td><td>52.96</td></tr></table>
582
+
583
+ ![](images/8e673e89cc2c71c09f753ce3fe2715f6b7e82f643d44c27a984a24aa79f5f3d8.jpg)
584
+ Figure 15
585
+
586
+ - The length from the most comfortable point to the pivot is 53.25 cm for an uncorked bat, while for a corked bat it is 52.96 cm, only 0.29 cm shorter.
+ - It suggests that, for the sake of comfort, a batter should hit at a point closer to his hands when the cork in the bat gets longer.
588
+
589
+ # 4.4.4. Influence on the Sweet Zone
590
+
591
+ - The Sweet Zone is the zone within which the value of the objective function is no less than 0.85.
592
+
593
+ Here, we discuss the influence that the length of the cork has on the span of the Sweet Zone.
594
+
595
+ When $x = 0, 5, 10, 15, 20\ \mathrm{cm}$, the Sweet Zone varies as shown in Figure 16 and Table 13:
598
+
599
+ ![](images/ac87f0e0983d6697036b3a257ad683355defcd08d7d152db54239054dd365024.jpg)
600
+ Figure 16 location of Sweet Zone vs. length of cork
601
+
602
+ Table 13
603
+
604
+ <table><tr><td>x (cm)</td><td>Lower limit (cm)</td><td>Upper limit (cm)</td><td>Span of the zone (cm)</td></tr><tr><td>0</td><td>46.09</td><td>53.87</td><td>7.78</td></tr><tr><td>5</td><td>46</td><td>53.75</td><td>7.75</td></tr><tr><td>10</td><td>45.95</td><td>53.66</td><td>7.71</td></tr><tr><td>15</td><td>45.9</td><td>53.59</td><td>7.69</td></tr><tr><td>20</td><td>45.88</td><td>53.56</td><td>7.68</td></tr></table>
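The span column of Table 13 is just the difference of the two limits; a quick check:

```python
# (lower, upper) limits of the Sweet Zone from Table 13, keyed by cork length (cm).
zones = {0: (46.09, 53.87), 5: (46.0, 53.75), 10: (45.95, 53.66),
         15: (45.9, 53.59), 20: (45.88, 53.56)}

for x, (lo, hi) in sorted(zones.items()):
    print(f"x={x:2d} cm: span = {hi - lo:.2f} cm")
```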
605
+
606
+ The location and the span of the Sweet Zone both stay stable as the length of the cork changes. Thus a corked bat is not superior to a normal one in terms of the Sweet Zone.
607
+
608
+ # 4.5. Why is the corked bat forbidden?
609
+
610
+ - According to the solution and analysis of our model, a corked bat does not seem to show better performance: the exit speed of a batted ball is slightly lower (about 0.7% for a 20 cm cork). Nevertheless, other factors should also be taken into consideration.
+ - The corked bat is easier to swing since it is lighter and its moment of inertia is smaller. Thus, the higher swing speed allows a batter more time to prepare and to judge the flight path and the speed of the ball, which results in better performance.
614
+
615
+ - The psychological factor should not be neglected. According to historical data, players using corked bats have performed impressively and even hit home runs with them. A corked bat is, to some extent, something on which a batter builds psychological dependence.
+ - Athletes should put more emphasis on improving physical fitness and practicing skills instead of unduly relying on sports equipment. That is one of the reasons why the corked bat is forbidden[11].
617
+
618
+ # 5. Aluminum vs. wood
619
+
620
+ # 5.1. Parameter determination
621
+
622
+ We simplify the shape of an aluminum bat as two coaxial hollow cylinders with different diameters, using the same basic length parameters as the wooden one[12] (shown in Figure 17).
623
+
624
+ ![](images/fb8a82b4b0791382a12203f93da6276b6e4cb08d0c11cf3f463cd8ecbe76f959.jpg)
625
+ Figure 17 simplified model for an aluminum bat
626
+
627
+ - From the literature, the mass of an aluminum bat is $M_{al-bat} = 861.63\ \mathrm{g}$.
+ - The density of aluminum: $\rho_{al} = 2.7\ \mathrm{g/cm^3}$.
629
+
630
+ From the expression of the mass of an aluminum bat:
631
+
632
+ $$
633
+ M_{al-bat} = \pi r^{2} L_{1} \rho_{al} - \pi (r-t)^{2} L_{1} \rho_{al} + \pi R^{2} L_{2} \rho_{al} - \pi (R-t)^{2} L_{2} \rho_{al}
634
+ $$
635
+
636
+ We calculate the wall thickness of the aluminum bat: $t = 0.31\ \mathrm{cm}$.
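Expanding the mass equation gives a quadratic in $t$: $\pi\rho_{al}(L_1+L_2)t^2 - 2\pi\rho_{al}(rL_1+RL_2)t + M_{al-bat} = 0$. The sketch below solves it, but since $r$, $R$, $L_1$ and $L_2$ are defined earlier in the paper and not restated here, the geometry below is a placeholder; it illustrates the method rather than reproducing $t = 0.31$ cm exactly:

```python
import math

RHO_AL = 2.7     # g/cm^3, density of aluminum
M_BAT = 861.63   # g, mass of the aluminum bat
# Placeholder geometry; the paper's actual r, R, L1, L2 are defined earlier.
r, R, L1, L2 = 1.5, 3.5, 42.0, 42.0   # cm

# pi*rho*(L1+L2)*t^2 - 2*pi*rho*(r*L1 + R*L2)*t + M = 0
a = math.pi * RHO_AL * (L1 + L2)
b = -2.0 * math.pi * RHO_AL * (r * L1 + R * L2)
t = (-b - math.sqrt(b * b - 4.0 * a * M_BAT)) / (2.0 * a)  # smaller root = thin wall

# Sanity check: the hollow-shell mass with this t reproduces M_BAT.
mass = math.pi * RHO_AL * ((r**2 - (r - t)**2) * L1 + (R**2 - (R - t)**2) * L2)
print(f"t = {t:.3f} cm, shell mass = {mass:.2f} g")
```

The smaller root is taken because the wall must be thinner than the handle radius.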
637
+ Length from the pivot to the center of mass:
638
+
639
+ $$
640
+ d_{al} = \frac{\pi \left[r^{2} - (r-t)^{2}\right] L_{1} \rho_{al} \times \frac{L_{1}}{2} + \pi \left[R^{2} - (R-t)^{2}\right] L_{2} \rho_{al} \times \left(\frac{L_{2}}{2} + L_{1}\right)}{\pi \left[r^{2} - (r-t)^{2}\right] L_{1} \rho_{al} + \pi \left[R^{2} - (R-t)^{2}\right] L_{2} \rho_{al}} - 16.8 = 37.26\ \mathrm{cm}
641
+ $$
642
+
643
+ Moment of inertia to the center of mass:
644
+
645
+ $$
646
+ \begin{array}{l} J_{al-cm} = \pi \left[r^{2} - (r-t)^{2}\right] \rho_{al} L_{1} \times \frac{L_{1}^{2}}{12} + \pi \left[r^{2} - (r-t)^{2}\right] \rho_{al} L_{1} \times \left(d_{al} + 16.8 - \frac{L_{1}}{2}\right)^{2} \\ \quad + \pi \left[R^{2} - (R-t)^{2}\right] \rho_{al} L_{2} \times \frac{L_{2}^{2}}{12} + \pi \left[R^{2} - (R-t)^{2}\right] \rho_{al} L_{2} \times \left(L - 16.8 - d_{al} - \frac{L_{2}}{2}\right)^{2} \\ = 476940 \end{array}
647
+ $$
648
+
649
+ Moment of inertia to the pivot: $J_{al - o} = J_{al - cm} + M_{al - bat}d_{al}^2 = 1673400$
650
+
651
+ From the literature[9], the average value of $e$ (coefficient of restitution) for an aluminum bat equals 0.4, which is 1.6 times that for a wooden bat (0.25).
652
+
653
+ Therefore, the value of $e$ can be expressed as:
654
+
655
+ $$
656
+ e = -0.847 + 0.181 (68.2 - b) - 0.0045 (68.2 - b)^{2}
657
+ $$
658
+
659
+ All the other parameters are kept the same as for the wooden bat.
660
+
661
+ # 5.2. Solution and analysis:
662
+
663
+ We substitute the parameters in our basic model with all the above calculated values, getting solutions shown in Figure 18 and Table 14.
664
+
665
+ ![](images/bc1f30f10199f1ca84a27983120fae74423bbc0c520ac78cd73dea4d1d5990a0.jpg)
666
+ Figure 18
667
+
668
+ Table 14
669
+
670
+ <table><tr><td>b (cm)</td><td>EBatter</td><td>EBall</td><td>v2(cm/s)</td><td>f</td></tr><tr><td>52.11</td><td>1</td><td>0.7145</td><td>3632.6</td><td>0.8572</td></tr><tr><td>45.64</td><td>0.825</td><td>1</td><td>4365.0</td><td>0.9125</td></tr><tr><td>48.05</td><td>0.931</td><td>0.9584</td><td>4258.2</td><td>0.9447</td></tr></table>
671
+
672
+ Basic conclusions based on the solution:
673
+
674
+ - The most comfortable point is $52.11\ \mathrm{cm}$ from the pivot, about $1\ \mathrm{cm}$ closer to the pivot than for a wooden bat. The exit speed at this point is $3632.6\ \mathrm{cm/s}$, similar to that of the wooden bat.
+ - The highest exit velocity point is $45.64\ \mathrm{cm}$ from the pivot, about $3\ \mathrm{cm}$ closer to the pivot than for a wooden bat. The exit speed at this point is $4365.0\ \mathrm{cm/s}$, which is $11\%$ higher than that of a wooden bat.
+ - The Sweet Spot is $48.05\ \mathrm{cm}$ from the pivot, about $2\ \mathrm{cm}$ closer to the pivot than for a wooden bat. The exit speed at this point is $4258.2\ \mathrm{cm/s}$, which is $9\%$ higher than that of a wooden bat.
+ - To sum up, all of these optimal points are closer to the pivot than those of a wooden bat, and the exit velocity increases significantly.
678
+
679
+ # 5.3. Performance comparison between wooden bat and aluminum bat
682
+
683
+ # 5.3.1. Exit velocity
684
+
685
+ The highest exit speed of a wooden bat is $3938.7\ \mathrm{cm/s}$, which can easily be reached with an aluminum bat: the exit speed of an aluminum bat is no less than $3938.7\ \mathrm{cm/s}$ as long as the ball hits the bat within the zone $b \in [41.00, 50.61]$ (a span of about $9.6\ \mathrm{cm}$). Clearly, the exit velocity for an aluminum bat is much higher than that for a wooden bat.
690
+
691
+ # 5.3.2. Sweet Zone
692
+
693
+ Consider the span of the Sweet Zone of the aluminum bat and the wooden bat, shown in Figure 19 and Figure 20:
+
+ ![](images/72caf8e20f4063141e429fdd3eb0abe798569470053d0c7326c725d2006bd7f8.jpg)
+ Figure 19 property of wooden bat
698
+
699
+ ![](images/7762e71be9ea056a2d06864907c3688f49445d646bc523e6377197b5d9ffe15e.jpg)
700
+ Figure 20 property of aluminum bat
701
+
702
+ - The Sweet Zone of an aluminum bat is [43.95, 52.77], with a span of $8.82\ \mathrm{cm}$.
+ - The Sweet Zone of a wooden bat is [46.09, 53.87], with a span of $7.78\ \mathrm{cm}$.
+ - Clearly, the span of the Sweet Zone for an aluminum bat is about $1\ \mathrm{cm}$ longer than that for a wooden bat, which means an aluminum bat is easier to control, since it gives the batter a higher probability of obtaining the expected exit velocity.
705
+
706
+ # 5.3.3. The sensitivity analysis of the wall thickness (t)
707
+
708
+ Changing the value of $t$ by $\pm 0.01\ \mathrm{cm}$ and $\pm 0.02\ \mathrm{cm}$, we get the following results (see Figure 21 and Table 15):
+
+ Table 15
713
+
714
+ <table><tr><td>Change of t (cm)</td><td>Location of Sweet Spot (cm)</td><td>Exit velocity of ball (cm/s)</td><td>Change of the exit velocity (ratio)</td></tr><tr><td>-0.02</td><td>48.02</td><td>4307.4</td><td>0.012</td></tr><tr><td>-0.01</td><td>48.03</td><td>4283.2</td><td>0.006</td></tr><tr><td>0</td><td>48.05</td><td>4258.2</td><td>0</td></tr><tr><td>0.01</td><td>48.06</td><td>4234.3</td><td>-0.006</td></tr><tr><td>0.02</td><td>48.07</td><td>4210.4</td><td>-0.011</td></tr></table>
717
+
718
+ ![](images/7c96ebf06df5c0f4268a7899fecea2e9e7c27f697c7d8feeea134804f2c7eb6c.jpg)
719
+ Figure 21
720
+
721
+ Basic conclusions we get:
722
+
723
+ - The location of the Sweet Spot stays stable as $t$ changes.
+ - The exit velocity decreases as $t$ increases.
+ - Hence, producers of aluminum bats should try to make the wall thinner in order to raise the exit speed of the bat. However, there is a lower limit on the wall thickness: if the wall is too thin, the material bends easily and the coefficient of restitution becomes very low.
726
+
727
+ # 5.4. Why is the aluminum bat banned?
728
+
729
+ Though the aluminum bat has much superiority over a wooden one, it is usually banned in games. The reasons are analyzed as follows:
730
+
731
+ - The exit speed reaches as high as $4365\ \mathrm{cm/s}$ when an aluminum bat is used. Assuming the ball weighs $145\ \mathrm{g}$ and the contact time is $0.1\ \mathrm{s}$, an impact force of $63.3\ \mathrm{N}$ is applied to the hands of the batter, threatening the batter's safety.
+ - The excessively fast batted ball is also a threat to the safety of the other athletes in a baseball game, which runs counter to the spirit of sports.
733
+
734
+ - Since the span of the Sweet Zone for an aluminum bat is wider than that for a wooden one, athletes will more or less build psychological dependence on it, which might lead to a trend of undue dependence on sports equipment.
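The 63.3 N figure in the first bullet is an impulse-momentum estimate: the ball's exit momentum divided by the assumed contact time. (The 0.1 s contact time is the paper's assumption; measured bat-ball contact times are closer to a millisecond, which would make the average force far larger.)

```python
m_ball = 0.145    # kg
v_exit = 43.65    # m/s (4365 cm/s, from Table 14)
t_contact = 0.1   # s, the contact time assumed in the text

force = m_ball * v_exit / t_contact  # average force over the contact, N
print(f"average impact force = {force:.1f} N")
```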
735
+
736
+ # 6. Superiority and weakness
737
+
738
+ # 6.1. Superiorities of our model
739
+
740
+ - We take into consideration both the exit velocity of the batted ball and the comfort of batters. The solution of our model thus pursues a high exit speed while keeping the harm to the athlete's body relatively low.
+ - The results of the model match experience well, which supports the rationality and correctness of our model.
+ - Our model has universal adaptability, since different locations of the Sweet Spot are given for different batters who place different weights on comfort and exit velocity.
743
+
744
+ # 7. Further discussion
745
+
746
+ - In our model, the coefficient of restitution is determined only by the hitting position. Solutions of our model could be more accurate if more factors were taken into account when determining its value.
+ - We could take an accurate vibration model into consideration: the bat can be simplified as a free-free vibrating elastic beam. Then the location of the Sweet Spot could be determined by solving the vibration distribution function according to the theory of vibration dynamics.
+ - We could use the finite-element software Ansys to conduct simulations, since the numerical solution would be too complex to obtain by hand.
749
+
750
+ # 7.1. Weaknesses of our model
751
+
752
+ - We do not directly consider the effect of vibration, but it has been taken into account, albeit roughly, in $e$ (the coefficient of restitution). A more accurate solution could be obtained by experimental methods or dynamical simulation.
+ - We do not take into account the influence of the rotation of the ball.
754
+
755
+ # 8. Suggestions for batters
756
+
757
+ Based on the analysis of our model, we provide reasonable suggestions to batters from various aspects:
758
+
759
+ - The heavier a bat is, the higher the exit velocity it produces. In order to obtain a higher exit velocity and hit the ball a longer distance, a batter should choose the heaviest bat within his own capability, since a heavier bat obviously places higher demands on muscle strength and endurance.
+ - Practice to improve the swing speed in order to obtain a higher exit velocity of the ball.
+ - A corked bat allows a batter more time to judge the flight path and speed of the ball, which results in better performance. It is a good choice for new learners.
+ - The span of the Sweet Zone for an aluminum bat is wider, making it easier to control. It is also a good choice for new learners.
+ - An excessively high exit speed of the ball is a threat to the safety of athletes in a baseball game, which athletes should keep in mind.
764
+
765
+ # 9. References
766
+
767
+ [1] http://paws.kettering.edu/~drussell/bats-new/bat-moi.html
768
+ [2] http://memo.cgu.edu.tw/yun-ju/CGUWeb/PhyChiu/H201Oscillation/SweetSpotProblem.htm
769
+ [3] http://paws.kettering.edu/~drussell/bats-new/nonlinear-ball.html
770
+ [4] http://paws.kettering.edu/~drussell/bats-new/alumwood.html
771
+ [5] http://zhidao.baidu.com/question/61026308.html
772
+ [6] http://paws.kettering.edu/~drussell/bats-new/ban-safety.html
773
+ [7] http://zhidao.baidu.com/question/32333384.html
774
+ [8] http://paws.kettering.edu/~drussell/bats-new/batw8.html
775
+ [9] http://paws.kettering.edu/~drussell/bats-new/corkedbat.html
776
+ [10] http://paws.kettering.edu/~drussell/bats-new/batw8.html
+ [11] http://en.wikipedia.org/wiki/Corked_bat
778
+ [12] http://paws.kettering.edu/~drussell/bats-new/alumwood.html
779
+ [13] Wu Junchang, Li Minghui, Xiang Ziyuan. Comparison Analysis Research between Striking Results by an Aluminum Bat and a Wooden Bat. Learned Journal of Taibei, 2002(29), 252-271.
MCM/2010/A/7586/7586.md ADDED
@@ -0,0 +1,284 @@
1
+ # The Sweet Spot: A Wave Model of Baseball Bats
2
+
3
+ Team #7586
4
+
5
+ February 22, 2010
6
+
7
+ # ABSTRACT
8
+
9
+ In this paper, we determine the sweet spot on a baseball bat. We capture the essential physics of the ball-bat impact by taking the ball to be a lossy spring and the bat to be an Euler-Bernoulli beam. In order to impart some intuition about the model we begin by presenting a rigid body model. Next, we proceed to use our full model to reconcile various correct and incorrect claims about the sweet spot found in the literature. Finally, we discuss the sweet spot and performance of corked and aluminum bats, with a particular emphasis on hoop modes.
10
+
11
+ # 1. INTRODUCTION
12
+
13
+ The status of baseball as the nation's pastime makes understanding the collision of an irregularly shaped slab of wood with a ball a compelling and interesting problem. The physics of baseball encompasses much more than just the batting of the ball. The pitching of the ball and the flight of the ball are examples of equally important events susceptible to physics modeling. Even limiting one's scope to the batting of the ball yields a problem of many dimensions. A hitter might expect a model of the baseball-bat collision to yield insight into how the bat breaks, how the bat imparts spin on the ball, how best to swing the bat, and so on.<sup>1</sup> We model only the sweet spot.
14
+
15
+ We have encountered at least two notions of what the sweet spot should be defined as. The first comes from what we suspect explains the term sweet spot. This definition equates the sweet spot with the impact location on the bat which minimizes the discomfort to the hands. We can tentatively give the second definition as the impact location which maximizes the outgoing velocity of the ball. Some players seem to implicitly identify these two definitions as the same. We do not identify the two and focus exclusively on the second definition, which we now define more carefully.
16
+
17
+ Given a baseball bat and baseball, the final velocity of the ball is determined by the initial velocity and rotation of the ball, the initial velocity and rotation of the bat, the relative position and orientation of the bat and ball, and the force over time that the hitter's hands may apply on the handle. We assume that the ball is not rotating and that the ball's velocity at impact is perpendicular to the length of the bat. We assume that everything occurs in a single plane, and we will argue that the hands' interaction is actually negligible. In the frame of reference of the center of mass of the bat, the initial conditions are completely specified by the angular velocity of the bat, the velocity of the ball, and the position of impact along the bat.
18
+
19
+ Fixing the properties of the bat, ball, initial velocity of the ball, and angular velocity, we can now define the sweet spot as the location of the point of impact which maximizes the final ball velocity. In particular, the location of the sweet spot depends not just on the bat alone, but also on the pitch and swing. In order to determine the actual location of the sweet spot, we model the physical effects that we find to be most important and allow that model to make a prediction. Here we briefly explain the features of a few models in the literature.
20
+
21
+ The simplest model predicts the sweet spot to be at the center of percussion. This model assumes that an impact location that minimizes discomfort to the hand also maximizes power transferred to the ball. The model assumes the ball to be a rigid body for which there are conjugate points at which an impact at one will exactly balance the angular recoil and linear recoil at the other. By gripping at one conjugate point and impacting at the other point, the center of percussion, the theory predicted that the hands would experience minimal shock and therefore the ball would exit with high velocity. This point depends heavily on the moment of inertia and the location of the hands. We cannot accept this model because it both erroneously equates the two definitions of sweet spot and furthermore assumes incorrectly that the bat is a rigid body.
22
+
23
+ Another model predicts the sweet spot to be between nodes of the two lowest natural frequencies of the bat. Given a free bat allowed to oscillate, its oscillations can be decomposed into fundamental modes of various frequencies. The intuition is much like that of xylophones or other musical instruments. Different geometries and materials have different natural frequencies at which they will oscillate. The resulting wave shapes suggest how one would go about exciting those modes. Plucking a string at the node of a vibrational mode will not excite that mode. It is ambiguous which definition of sweet spot this model uses. Using the first definition, it focuses on the uncomfortable excitation of vibrational modes. Hopefully, by choosing the impact location to be near nodes of important frequencies, a minimum of uncomfortable vibrations will be set up. Using the second definition, the worry is that energy sent into vibrations of the bat will be lost. This model assumes, then, that the most important energies to model are those lost to vibration.
24
+
25
+ This model raises many questions. Which frequencies actually do get excited and why? The Fourier transform of an impulse will in general contain infinitely many modes. Furthermore, wood is a viscoelastic material which quickly dissipates its energies. Is the notion of an oscillating bat even relevant to modeling such a bat? How valid is the condition that the bat is free? Ought the system be coupled with hands on the handle, or the arm's bone structure, or possibly even the ball? What types of oscillations are relevant? A cylindrical structure can support numerous different types of modes beyond the transverse modes usually assumed by this model.<sup>4</sup>
26
+
27
+ We discuss these models not only to point out the confusion (advantageous to marketers of fancy bats) surrounding the sweet spot but also to highlight two basic issues in modeling the sweet spot. Following the center of percussion line of reasoning, how do we model the recoil of the bat? Following the vibrational nodes line of reasoning, how do we model the vibrations of the bat? In the general theory of impact mechanics, these two effects are the main ones, assuming that the bat does not break or deform permanently. We will explain in this paper notable approaches found in the literature by Brody, Cross, and Nathan. Very briefly, Brody ignores vibrations, Cross ignores bat rotation but studies the propagation of the impulse coupled with the ball, and Nathan attempts to emphasize the vibrational modes. Our approach reconciles the tension between these different approaches while emphasizing the crucial role played by the time-scale of the collision.
28
+
29
+ Our main goal is to understand the sweet spot. A secondary goal is to understand the differences between the sweet spots of different bat types. Although marketers of different types of bats often emphasize the importance of the sweet spots, there are often other relevant factors involved: ease of swing, tendency of the bat to break, psychological effects, and so on. We once again emphasize that the problem we investigate is a restricted one, for which we arbitrarily hold other important factors constant. For example, we will argue that it doesn't matter to the collision whether or not the batter's hands are gripping the handle firmly or whether the batter follows through correctly. This does not have any bearing on the technique required to actually swing the bat or how the bat's properties affect this.
30
+
31
+ Our paper is organized as follows. First, we present the Brody rigid body model illuminating the recoil effects of impact. Next we present a full computational model based on wave propagation in a Euler-Bernoulli beam coupled with the ball modeled as a lossy spring. We compare this model with other models found in the literature and explore the local nature of impact, the interaction of recoil and vibrations, and robustness to parameter changes. We adjust the parameters of the model to comment on the sweet spots of corked bats and aluminum bats. Finally, we investigate the effect of hoop frequencies on aluminum bats.
32
+
33
+ # 2. A SIMPLE EXAMPLE
34
+
35
+ We begin by only considering the rigid recoil effects of the bat-ball collision, much in the same way as Brody.<sup>2</sup> Consider a bat swung at an incoming baseball. For simplicity of this example, we assume that the bat is perfectly rigid. Because the collision happens on such a short time scale (around 1 ms), we will treat the bat as a free body. That is to say, we are not concerned with the batter's hands exerting some force on the bat which may be transferred into the ball.
36
+
37
+ ![](images/a6bc26659344ad07480fd78851792ca7201006ac7becf7d3cc78c79a69c5c3a7.jpg)
38
+ Figure 1: Arrows point in the positive direction for the corresponding parameter.
39
+
40
+ The bat has mass $M$ and moment of inertia $I$ about its center of mass. From the reference frame of the center of mass of the baseball bat before the collision, the ball has velocity $v_{i}$ in the positive $x$ -direction while the bat has an angular velocity of $\omega_{i}$ . Note that in our setup, $v_{i}$ and $\omega_{i}$ will have opposite signs when the batter is swinging at the ball as in Figure 1. The ball collides with the bat at a distance $l$ from the center of mass of the bat. We assume that the collision is head-on and view the event such that all the $y$ -component velocities are zero at the moment of the collision. After the collision, the ball will have a final velocity $v_{f}$ and the bat will have a final linear velocity of $V_{f}$ and an angular velocity of $\omega_{f}$ at the center of mass.
41
+
42
+ When the ball collides with the bat, it briefly compresses and decompresses, converting kinetic energy to potential energy and back. However, some energy is lost in the process, i.e. the collision is inelastic. The ratio of the relative speeds of the bat and the ball before and after the collision is known as the coefficient of restitution, designated by $e$ . In other words $e = 0$ represents a perfectly inelastic collision, and $e = 1$ means the collision is perfectly elastic. In this basic model, we make two simplifying assumptions: $e$ is constant along the length of the bat, and $e$ is constant for all $v_{i}$ .
43
+
44
+ Given our pre-collision conditions, we can write conservation of linear momentum:
45
+
46
+ $$
47
+ M V _ {f} = m (v _ {i} - v _ {f})
48
+ $$
49
+
50
+ Conservation of angular momentum:
51
+
52
+ $$
53
+ I (\omega_ {f} - \omega_ {i}) = m l (v _ {i} - v _ {f})
54
+ $$
55
+
56
+ Definition of the coefficient of restitution:
57
+
58
+ $$
59
+ e \left(v_{i} - \omega_{i} l\right) = -v_{f} + V_{f} + \omega_{f} l
60
+ $$
61
+
62
+ Solving for $v_{f}$ gives
63
+
64
+ $$
65
+ v _ {f} = \frac {- v _ {i} (e - \frac {m}{M ^ {*}}) + \omega_ {i} l (1 + e)}{1 + \frac {m}{M ^ {*}}}
66
+ $$
67
+
68
+ Where $M^{*} = \frac{M}{1 + \frac{Ml^{2}}{I}}$ is the effective mass of the bat.
69
+
70
+ For calibration purposes, we use the following numbers, which are typical of a regulation bat connecting with a fastball in Major League Baseball. The results are plotted below.
71
+
72
+ <table><tr><td>m</td><td>0.145 kg</td><td>5.1 oz</td></tr><tr><td>M</td><td>0.83 kg</td><td>29 oz</td></tr><tr><td>L</td><td>0.84 m</td><td>33 in</td></tr><tr><td>I</td><td>0.039 kg·m²</td><td></td></tr><tr><td>vi</td><td>67 m/s</td><td>150 mph</td></tr><tr><td>ωi</td><td>-60 rad/s</td><td></td></tr><tr><td>e</td><td>0.55</td><td></td></tr></table>
73
+
74
+ ![](images/c9b429695a0402a8eec78a7af63c15996c090f8e8e17c3fa0c66339ca3bf8069.jpg)
75
+ Figure 2: $v_{f}$ as a function of $l$ .
76
+
77
+ The maximum exit velocity in our example is around $26.7\mathrm{m / s}$ , and the sweet spot is $13~\mathrm{cm}$ from the center of mass. Missing the sweet spot by up to $5\mathrm{cm}$ will only result in a $1\mathrm{m / s}$ difference from the maximum speed, implying a relatively wide sweet spot.
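These numbers can be reproduced directly from the closed-form expression for $v_f$ above; a short check (our own script, not part of the original analysis), scanning the impact point $l$ with the calibration values from the table:

```python
import numpy as np

# Calibration values from the table above
m, M, I, e = 0.145, 0.83, 0.039, 0.55     # ball mass, bat mass, bat MOI, COR
v_i, omega_i = 67.0, -60.0                # ball velocity, bat angular velocity

l = np.linspace(0.0, 0.35, 701)           # impact distance from the bat's cm
m_over_Mstar = m * (1.0 / M + l**2 / I)   # m/M*, with M* = M/(1 + M l^2/I)
v_f = (-v_i * (e - m_over_Mstar) + omega_i * l * (1 + e)) / (1 + m_over_Mstar)

i = np.argmax(np.abs(v_f))                # exit speed is |v_f|: the ball reverses
print(l[i], np.abs(v_f)[i])               # ~0.13 m and ~26.7 m/s
```

The scan recovers the sweet spot at about 13 cm from the center of mass and a maximum exit speed of about 26.7 m/s, matching the figure.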
78
+
79
+ From this example, we can already see that the sweet spot is determined by a multitude of factors, including the length, mass, and shape of the baseball bat, the mass of the baseball, and the coefficient of restitution between the bat and ball. Furthermore, the sweet spot is not even uniquely determined by the bat and baseball. It also depends on the incoming baseball speed and the batter's swing speed. All these factors need to be taken into account to maximize the baseball's exit speed.
80
+
81
+ The figure also shows intuitively why the sweet spot is located somewhere between the center of mass and the end of the barrel. As the point of collision moves outwards along the bat, the effective mass of the bat goes down, so a greater fraction of the initial kinetic energy goes into the bat's recoil and rotation, reducing the exit speed. At the same time, the rotation of the bat means that the barrel is moving faster than the center of mass (or handle), increasing the relative impact speed. These two effects work in opposite directions to give a unique sweet spot that's not at either endpoint.
84
+
85
+ However, this model tells only part of the story. Indeed some of our starting assumptions contradict each other. We treated the bat as a free body because the collision time was so short. In essence, during the 1 ms of the collision, the ball will only "see" the local geometry of the bat. The ball will not "see" the batter's hands on the handle of the bat. On the other hand, we assumed the bat was perfectly rigid. However, this means that the ball "sees" the entire bat. These assumptions clearly cannot both be true. We also assumed that $e$ was constant along the length of the bat and for different collision velocities. Experimental evidence<sup>1</sup> suggests that neither issue can be ignored for an accurate prediction of the location of the sweet spot. We will need a more sophisticated model to address all these shortcomings of our simple example and better understand what's going on.
86
+
87
+ # 3. OUR MODEL
88
+
89
+ We have attempted to combine the best features of previous models into our model. We draw from Brody's rigid body model as described in the previous section for intuition, but we draw our mathematics most directly from Cross's work. One could accurately describe our work as an adaptation of Cross's work to actual baseball bats. Nathan also attempted such an adaptation but he was misled by incorrect intuition about the role of vibrations. We describe Nathan's approach and error as a way to explain Cross's work and to motivate our work. Next, we present the equations of our model and discuss the main features of its assumptions and methodology.
90
+
91
+ # 3.1 Previous Models
92
+
93
+ In the previous section we encountered Brody's rigid body model, which successfully predicts the existence of a sweet spot not at the end of the bat. That model suffers from the fact that the bat is not a rigid body and experiences vibrations. One way to account for those vibrations is to model the bat as a flexible object. Beam theories of varying degrees of accuracy and complication can be used to model the flexible bat. Van Zandt was the first to carry out such an analysis, modeling the bat as a Timoshenko beam. Timoshenko beam theory is a fourth order theory taking into account both shear forces and tensile stresses. The equations are complicated and we will not need them, so we will not record them here. His model assumes the ball to be uncoupled from the beam and simply takes the impulse of the ball as a given. The resulting vibrations of the bat are used to more accurately predict the velocity of the beam at the impact point (by summing the Brody velocity with the velocity of the displacement at the impact point due to vibrations) and therefore predict a more accurate exit velocity of the ball, still from the equations of the coefficient of restitution.<sup>7</sup>
94
+
95
+ The next model we draw on is Cross's model of the impact of balls on aluminum beams. His model used the less elaborate Euler-Bernoulli equations to model the propagation of waves. In addition, he provided equations to model the dynamic coupling of the ball to the beam during the impact. After discretizing the beam spatially, he assumed the ball basically acts as a lossy spring coupled to the single segment at the region of impact. Cross's work is motivated by both tennis rackets and baseball bats. One important difference between those cases is the time-scale of impact. The baseball bat's collision only lasts approximately one millisecond, during which time the propagation speed of the wave is very important. In this local view of the impact, the importance of the baseball's coupling with the bat is increased. We will present both the Euler-Bernoulli equations and the ball's equations in a later section.
96
+
97
+ Before continuing let us discuss the main results of the Cross model. Cross argues that the actual vibrational modes and node points are largely irrelevant since the interaction is localized on the bat. The boundary conditions only matter if vibrations reflect off the boundaries, so an impact close enough to the barrel end of the bat will be affected by the boundary there. In particular, a pulse reflecting off a free boundary will return with the same sign (deflected away from the ball, decreasing the force on the ball, decreasing the exit velocity), but a pulse reflecting off a fixed boundary will return with the opposite sign (deflected towards the ball, pushing it back, increasing the exit velocity). Away from the boundary, we expect the exit velocity to be uniform along a non-rotating bat. Cross's model predicts all of these effects,
98
+
99
+ and he has experimentally verified them. In our model we expect to see similar phenomenology in baseball bats. We also expect the narrowing of the barrel near the handle to act somewhat like a boundary.
100
+
101
+ Nathan's model also attempted to combine the best features of Van Zandt and Cross. His theory used the full Timoshenko theory for the beam and the Cross model for the ball. He even intuitively acknowledged the local nature of the impact. So where do we diverge from him? His error stems from an overemphasis on trying to separate out the ball's interaction with each separate vibrational mode.
102
+
103
+ The first sign of inconsistency comes when he uses the "orthogonality of the eigenstates" to determine how much a given impulse excites each mode. The eigenstates are not orthogonal. Many theories yield symmetric matrices that can be diagonalized to give orthogonal eigenstates, but Timoshenko's theory does not, due to the inclusion of odd-order derivatives in its equations. His story would play out beautifully if only the eigenstates were actually orthogonal, but we have numerically calculated the eigenstates, and they are not even approximately orthogonal. He uses the orthogonality to draw important conclusions. The first is that the locations of the nodes of the vibrational modes are important. The second is that high frequency effects can be completely ignored. We disagree with both of these.
104
+
105
+ The correct derivation starts with the following equation of motion (with asymmetric $H$ ):
106
+
107
+ $$
108
+ \vec {y} ^ {\prime \prime} (t) = \mathbf {H} \vec {y} (t) + \vec {F} (t)
109
+ $$
110
+
111
+ We consider solutions of the form $\vec{y}(t) = \sum_n a_n(t)\vec{\phi}_n$ where $\vec{\phi}_n$ is an eigenmode, $\mathbf{H}\vec{\phi}_n = -\omega_n^2\vec{\phi}_n$ . We also let $\Phi_{nk}$ denote the $n$ -th component of the $k$ -th eigenmode. Then we write the equation of motion:
112
+
113
+ $$
114
+ a _ {n} ^ {\prime \prime} (t) \vec {\phi} _ {n} + \omega_ {n} ^ {2} a _ {n} (t) \vec {\phi} _ {n} = F _ {n} = (\Phi_ {n k} ^ {- 1} F _ {k}) \vec {\phi} _ {n}
115
+ $$
116
+
117
+ $$
118
+ a _ {n} ^ {\prime \prime} (t) + \omega_ {n} ^ {2} a _ {n} (t) = \Phi_ {n k} ^ {- 1} F _ {k}
119
+ $$
120
+
121
+ Nathan's paper simply uses $\Phi_{kn}F_k$ times some normalization constant on the right hand side.
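The distinction is easy to demonstrate numerically. A toy non-symmetric matrix (standing in for the discretized Timoshenko operator; the matrix itself is just an illustration) shows why the modal amplitudes require $\Phi^{-1}$ rather than $\Phi^{T}$:

```python
import numpy as np

# A 2x2 non-symmetric "H" with real spectrum (illustration only)
H = np.array([[2.0, 1.0],
              [0.0, 1.0]])
w, Phi = np.linalg.eig(H)            # columns of Phi are the eigenmodes

print(np.allclose(Phi.T @ Phi, np.eye(2)))   # False: modes are not orthogonal

F = np.array([1.0, 1.0])             # an impulse to expand in the modes
a = np.linalg.solve(Phi, F)          # correct amplitudes: Phi^{-1} F
print(np.allclose(Phi @ a, F))       # True: the expansion reconstructs F
print(np.allclose(Phi.T @ F, a))     # False: the transpose projection is wrong
```

For a symmetric matrix the two projections would coincide; here the transpose projection silently misallocates the impulse among the modes.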
122
+
123
+ At first glance, this seems like a minor technical detail, but the physics here is important. We can calculate that the $\Phi_{nk}^{-1}F_k$ terms stay fairly large even for high values of $n$ corresponding to the high frequency modes ( $k$ is simply the index of the impact location). This means there are significant high frequency components, at least at first. In fact the high frequency modes are necessary for the impulse to propagate slowly as a wavepacket. In Nathan's model, only the lowest standing modes are excited, so the entire bat starts vibrating as soon as the ball hits. This contradicts his earlier belief (one that we agree with): the collision is over so quickly that the ball only "sees" part of the bat (the collision is local). Nathan's paper also claims that the sweet spot is related to the nodes of the lowest mode. This also contradicts locality; the location of the lowest order nodes depends on the geometry of the entire bat, including the boundary conditions at the handle.
124
+
125
+ While the inconsistencies in the Nathan model may well cancel out, we prefer to build our model on a more rigorous footing. For simplicity, we will use the Euler-Bernoulli equations rather than the full Timoshenko equations. The difference is that the Euler-Bernoulli equations ignore shear forces. This should be acceptable; Nathan points out that his model is largely insensitive to the shear modulus. We will also solve the differential equations directly after discretizing in space rather than decomposing into modes. In these ways we are following the work of Cross.<sup>6</sup>
126
+
127
+ On the other hand our model extends Cross's work in several key ways. First, we will examine parameters much closer to those relevant to baseball (Cross's model focused on tennis). Cross's models involved an aluminum beam of width $0.6\mathrm{cm}$ being hit with a ball of $42\mathrm{g}$ at around $1\mathrm{m / s}$ . In our case, we will have an aluminum or wood bat of radius approximately $3\mathrm{cm}$ being hit with a ball of $145\mathrm{g}$ traveling at $40\mathrm{m / s}$ .
128
+
129
+ Second, we will allow for varying cross sectional area, an important feature of a real baseball bat. Third, we will allow the bat to have some initial angular velocity. This will let us scrutinize the rigid body model prediction that higher angular velocities lead to the maximum power point moving further up the barrel.
130
+
131
+ To reiterate, the main features of our model are an emphasis on ball coupling with the bat, finite speed of wave propagation in short time-scale, and adaptation to realistic bats. This is a natural outgrowth of the approaches found in the literature.
132
+
133
+ # 3.2 Mathematics of Our Model
134
+
135
+ Our equations are a discretized version of the Euler-Bernoulli equations:
136
+
137
+ $$
138
+ \rho A \frac{\partial^{2} y(z,t)}{\partial t^{2}} = F(z,t) - \frac{\partial^{2}}{\partial z^{2}} \left(Y I \frac{\partial^{2} y(z,t)}{\partial z^{2}}\right)
139
+ $$
140
+
141
+ In the above equation, $\rho$ is the mass density, $A$ is the cross-sectional area, $y(z,t)$ is the displacement, $F(z,t)$ is the external force per unit length (in our case applied by the ball), $Y$ is the material's Young's modulus (a constant), and $I$ is the second moment of area $(\pi R^4 /4$ for a solid circular cross-section). We will discretize $z$ in steps of $\Delta$ . The only force will be from the ball, which applies a force in the negative direction to the $k$ -th segment. Our discretized equation is:
142
+
143
+ $$
144
+ \rho A \Delta \frac {d ^ {2} y _ {i}}{d t ^ {2}} = - \delta_ {i k} F (u (t), u ^ {\prime} (t)) - \frac {Y}{\Delta^ {3}} \left(I _ {i - 1} (y _ {i - 2} - 2 y _ {i - 1} + y _ {i}) - 2 I _ {i} (y _ {i - 1} - 2 y _ {i} + y _ {i + 1}) + I _ {i + 1} (y _ {i} - 2 y _ {i + 1} + y _ {i + 2})\right)
145
+ $$
146
+
147
+ Our dynamical variables are $y_{1}$ through $y_{N}$ . For a fixed left end we pretend $y_{-1} = y_0 = 0$ . For a free left end we pretend $y_{1} - y_{0} = y_{0} - y_{-1} = y_{-1} - y_{-2}$ . The conditions on the right end are analogous. These are the same conditions Cross uses.
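The bending term and the free-end ghost points can be sketched concretely as follows (our own illustration, not the original code; the function and variable names, and the uniform test grid, are ours):

```python
import numpy as np

def bending_accel(y, I, Y, rho_A, dz):
    """Acceleration of each bat segment from the discretized bending term,
    with free ends imposed through linearly extrapolated ghost points."""
    n = len(y)
    yp = np.empty(n + 4)
    yp[2:-2] = y
    yp[1] = 2 * y[0] - y[1]           # free left end: equal first differences
    yp[0] = 3 * y[0] - 2 * y[1]
    yp[-2] = 2 * y[-1] - y[-2]        # free right end, analogously
    yp[-1] = 3 * y[-1] - 2 * y[-2]
    Ip = np.concatenate(([I[0]], I, [I[-1]]))        # extend I flatly past the ends
    curv = Ip * (yp[:-2] - 2 * yp[1:-1] + yp[2:])    # I_i (y_{i-1} - 2y_i + y_{i+1})
    fourth = curv[:-2] - 2 * curv[1:-1] + curv[2:]   # outer second difference
    return -(Y / (rho_A * dz**4)) * fourth

# sanity check: a rigid translation or rotation feels no bending force
y = np.linspace(0.0, 1.0, 50)
print(np.allclose(bending_accel(y, np.ones(50), 1.0, 1.0, 1.0), 0.0))  # True
```

A localized bump produces a restoring (negative) acceleration at its peak, as the stencil's central coefficient requires.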
148
+
149
+ Finally, we have an additional variable for the ball's position (relative to some constant) $w(t)$ . Initially $w(t)$ is positive and $w'(t)$ is negative, so the ball approaches from the positive side moving in the negative direction. Let $u(t) = w(t) - y_k(t)$ . This variable will represent the compression of the ball, and we will replace $F(t)$ with $F(u(t), u'(t))$ . At the moment of first contact $u(t) = 0$ and $u'(t) = -v_{ball}$ . The force between the ball and the bat will take the form of hysteresis curves such as the ones shown in Figure 3b. The higher curve will be taken when $u'(t) < 0$ (compression) and the lower curve when $u'(t) > 0$ (expansion). When $u(t) > 0$ the force is zero. The equation of motion for the ball can then be written:
150
+
151
+ $$
152
+ m w^{\prime\prime}(t) = m\left(u^{\prime\prime}(t) + y_{k}^{\prime\prime}(t)\right) = F(u(t), u^{\prime}(t))
153
+ $$
154
+
155
+ We have eliminated the variable $w$ .
156
+
157
+ We have yet to specify the function $F(u(t), u'(t))$ . As we can see in videos, the ball does not act like a rigid object in collisions and instead compresses significantly (often more than a centimeter.) This compression and decompression is lossy. We could model this loss by just subtracting some fraction of the ball's energy after the collision. This is good enough for many purposes, but we will instead follow Nathan and model this as a non-linear spring with hysteresis.
158
+
159
+ ![](images/0a8bf384e3f574eec1cb76da7f71d16712b1b66b8856ee0ecf2b215ad89de46a.jpg)
160
+ Figure 3: Left: An image of a ball hitting a rigid wall. The compression is easily visible. Right: A hysteresis curve used in our modeling. The maximum compression here is a very significant $1\mathrm{cm}$ .
161
+
162
+ ![](images/a47a924650d7b751e989a8a9bd794825e84d50d97d94a974f9263ddde79dd7a5.jpg)
163
+
164
+ Since $W = \int F \, dx$ , the total energy lost is the area between the two curves in Figure 3b. A problem with creating these hysteresis curves is that one does not know the maximum compression (i.e. where to start drawing the bottom curve) until the equations of motion have been solved up to the time when $u'(t) = 0$ . In practice, we solve the equation in two steps.
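As an illustration of the area-between-curves bookkeeping, take a linear loading branch at the stiffness used later ($7\times 10^{5}\,$N/m) and a power-law unloading branch; the enclosed area then fixes an effective coefficient of restitution. The specific curve shapes and exponent here are our assumption, not the paper's fitted curves:

```python
import numpy as np

def area(F, u):
    # trapezoidal area under a force-compression curve
    return np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(u))

# Illustrative hysteresis pair: linear loading F = k*u,
# power-law unloading F = k*u_m*(u/u_m)**n (assumed shapes)
k, u_m, n = 7e5, 0.01, 5.6                  # stiffness, max compression, exponent
u = np.linspace(0.0, u_m, 10001)
E_in = area(k * u, u)                       # work absorbed during compression
E_out = area(k * u_m * (u / u_m) ** n, u)   # work returned during expansion
e_eff = np.sqrt(E_out / E_in)               # effective COR off a rigid surface
print(e_eff)                                # ~0.55
```

Analytically $e^2 = E_{out}/E_{in} = 2/(n+1)$, so the exponent is a single knob for tuning the energy loss to a target coefficient of restitution.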
165
+
166
+ The main assumptions of our model derive from the main assumptions of each equation. The first is the exact form of the hysteresis curve of the ball. It has been argued $^{6}$ that the exact form of the hysteresis curve is not very important as long as the duration of impact, magnitude of impulse, maximum compression of the ball, and energy loss are roughly correct. Secondly, both the Timoshenko and Euler-Bernoulli theories ignore azimuthal and longitudinal waves. This is a fundamental assumption built into all of the approaches described so far in the literature. Given that the impact of the ball is transverse and the ball does not rotate, the assumption is theoretically justified. In general, the assumptions of our model are the same as those found in the literature, and so even if we cannot carry out experiments, our assumptions are verified by the literature's experiments.
167
+
168
+ # 4. SIMULATION AND ANALYSIS
169
+
170
+ # 4.1 Simulation Results
171
+
172
+ Our model's two main features are wave propagation in the bat and the non-linear compression and decompression of the ball. The latter is illustrated by the asymmetry of the plot in Figure 4a. This plot also reveals the time-scale of the collision: the ball leaves the bat $1.4\mathrm{ms}$ after impact. During and after the collision shock waves propagate throughout the bat. In this example the bat was struck $60~\mathrm{cm}$ from the handle. What does the collision look like at $10\mathrm{cm}$ from the handle? Figure 4b shows the answer: the other end of the bat does not feel anything until about $0.4\mathrm{ms}$ , and does not feel significant forces until about $1.0\mathrm{ms}$ . By the time that portion of the bat swings back (almost $2.0\mathrm{ms}$ ) the ball has already left contact with the bat. This illuminates an important point: we are only concerned with forces on the ball that act within the $1.4\mathrm{ms}$ time-frame of the collision. Thus, any waves taking longer than that time to return to the impact location do not affect the exit velocity.
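Both time-scales follow from simple estimates (a sketch under our own simplifying assumptions: a linear compression phase with the stiffness from the parameter section, and a uniform solid 3 cm barrel radius):

```python
import numpy as np

# Contact time: a ball of mass m on a linear spring of stiffness k stays
# in contact with a rigid surface for half an oscillation period.
m, k = 0.145, 7e5                       # kg, N/m
t_contact = np.pi * np.sqrt(m / k)      # ~1.4e-3 s

# Pulse travel: Euler-Bernoulli bending waves are dispersive, with group
# velocity 2*(Y*I/(rho*A))**0.25 * sqrt(omega); evaluate at the dominant
# collision frequency (~700 Hz) for a solid 3 cm radius section.
Y, rho, R = 1.814e10, 649.0, 0.03
I = np.pi * R**4 / 4                    # second moment of area
A = np.pi * R**2                        # cross-sectional area
omega = 2 * np.pi / t_contact           # angular frequency of the impulse
v_g = 2 * (Y * I / (rho * A)) ** 0.25 * np.sqrt(omega)
print(t_contact, v_g, 0.5 / v_g)        # ~1.4 ms; ~1200 m/s; ~0.4 ms over 50 cm
```

The estimated ~0.4 ms to cover the 50 cm from the impact point to the handle end matches the arrival time visible in Figure 4b.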
173
+
174
+ ![](images/a9cf1820994fb4cdfd83a165d0d26893b57a43d71aa2d45d521fb540a37c9879.jpg)
175
+ Figure 4: Left: The force between the ball and the bat as a function of time. We see that the impulse lasts about $1.4\mathrm{ms}$ . Right: The waveform of the $y_{10}(t)$ when the bat is struck at $60\mathrm{cm}$ . The impulse reaches this chunk at around $0.4\mathrm{ms}$ , but it does not start moving significantly until later.
176
+
177
+ ![](images/b908ff653451d1181cc6b4aedb77eacde4fcfad14e8141b7821b953590786925.jpg)
178
+
179
+ Having demonstrated the basic features of our model, we will now replicate some of Cross's results, except with baseball-like parameters. In Figure 5a we show the effects of fixed versus free boundary conditions to be in agreement with Cross's model. As we expected, fixed boundaries enhanced the exit velocity and free boundaries reduced it. From this we see the effect of the shape of the bat. The handle does indeed act like a free boundary. The distance between the boundaries is too small to get a flat zone in the exit velocity vs. position curve. If we extend the barrel by $26\mathrm{cm}$ we see a flat zone develop in Figure 5b (notice the change in axes). Intuitively, this flat zone exists because the ball only "sees" the local geometry of the bat and the boundaries are too far away to have a substantial effect.
180
+
181
+ ![](images/bc1f3b719b5404dd77f7c25b558dd29b119904cae879487231249effae9129ba.jpg)
182
+ Figure 5: Left: Exit velocity vs. impact position for a free boundary (solid line) and fixed boundary (dashed line). Notice that we are fixing the barrel end and leaving the handle end free in both cases. Right: The same graph for a free $110\mathrm{cm}$ bat.
183
+
184
+ ![](images/3295ba9828709b4b7f3ae255bb0a42bfa7344427ce93479ad0ad98c021086326.jpg)
185
+
186
+ From now on we will use an $84\mathrm{cm}$ bat that is free on both ends, where position zero denotes the handle end. In this base case the sweet spot is at $70\mathrm{cm}$ . We now investigate the dependence of the exit speed on the initial angular velocity. According to rigid body models, the sweet spot is exactly at the center of mass if the bat has no angular velocity. In Figure 6 we present the results of changing the angular velocity. Our results contrast greatly with the simple example presented earlier. While the angular rotation effect is still there, the effective mass plays only a negligible role in determining the exit speed in our model. In other words, the bat is not a rigid body, because the entire bat does not react instantly. The dominating effect is from the boundaries: the end of the barrel and the point where the barrel tapers off. These free ends cause a significant drop in exit velocity.
187
+
188
+ ![](images/6bd84c1eef5478dcfb61778c43344d7325c0d33df7e242aa9e30c36cfbdaa311.jpg)
189
+ Figure 6: Exit velocity vs. impact position at various initial angular velocities of the bat. Our model predicts the solid curves, while the dashed lines represent the simple model. The dots are at the points where Brody's solution is maximized.
190
+
191
+ A rotating bat produces higher exit velocities towards the barrel end because the impact velocity there is greater (by an amount $\omega_{i}(z - z_{cm})$ ). In Figure 7a we show that near the sweet spot, angular velocity actually decreases the excess exit velocity (relative to the impact velocity). We should expect this, since at higher impact velocity, more energy is lost to the ball's compression and decompression. To confirm this, we also recreated the plot in Figure 7b without the hysteresis curve, where this effect disappears. This example is one of the few places where the hysteresis curve makes a difference. This confirms experimental evidence<sup>9</sup> that the coefficient of restitution decreases with increasing impact velocity.
192
+
193
+ ![](images/ac3373cc7feb414809ee9c3c29276fd88816dc1b76589b1bef8f090d0f074922.jpg)
194
+ Figure 7: Left: Exit velocity minus impact velocity vs. impact position at various initial angular velocities of the bat. Near the center of mass, higher angular velocity gives higher excess exit velocity, but towards the sweet spot the lines cross and higher angular velocity gives lower excess exit velocity. Right: The same plot without a hysteresis curve. The effect disappears.
195
+
196
+ ![](images/5db68371abcd3cc7db72fe839a33381ff9f79634bcfd082af174391aa0d4671c.jpg)
197
+
198
+ The results for angular velocity contrast with the simple model. As evident from Figure 6, the rigid-body model greatly overestimates this effect for large angular velocities.
199
+
200
+ ![](images/c7bfb456efc46d6e131daa611d69e0eb8aa220b91f9728c9f722b16ef77c0cfc.jpg)
201
+ Figure 8: Optimal impact position vs. angular velocity. The line is the rigid-body prediction, while the points are our model's prediction.
202
+
203
+ # 4.2 Parameter Space Study
204
+
205
+ ![](images/72108fcf672afa10f6dc69ee5732cc757fe10a0b26eba31892299433e4c32bc9.jpg)
206
+ Figure 9: The profile of our bat.
207
+
208
+ There are various adjustable parameters in our model. We use $\rho = 649\mathrm{kg / m^3}$ and $Y = 1.814\times 10^{10}\mathrm{N / m^2}$ . These values, as well as our bat profile (Figure 9), were taken from Nathan as typical values for a wooden bat. While these numbers are in good agreement with other sources, we will see that these numbers are fairly special. As a result of our bat profile, the mass came out to $0.831\mathrm{kg}$ and the moment of inertia around the center of mass (at $59.3\mathrm{cm}$ from the handle of our $84\mathrm{cm}$ bat) was $0.039\mathrm{kg}\cdot \mathrm{m}^2$ . We let the $145\mathrm{g}$ ball's initial velocity be $40\mathrm{m / s}$ , and set up our hysteresis curve so that the compression phase was linear with spring constant $7\times 10^{5}\mathrm{N / m}$ .
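The quoted mass properties follow from integrating the profile. A sketch with a made-up piecewise profile standing in for Figure 9 (the radii and taper positions below are illustrative, not Nathan's data, so the outputs only land near the quoted 0.831 kg and 0.039 kg·m²):

```python
import numpy as np

def trap(f, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

rho, L = 649.0, 0.84                 # wood density and bat length
z = np.linspace(0.0, L, 4001)
# illustrative profile: 1.3 cm handle radius, linear taper, 2.9 cm barrel
R = np.interp(z, [0.0, 0.30, 0.55, L], [0.013, 0.013, 0.029, 0.029])
lam = rho * np.pi * R**2             # linear mass density rho*A(z)
M = trap(lam, z)                     # total mass
z_cm = trap(lam * z, z) / M          # center-of-mass position
I_cm = trap(lam * (z - z_cm)**2, z)  # moment of inertia about the cm
print(M, z_cm, I_cm)
```

Even this crude profile lands within roughly 10-15% of the quoted values, which is why small profile changes later shift the curves only modestly.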
211
+
212
+ We began by varying the density of the bat, and saw that the current density occupies a narrow region that gives sharply peaked exit velocity curves (see Figure 10). We also varied the Young's modulus and the shape of the bat to similar effect (see Figure 11). The fact that varying any of Nathan's parameters makes the resulting exit velocity versus location plot less peaked suggests that baseball bats are specially designed to have the shape shown in Figure 6 (or the parameters were picked in a special way).
213
+
214
+ ![](images/fbe31e8b8fb397528f3840a8ce997d5eb76696670c78f4c12a537df9bd2d183e.jpg)
215
+ Figure 10: Exit velocity vs. impact position for various densities. The solid line is the original $\rho = 649\mathrm{kg / m^3}$ . Left: Dotted is 700, dashed is 1000. Right: Dotted is 640, dashed is 500.
216
+
217
+ ![](images/6e3fa3d5275bef98f9de096002cdd76e7e5b78daa0c50110a8ad5b00c2f675b5.jpg)
218
+
219
+ ![](images/fc954124b04b0360e60ff7a279d21773f9846d4e86206c1416e2676004e14e4a.jpg)
220
+ Figure 11: Left: Varying the value of Y. Solid is $Y = 1.814 \times 10^{10} \mathrm{~N} / \mathrm{m}^2$ . Dashed is 1.25 times as much. Dotted is 0.8 times. Right: Varying the shape of the bat. Solid is the original shape. Dashed has a thicker handle region while dotted has a narrower handle region.
221
+
222
+ ![](images/6e13af89f9968f082dcece85e3cda7f0adc8a6691bdcdce951e8884549d336dc.jpg)
223
+
224
+ Finally we varied the velocity of the ball (see Figure 12). The exit velocity simply scales with the input velocity, as expected.
225
+
226
+ ![](images/68c79450e56079aa90d71853c68f52e758221466df5c57e92464cb35781fb383.jpg)
227
+ Figure 12: Varying the velocity of the ball. Solid is the original $40\mathrm{m / s}$ . Dashed is $50\mathrm{m / s}$ while dotted is $30\mathrm{m / s}$ .
228
+
229
+ # 4.3 Alternatives to Wooden Bats
230
+
231
+ Having checked the stability of our model for small parameter changes, we will now change the parameters drastically to model corked and aluminum bats.
232
+
233
+ We modeled a corked bat as a wood bat with the barrel hollowed out, leaving a shell $1\mathrm{cm}$ or $1.5\mathrm{cm}$ thick. The result is shown in Figure 13a. The exit velocities are higher, but this difference is too small to be taken seriously. This result agrees with the literature: the only advantage of a corked bat is the change in mass and moment of inertia.
234
+
235
+ ![](images/9c0e6056222a7a0b1233daef3a97319fbbba2b6db17c873872764205cd5b4473.jpg)
236
+ Figure 13: Left: corked bat. Right: aluminum bat.
237
+
238
+ ![](images/126c0068589017c67b7ffa3d0f592241c17c253ff79cbead2d9a2ae70930421c.jpg)
239
+
240
+ We modeled an aluminum bat as a $0.3\mathrm{cm}$ thick shell with density $2700\mathrm{kg / m^3}$ and Young's modulus $6.9\times 10^{10}\mathrm{N / m^2}$ . From the exit velocity graph we see that it performs much better than the wood bat (see Figure 13b). It has the same sweet spot $(70\mathrm{cm})$ and similar sweet spot performance, but the exit velocity falls off more gradually away from the sweet spot. To gain more insight into the aluminum bat, we animated the displacement of the bats vs. time, and present two frames of the animation in Figure 14. We can see that the aluminum bat is displaced less (absorbing less energy). More importantly, in the second diagram the curve for the wood bat is still moving down, while the aluminum bat's curve is already moving back up, pushing the ball away. The pulse in the aluminum bat traveled faster and returned in time to give energy back to the ball. By the time the pulse in the wood bat returned to the impact location, the ball had already left the bat.
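The faster pulse in aluminum can be checked with the same bending-wave group-velocity estimate used earlier (a sketch; the thin-shell formulas $I \approx \pi R^{3} t$ and $A \approx 2\pi R t$ and the uniform radius are our idealizations):

```python
import numpy as np

def group_velocity(Y, I, rho, A, omega):
    # Euler-Bernoulli bending-wave group velocity at angular frequency omega
    return 2.0 * (Y * I / (rho * A)) ** 0.25 * np.sqrt(omega)

omega = 2 * np.pi * 714.0               # dominant collision frequency
R = 0.03                                # barrel radius (assumed uniform)

# solid wood section
v_wood = group_velocity(1.814e10, np.pi * R**4 / 4, 649.0, np.pi * R**2, omega)

# 0.3 cm thin aluminum shell
t = 0.003
v_al = group_velocity(6.9e10, np.pi * R**3 * t, 2700.0, 2 * np.pi * R * t, omega)

print(v_wood, v_al)                     # the aluminum pulse travels faster
```

The estimate gives the aluminum shell a pulse speed roughly 15% above the solid wood section, consistent with its wave returning before the ball leaves.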
241
+
242
+ ![](images/dd5f75e3ee288625eaa4daf667a119a2912e1f7faf13c3ac0988de663468c3db.jpg)
243
+ Figure 14: Plots of the displacement of an aluminum bat (dashed) and wood bat (solid) being hit by a ball $60\mathrm{cm}$ from the handle end. The second diagram shows two frames superimposed ( $t = 1.05\mathrm{ms}$ and $t = 1.10\mathrm{ms}$ ) to show the motion. The rigid translation and rotation has been removed from the diagrams.
244
+
245
+ ![](images/21666e39fac9590f4f99993e21f50aa26ea3188a711ae924e99492229d1142ac.jpg)
246
+
247
+ In the literature the performance of aluminum bats is often attributed to the "trampoline effect" where the bat compresses on impact and then springs back before the end of the collision. This would improve aluminum bat performance further. The trampoline effect involves exciting so-called "hoop modes," or modes with an azimuthal dependence, which our model cannot simulate directly. For an aluminum bat one could conceivably use wave equations for a cylindrical sheet (adjusting for the changing radius), and then solve the resulting partial differential equations in three variables. We started with the equations given by Graff, modified them for a varying radius, and eliminated the torsional components $v$ . The resulting equations are (where $R' = dR / dz$ ):
248
+
249
+ $$
250
+ \begin{array}{l} \left[ \frac {\partial^ {2} u}{\partial z ^ {2}} + \frac {\nu}{R} \left(\frac {\partial w}{\partial z} + \frac {\partial^ {2} v}{\partial z \partial \theta}\right) \right] + \frac {1 - \nu}{2 R} \left(\frac {\partial^ {2} v}{\partial \theta \partial z} + \frac {\partial^ {2} u}{R \partial \theta^ {2}}\right) = \rho \frac {(1 - \nu^ {2})}{E} \frac {\partial^ {2} u}{\partial t ^ {2}} \\ - \frac {1}{R} \left(\frac {w}{R} + \frac {\partial v}{R \partial \theta} + \nu \frac {\partial u}{\partial z}\right) + \frac {\partial}{\partial z} \left(R ^ {\prime} \left(\frac {\partial u}{\partial z} + \frac {\nu}{R} \left(w + \frac {\partial v}{\partial \theta}\right)\right)\right) + \frac {1 - \nu}{2} \frac {\partial}{\partial \theta} \left(\frac {\partial v}{\partial z} + \frac {\partial u}{R \partial \theta}\right) \frac {R ^ {\prime}}{R} + \frac {1 - \nu^ {2}}{E h} q = \rho \frac {1 - \nu^ {2}}{E} \frac {\partial^ {2} w}{\partial t ^ {2}} \\ \end{array}
251
+ $$
252
+
253
+ In order to solve these equations, we would probably write the solution as $\sum_{n}a_{n}(z,t)\cos (n\theta) + b_{n}(z,t)\sin (n\theta)$ and then discretize along the $z$ direction. We would keep only the lowest few values of $n$ and then numerically solve the resulting coupled ordinary differential equations. Analysis of such a complex system of equations is beyond the scope of this paper.
254
+
255
+ Instead we will artificially insert a hoop mode by hanging a mass from a spring at the spot of the bat the ball hits. We expect the important modes to be the ones with periods near the collision time $(1.4\mathrm{ms} = 1 / (714\mathrm{Hz}))$ . We find that this mode does affect the sweet spot, although the exact change with frequency does not seem to follow a simple relationship. Our results (Figure 15) show that hoop modes around $700\mathrm{Hz}$ do enhance the exit velocity. They not only make the sweet spot wider but also shift the sweet spot slightly towards the barrel end of the bat.
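The attached oscillator is tuned by choosing its spring constant for a target frequency; a minimal sketch, assuming an illustrative 50 g modal mass (the modal mass actually used is not quoted here):

```python
import numpy as np

def hoop_spring(f_hz, m_hoop=0.05):
    """Spring constant that gives an attached mass-spring a target frequency.
    The 50 g modal mass is an assumed, illustrative value."""
    return m_hoop * (2 * np.pi * f_hz) ** 2

# During time stepping the struck segment feels +k_h*(s - y_k) while the
# hoop mass obeys m_hoop * s'' = -k_h*(s - y_k).
for f in (300.0, 700.0, 1250.0, 2000.0):
    print(f, hoop_spring(f))
```

Scanning the frequencies in Figure 15 then amounts to re-running the simulation with the corresponding spring constants.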
256
+
257
+ ![](images/e410fdb6a4d75c0bb8bc14681a3e671df214ba82331c7e2dbcd9e8487be43795.jpg)
258
+ Figure 15: Exit velocity vs. impact position at different hoop frequencies. The lines from bottom to top at the left edge (color) are: (blue, starts off the chart) wood bat, (red) no hoop mode, (gray) $2000\mathrm{Hz}$ , (black) $500\mathrm{Hz}$ , (green) $300\mathrm{Hz}$ , (brown) $800\mathrm{Hz}$ , and (purple) $1250\mathrm{Hz}$ .
259
+
260
+ # 5. CONCLUSION
261
+
262
+ We have modeled the ball-bat collision by using the Euler-Bernoulli equations for the bat and hysteresis curves for the baseball. Doing so has allowed us to reconcile the existing literature by emphasizing the role of the collision time-scale and the fact that the ball only "sees" a local region of the bat because of the finite speed of wave propagation. As a result, the sweet spot is farther out in our model than the rigid body recoil model predicts. We are able to vary the input parameters and show that the effects are in line with intuition and key results in previous experimental work. Finally, we show that aluminum bats have wider sweet spots than wooden bats.
263
+
264
+ However, our model is far from comprehensive, and we offer several suggestions for improvements and extensions.
265
+
266
+ - The ball is assumed to be non-rotating, and the impact is assumed to be head on. These assumptions are commonly violated in real life. However, rotating balls and off-center collisions will excite torsional modes in the bat which we entirely ignore. These changes would make the problem non-planar.
267
+ - We neglect the shear forces in the bat. Future work could incorporate the shear effect by using Timoshenko beam equations.
268
+ - Our analysis of hoop modes was rather cursory and was tacked on rather than integrated into our main model.
269
+
270
+ Despite these shortcomings, we hope our model is a valuable contribution to the literature.
271
+
272
+ # REFERENCES
273
+
274
+ [1] R. K. Adair, The Physics of Baseball, 2nd ed. (Harper-Collins, 1994).
275
+ [2] H. Brody, American Journal of Physics 54, 640 (1986).
276
+ [3] A. M. Nathan, American Journal of Physics 68, 979 (2000).
277
+ [4] K. F. Graff, Wave Motion in Elastic Solids, 1st ed. (Dover Publications, Inc., 1975).
278
+ [5] W. Goldsmith, Impact, 1st ed. (Edward Arnold Publishers Ltd., 1960).
279
+ [6] R. Cross, American Journal of Physics 67, 692 (1999).
280
+ [7] L. L. V. Zandt, American Journal of Physics 60, 172 (1992).
281
+
282
+ [8] A. M. Nathan, http://webusers.npl.illinois.edu/~a-nathan/pob/video.html.
283
+ [9] A. M. Nathan, American Journal of Physics 71, 134 (2003).
284
+ [10] D. A. Russell, http://paws.kettering.edu/~drussell/bats-new/batvibes.html.
MCM/2010/A/7920/7920.md ADDED
@@ -0,0 +1,528 @@
1
+ For office use only
2
+
3
+ T1
4
+ T2
5
+ T3
6
+ T4
7
+
8
+ Team Control Number
9
+
10
+ # 7920
11
+
12
+ Problem Chosen
13
+
14
+ A
15
+
16
+ For office use only
17
+ F1
18
+ F2
19
+ F3
20
+ F4
21
+
22
+ 2010 Mathematical Contest in Modeling (MCM) Summary Sheet
23
+
24
+ (Attach a copy of this page to each copy of your solution paper.)
25
+
26
+ # Brody Power Model:
27
+
28
+ # An Analysis of Baseball's Sweet Spot
29
+
30
+ February 22, 2010
31
+
32
+ The sport of baseball is a web of complex interactions between numerous elements of basic physics. The typical player, however, is more concerned with stepping into the batter's box and striking the ball as efficiently and as far as possible. In over 100 years of professional baseball, players have learned through experience that the optimum location to strike the ball lies in the thick portion of the barrel, a location known as the "sweet spot." However, the concept of the sweet spot clashes with a naive physical prediction: a simple torque analysis places the optimal point of contact at the very end of the bat. The driving force behind this paper is to develop a model which remedies this conflict. In our report, we identify two models for the sweet spot, each based on a different definition: the first is based on the center of percussion, and the second is referred to as the "Brody Power Model." We determine that the Brody Power Model is superior and use it to examine how the performance of a wooden bat is affected by "corking" it or by constructing it from aluminum instead. The model contrasts, with reasonable accuracy, the performance of different bats based on a variety of parameters, including mass, angular velocity, and moment of inertia. Testing with the Brody Model shows that corking may increase the angular velocity of the swing but yields a negligible difference in power, while aluminum provides a clearly noticeable improvement in performance. To test the sensitivity of our model, we examine the effects of changes in bat parameters. While it does lack some sensitivity and is limited in its ability to precisely predict outcomes, we ultimately conclude that the Brody Model is the ideal method for assessing the interplay between power and sweet spot location.
33
+
34
+ # Contents
35
+
36
+ 1. Introduction 2
37
+ 2. The Plan 2
38
+ 3. Objectives 3
39
+ 4. Defining the "Sweet Spot" 4
40
+ 5. Model A- Center of Percussion: Physics' Sweet Spot 5
41
+
42
+ 5-b. Example I. 5
43
+
44
+ 6. Model B: Brody Power Model 6
45
+
46
+ 6-a. Facts and Assumptions 6
47
+ 6-b. General Form 7
48
+ 6-c. Example II. 9
49
+ 6-d. Torque 9
50
+
51
+ 7. Discussion 10
52
+
53
+ 7-a. "Corking" 10
54
+ 7-b. Cork Model Augmentation 11
55
+ 7-c. Aluminum vs. Wood 13
56
+ 7-d. Aluminum Model Augmentation 13
57
+
58
+ 8. Sensitivity Analysis 15
59
+ 9. Conclusion 19
60
+ 10. References 22
61
+
62
+ # 1. Introduction
63
+
64
+ Be it the crack of the bat, the roar of the crowd, or the taste of a juicy ballpark hot dog, something about the sport of baseball has had an indelible impact on the American psyche. The sport has developed its own vernacular, with pitchers "busting" hitters inside with "high cheddar," and batters "dropping bombs" over the outfield fence. Ask a player what is meant by these phrases, or how one would go about quantifying what constitutes "cheddar," and one would be hard pressed to form a concrete definition. A similarly obscure phenomenon is the "sweet spot" of a baseball bat, a concept intuitively understood by even the most junior players, but which is very difficult to define.
65
+
66
+ For the purposes of this paper, the sweet spot represents the spot on the thick part of the barrel where maximum power is transferred to the ball upon contact. While a simple explanation based on torque would point to the end of the barrel as the sweet spot, the empirical evidence gathered by well over a century of play clearly shows that this is not the case. Even the average youth baseball player can demonstrate from experience that the sweet spot occurs somewhere in the middle of the thick part of the barrel. The purpose of this investigation is to develop a model that explains this empirical phenomenon. Additionally, we will investigate the effects of different parameters on the bat-ball collision, specifically how they affect the location and effectiveness of the sweet spot. Included in this investigation is an examination of the utility of "corking" a bat, as well as an examination of the differences observed between the performances of wooden and aluminum bats.
67
+
68
+ # 2. The Plan
69
+
70
+ Our goal is to develop a model which will explain the empirical finding that the sweet spot of a baseball bat cannot be explained by a simple analysis of torque. In order to achieve this, our team must:
71
+
72
+ # Identify Objectives:
73
+
74
+ In order to evaluate our model, we must describe the criteria of a successful model as it relates to this scenario.
75
+
76
+ # Define Terms:
77
+
78
+ The word "sweet spot" can mean different things to various camps in the baseball world. Thus we must standardize all of our terms, especially the sweet spot, in order to develop context for our model and minimize confusion.
79
+
80
+ # State Assumptions:
81
+
82
+ The collision between bat and ball involves numerous variables. In order to develop a useable model, we will make some assumptions about the nature of minor variables in the scenario. We will revisit these assumptions as they become pertinent.
83
+
84
+ # Develop Models:
85
+
86
+ The next step is to create multiple working models that can accurately explain the location and effects of the sweet spot.
87
+
88
+ # Sensitivity Analysis:
89
+
90
+ In order to be effective and useful, a model must be consistent when parameters are varied. To test the effect of changing assumptions, we will produce a sensitivity analysis that shows whether our model is properly sensitive to these variations.
91
+
92
+ # 3. Objectives
93
+
94
+ This report is produced with a baseball audience in mind. We want the average baseball enthusiast to be able to read our paper and gain some understanding of a few of baseball's fundamental questions: the sweet spot, the effects of corking, and effects of differences between bat materials. To guide the presentation of our model, we have focused on four primary objectives:
95
+
96
+ # 1. Answering the Problem
97
+
98
+ - Provide a model which explains why torque is not the determining factor for the sweet spot.
99
+ - Use this model to test the effects of corking a bat.
100
+ - Use this model to test for variations in performance between wooden and metal bats.
101
+
102
+ # 2. Simplicity and Clarity
103
+
104
+ - Is the material presented in the most simple and straightforward way possible?
105
+ - Could this report be included in a baseball publication targeting a baseball audience with minimal mathematical inclination, while still providing sufficient evidence for its conclusions?
106
+
107
+ This will guide the paper in the direction of efficient reading—focusing on concise and informative presentation.
108
+
109
+ # 3. Applicability
110
+
111
+ - Does the model translate well to explaining real-world phenomena?
112
+ - Is the data generated useful outside of a laboratory setting?
113
+
114
+ This model should not only describe the parameters which govern the sweet spot, but should also provide a relative location or zone that fits empirically recognized norms.
115
+
116
+ # 4. Resiliency
117
+
118
+ - Can this model accommodate reasonable fluctuation in input data?
119
+
120
+ A change in variables should not "throw our model a curve ball." Our goal is to create a model which will be able to explain the function and effects of the sweet spot for a multitude of varying parameters.
121
+
122
+ # 4. Defining the "Sweet Spot"
123
+
124
+ The concept of the sweet spot has acquired numerous definitions over the years. Many theories for baseball's sweet spot have been proposed; occasionally definitions overlap or contradict each other. It is important that we explain which definitions of the sweet spot we are referring to in order to minimize confusion.
125
+
126
+ # Center of Percussion (COP)
127
+
128
+ The center of percussion has historically been identified with the concept of the sweet spot for implements such as baseball bats and tennis rackets. When a force is applied at the COP (in this case from the force of the bat-ball collision) no vibration is felt at the pivot point.[3] [4] Thus, the COP is often tied to the tactile feeling of the sweet spot, where the hitter feels little to no sting in his hands.[4] The COP has commonly been measured using a pivot point located 6 inches above the knob, just below the right hand of a typical batter. However, recent research has demonstrated that the pivot point exists below the knob of the bat in the moment prior to contact, and so the usefulness of the COP as a judge of performance has been called into question.[3]
129
+
130
+ # Location of Maximum Performance
131
+
132
+ An alternate definition of the sweet spot is the strike location that yields maximum performance; this depends on the definition of maximum performance itself. In past studies, the exit velocity of the ball has been used as one possible metric, forming the basis for the Ball Exit Speed Ratio (BESR) testing used to certify high school and college bats.[3] These tests measure the velocity of the ball before and after the collision, creating a ratio of the departure speed over the impact speed. This ratio is commonly known as the coefficient of restitution (COR). It illustrates how much energy the ball maintains after contact with the bat—the higher the value the more efficient the collision. Ratios above a certain level are deemed illegal for sanctioned high school and NCAA competition primarily for safety reasons. Limiting the COR limits the velocity of the ball as it leaves the bat, keeping batted balls below a relatively safe ceiling.[8] Additional techniques show that this location changes with changing parameters such as initial speed of the ball and angular speed of the bat.
133
+
134
+ As we will see, the particular definition used will have a profound effect on the ability of a model to demonstrate why torque is not the controlling factor of sweet spot location.
135
+
136
+ # 5. Model A- Center of Percussion: Physics' Sweet Spot
137
+
138
+ The center of percussion has long been identified in physics texts as being the location of the sweet spot. The principle behind the COP relates to the interaction of rotational and translational motion caused by a ball striking a bat. The translational force initiated by the ball causes the entire bat to attempt to move in the same direction and at the same speed at all points (Figure 1). When the ball hits the bat, there is also a tendency for the bat to rotate around the center of mass due to rotational force (Figure 2). This rotational force increases the tendency of the handle to move in the opposite direction of the translational force.
139
+
140
+ ![](images/4f6df904fcfc1b0a4ed35ded792d7a6718dffd41e14c49cc3b698b5cb0bb5cd3.jpg)
141
+
142
+ ![](images/ca907bdd57eb1ce8f5ae4319cfa51d3c026a567f22fcf7003565361457b44e91.jpg)
143
+
144
+ The relative strength of these two forces determines where the sweet spot is located. If the baseball hits the bat between the COP and the end of the bat, the rotational force is stronger than the translational force acting on the handle. This imbalance of forces results in a tug on the handle as the bat tries to rotate out of the hands of the batter. If the ball impacts the bat surface between the center of mass and the COP, the rotational force which acts on the handle is less than that of the translational force that pushes the handle. As a result, the translational force is felt as a vibration which stings the batter's hands. Either way, the lack of control in the follow-through is detrimental to the flight of the ball. Alternatively, when the ball impacts at the COP, the rotational and translational forces acting on the handle cancel out. The collision feels smooth to the batter, and the lack of shock to the hands acts as an indicator that a solid hit has been made.
154
+
155
+ In order to determine the location of the COP mathematically, the following equation is used:[17]
156
+
157
+ $$
158
+ d = \frac{I_{\mathrm{cm}}}{M \cdot x}
159
+ $$
160
+
161
+ where:
162
+
163
+ - $d =$ Distance from center of mass to center of percussion
164
+ - $I_{\mathrm{cm}} =$ Moment of Inertia about the center of mass
165
+ - $M =$ Mass of the bat
166
+ - $x =$ Distance from pivot point to center of mass
167
+ - $L =$ Length of bat
168
+
169
+ # 5-b. Example I
170
+
171
+ To provide a point of comparison for later, we will assume that:[17]
172
+
173
+ - $I_{\mathrm{cm}} = 0.048\ \mathrm{kg\,m}^{2}$
174
+ - $M = 0.905\ \mathrm{kg}$
175
+ - $x = 0.51\ \mathrm{m}$
176
+ - $L = 34$ in
177
+
178
+ Inserting these values in the COP equation, we found the location of the sweet spot to be at about 26.531 inches from the knob, or approximately 7.5 inches from the free end of the bat. While this distance is close to the expected values, it falls just outside the accepted 4-7 inches from the end of the bat specified by physicist Rod Cross.[5]
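A minimal sketch of this computation (our own check, not part of the original paper; only the COP offset $d$ from the center of mass is computed, since converting it into a knob-relative position depends on where the pivot and the center of mass are measured from):

```python
# Quick check of the center-of-percussion formula d = I_cm / (M * x),
# using the Example I parameters. Only the offset d is computed here.

def cop_offset(i_cm, mass, x):
    """Distance (m) from the center of mass to the center of percussion."""
    return i_cm / (mass * x)

d = cop_offset(i_cm=0.048, mass=0.905, x=0.51)
print(f"COP offset from center of mass: {d:.4f} m ({d / 0.0254:.2f} in)")
```

With these inputs the offset comes out near 0.104 m (about 4.1 in) beyond the center of mass.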
179
+
180
+ While the COP model of explaining the sweet spot is supported by sound physics, it isn't as sensitive as we require. The location of the sweet spot according to the COP is based upon the position where the forces induced by the ball balance out, and is found by taking into account a few structural specifications of the bat. Several problems exist with this definition of the sweet spot. When the bat is swung, it produces rotational and translational forces which interact with those of the baseball. The interactions of momentum and forces cause fluctuations in the location of the optimal impact point, which in turn affects the velocity at which the ball leaves the bat. Also, since this is based on feel, there is theoretically a different position for the COP for every point that is covered by the batter's hands.[5]
181
+
182
+ # 6. Model B: Brody Power Model
183
+
184
+ After investigating the COP as a possible location of maximum performance, we have determined that it is inadequate for our purposes, and another model is required. Thus, the Brody Power Model* is presented.
185
+
186
+ # 6-a. Facts and Assumptions
187
+
188
+ # Equations:
189
+
190
+ The derivation of this model stems from a combination of the three concepts below:
191
+
192
+ Conservation of Linear Momentum: $mv_{i} = mv_{f} + MV_{f}$
193
+
194
+ Conservation of Angular Momentum: $bmv_{i} + I\omega_{i} = bmv_{f} + I\omega_{f}$
195
+
196
+ Definition of Coefficient of Restitution: $e(v_{i} - b\omega_{i}) = V_{f} - v_{f} + b\omega_{f}$
197
+
198
+ # Definition of Variables:
199
+
200
+ - $m =$ Mass of Ball
201
+ - $M =$ Mass of Bat
202
+ - $v_{i} =$ Initial Velocity of Baseball
203
+ - $v_{f} =$ Final Velocity of Baseball
204
+ - $\omega_{i} =$ Initial Angular Velocity of Bat
205
+ - $\omega_{f} =$ Final Angular Velocity of Bat
206
+ - $V_{f} =$ Final Linear Velocity of Bat
207
+ - $I =$ Moment of Inertia: A measure of resistance to rotational motion
208
+ - $e =$ Coefficient of Restitution
209
+ - $b =$ Distance from Ball Impact to Pivot Point
210
+ - $L =$ Length of Bat
211
+
212
+ # Simplifying Assumptions
213
+
214
+ - $V_{f}$ is equal to 0.
215
+ - $\omega_{f}$ is equal to 0.
216
+ - Pivot Point is located 6 in from the bat knob.
217
+
218
+ # 6-b. General Form
219
+
220
+ From the three given equations above, an all-encompassing equation can be derived that will allow $b$ to be defined. The fully derived equation comes from an essay written by physics professor H. Brody. The equation:
221
+
222
+ $$
223
+ b = \frac{v_{i}}{\omega_{i}} \pm \sqrt{\left(\frac{v_{i}}{\omega_{i}}\right)^{2} + I\left(\frac{M + m}{Mm}\right)}
224
+ $$
225
+
226
+ is given as a method of determining how far from the pivot point a ball must impact in order to maximize the ball's kinetic energy. The Brody Power Equation was developed to merge the three root equations given above to yield the location that produces the maximum ball exit speed.
227
+
228
+ The combination of these three equations is suggested by the physical nature of the problem. When a bat moves through space and comes into contact with a ball, both translational and rotational forces must be considered. As the bat is swung, it travels in an arc around a pivot point, which requires rotational motion to be taken into account. The baseball itself travels in a straight line, causing the bat to experience a translational force on impact, and so linear velocity also comes into play. The equation for the COR is factored in because an indicator for the sweet spot is the greatest velocity of the ball following the collision relative to its initial velocity.
229
+
230
+ The first step towards creating a model which maximizes velocity lies in combining the three preliminary equations while also isolating $v_{f}$. In order to accomplish this task, the following assumption is made, considerably easing the overall calculation:
231
+
232
+ - Set $V_{f}$ and $\omega_{f}$ equal to 0 and eliminate them. Since we are trying to maximize the movement of the ball, setting the final velocities of the bat to 0 indicates that all energy has been transferred to the ball. While not empirically correct, the actual energy imparted to the ball does not matter for our purposes. We are only concerned with modeling an estimated location of the sweet spot and its effectiveness relative to other bats.
233
+
234
+ Once these assumptions have been made and the given equations have been combined, the equation
235
+
236
+ $$
237
+ v_{f} = v_{i} - \left[ \frac{(1 + e)(v_{i} - b\omega_{i})}{1 + (m / M) + (mb^{2} / I)} \right]
238
+ $$
239
+
240
+ Equation 1
241
+
242
+ remains. The next step is to manipulate Equation 1 in order to find the location on the bat that maximizes the COR, which correlates to a maximum value of $\nu_{f}$ . To take this step, the following assumption is made:
243
+
244
+ - The variable $b$ is treated as the only unknown, while all other variables are treated as constants. Later assumptions towards mass and velocity based on additional sources of data can take the place of these variables.
245
+
246
+ This assumption is represented in Graph A, which plots the final velocity of the ball against the position of the sweet spot. In order to find the maximum value of the function $v_{f}$, it is necessary to take the first partial derivative with respect to $b$. Taking the derivative of a function provides the slope of that function at any point along the curve. At the maximum value the graph is neither increasing nor decreasing, so the first derivative will be 0. It is then possible to find where on the bat a ball must make contact in order to make $v_{f}^{\prime} = 0$, which will represent the point on the bat at which the greatest final ball velocity is found.
247
+
249
+
250
+ ![](images/30e5fc1bea7579e888980ddca383273177409726fa95e52e33dd3f14b7f147bc.jpg)
251
+ Graph A: Final Speeds at Given Distance from Pivot
252
+
253
+ Taking these assumptions into account, the first derivative of Equation 1 can be taken with respect to $b$ so that
254
+
255
+ $$
256
+ v_{f}^{\prime} = b^{2}\omega_{i} - 2bv_{i} - \left[ \frac{I\omega_{i}(M + m)}{Mm} \right] = 0
257
+ $$
258
+
259
+ Rearranging the variables and solving for $b$ yields
260
+
261
+ $$
262
+ b = \frac{v_{i}}{\omega_{i}} \pm \sqrt{\left(\frac{v_{i}}{\omega_{i}}\right)^{2} + I\left(\frac{M + m}{Mm}\right)} \tag{Equation 2}
263
+ $$
264
+
265
+ We now have the general form of an equation that can be used to find the optimum location to strike the ball.
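As a numerical sanity check on the derivation (our own sketch, not from the paper), the $b$ from Equation 2 should maximize the exit speed given by Equation 1. The COR value $e = 0.5$ below is an arbitrary assumption; since $(1+e)$ only rescales the correction term, the location of the maximum does not depend on it:

```python
import math

def sweet_spot_b(v_i, omega_i, I, M, m):
    """Equation 2 (+ root): impact distance b from the pivot, in meters."""
    r = v_i / omega_i
    return r + math.sqrt(r * r + I * (M + m) / (M * m))

def exit_speed(b, v_i, omega_i, I, M, m, e=0.5):
    """Equation 1: final ball speed for an impact at distance b from the pivot."""
    denom = 1 + m / M + m * b * b / I
    return v_i - (1 + e) * (v_i - b * omega_i) / denom

# Illustrative parameters: a ~0.88 kg bat, a standard ball, a 42 m/s pitch.
params = dict(v_i=-42.0, omega_i=35.791, I=0.20556, M=0.88451, m=0.14529)
b_star = sweet_spot_b(**params)

# The exit speed at b_star should beat nearby impact points.
for db in (-0.05, -0.02, 0.02, 0.05):
    assert exit_speed(b_star, **params) > exit_speed(b_star + db, **params)
print(f"optimal impact distance from pivot: b = {b_star:.5f} m")
```

The inequality checks confirm numerically that the closed-form root sits at the peak of the exit-speed curve.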
266
+
267
+ # 6-c. Example II
268
+
269
+ With the general form in mind, the parameters can now be defined:
270
+
271
+ - Mass of Bat; $M = 0.88451\ \mathrm{kg}$
272
+ - Mass of Ball; $m = 0.14529\ \mathrm{kg}$
273
+ - Moment of Inertia; $I = 0.20556\ \mathrm{kg\,m}^2$ about the 6 in mark from the knob
274
+ - Initial Velocity of Baseball; $v_{i} = -42\ \mathrm{m/s}$
275
+ - Initial Angular Velocity of Bat; $\omega_{i} = 35.791$ rad/s
276
+ - Length of Bat; $L = 34$ in
277
+ - Movement towards the outfield is oriented positively
278
+
279
+ Using Equation 2, it is possible to determine where the sweet spot exists in relation to the position predicted by torque. We begin by researching common dimensions for baseballs, bats, pitch velocity, and angular velocity of the swing, shown above, in order to use realistic parameters.[9][7] Putting this data into the equation yields
280
+
281
+ $$
282
+ b = \frac{-42}{35.791} + \sqrt{\left(\frac{-42}{35.791}\right)^{2} + 0.20556\left(\frac{0.88451 + 0.14529}{0.88451 \times 0.14529}\right)}
283
+ $$
284
+
285
+ The output dictates that $b = 0.56557 \, \text{m}$ . Converting from meters to inches, this becomes 22.267 in. Moving this distance away from the axis of the moment of inertia (pivot point), assumed to be 6 inches, indicates that the optimal location for hitting the ball exists 28.267 inches from the knob, which is about 5.733 inches from the free end. This position falls within the sweet spot zone of 4-7 inches proposed by Cross.[5] After running multiple bats through the equation, we found the Brody Power Model to be relevant and consistent with empirical findings.
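This arithmetic can be replayed in a few lines (our own check; the 6 in pivot offset and the 34 in bat length are the paper's assumptions):

```python
import math

# Example II parameters.
M, m = 0.88451, 0.14529       # bat and ball mass (kg)
I = 0.20556                   # moment of inertia about the pivot (kg m^2)
v_i, omega_i = -42.0, 35.791  # pitch velocity (m/s), bat angular velocity (rad/s)

r = v_i / omega_i
b = r + math.sqrt(r ** 2 + I * (M + m) / (M * m))  # Equation 2, + root

from_knob_in = 6 + b / 0.0254    # pivot assumed 6 in from the knob
from_end_in = 34 - from_knob_in  # 34 in bat
print(f"b = {b:.5f} m -> {from_knob_in:.3f} in from the knob, "
      f"{from_end_in:.3f} in from the free end")
```

The computed values land on the figures quoted above to within a thousandth of an inch.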
286
+
287
+ # 6-d. Torque
288
+
289
+ The example above demonstrates how the Brody model can be used to explain why torque is not the primary factor at play during the swing. If the final velocity of the ball was based on torque, then the location of the sweet spot should have been as far away from the pivot point as possible. Torque is defined by the equation:
290
+
291
+ $$
292
+ \tau = F \cdot d \cdot \sin\theta
293
+ $$
294
+
295
+ where
296
+
297
+ - $\tau =$ Torque
298
+ - $F =$ Force applied by ball
299
+ - $d =$ Distance from pivot point to impact
300
+ - $\theta =$ Angle between the incoming direction of the ball and the bat
301
+
302
+ The way to maximize torque and energy, assuming all other variables are held constant, is to maximize the distance between the force and the fulcrum. The farthest distance possible on a 34 inch bat is 28 inches away from the fulcrum, making the assumption that the 6 in. pivot point is used as the fulcrum. Under normal circumstances, the optimal position for the ball to impact the bat is not 28 inches from the fulcrum; in this example, the optimal position was calculated to be 22 inches away.
303
+
304
+ Torque treats the entire bat as being uniform in composition and ignores principles of momentum. When a bat is swung, the moment of inertia influences where the mass of the bat seems to concentrate. This position is not the end of the bat, and so the end of the bat becomes less massive in relation to another point on the bat. Taking principles of momentum into account, the ball will experience greater force if it is hit closer to the fulcrum. Additionally, when balls hit near the end of the bat, they set off vibrations which both feel unpleasant and waste energy.[5]
305
+
306
+ # 7. Discussion
307
+
308
+ The Brody Power Model can be augmented to investigate issues involving corked bats and also to examine the differences between aluminum and wooden bats.
309
+
310
+ # 7-a. "Corking"
311
+
312
+ The first recorded instance of corking in Major League Baseball (MLB) was September 7, 1974, by Graig Nettles of the New York Yankees.[12] Corking a bat is generally a straightforward process. If the thick portion of the barrel is hollowed with an approximately half-inch-diameter hole and filled with a less dense and more elastic material, such as cork or bouncy balls, the bat is rendered considerably lighter, which allows for a faster swing. One of the problems with hollowing out a bat is that its structural strength becomes compromised: corked bats have a nasty tendency to shatter upon contact with a fast pitch. Using a corked bat entails a risk for the player, as MLB has been known to suspend players for up to eight games for a single offense. A high-profile bat corking incident occurred in 2003 when Sammy Sosa of the Chicago Cubs broke his bat in the first inning of a game versus the Tampa Bay Devil Rays, showering the third baseline with shards of wood and pieces of cork.[12] Sosa claimed that he only used the corked bat to "put on a show" for fans by hitting home runs during batting practice, while others have attempted to use corked bats to intentionally increase performance during games. The question for the player, then, is: "Is it worth it?" After
313
+
314
+ augmenting our model, we find that the difference in ball speed produced by a corked bat is negligible. Additional empirical findings back up this conclusion.
315
+
316
+ # 7-b. Cork Model Augmentation
317
+
318
+ When a bat is corked, several changes take place. First, the mass of the bat decreases at the free end because the material used to fill the barrel is less dense than the wood of the bat itself. This causes the weight of the entire bat to decrease, and also means that more of the weight is located closer to the hands. This translates into a lower moment of inertia. The combination of a lower moment of inertia and less weight in the bat results in a faster swing, given that the same amount of force is present.
319
+
320
+ In order to determine how the interplay of these factors influences the final velocity of the ball, we began with our previous assumptions about the properties of a wooden baseball bat and made reasonable changes to those values, based on research, to reflect corking:
321
+
322
+ - Wooden Bat Mass; $M_{w} = 0.88451\ \mathrm{kg}$
323
+ - Corked Bat Mass; $M_{c} = 0.82781\ \mathrm{kg}$
324
+ - Moment of Inertia, Wooden; $I_{w} = 0.20556\ \mathrm{kg\,m}^2$ about the point 6 in from the knob
325
+ - Moment of Inertia, Corked; $I_{c} = 0.19101\ \mathrm{kg\,m}^2$
326
+ - Mass of Ball; $m = 0.14529\ \mathrm{kg}$
327
+ - Length of Bat; $L = 34$ in
328
+ - Final Linear Velocity of Bat; $V_{f} = 0$
329
+ - Initial Velocity of Ball; $v_{i} = -42\ \mathrm{m/s}$
330
+ - Angular Velocity, Wooden; $\omega_{wi} = 35.791$ rad/s
331
+ - Angular Velocity, Corked; $\omega_{ci} = 36.5$ rad/s
332
+ - Movement towards the outfield is oriented positively
333
+
334
+ The first step in making this determination requires us to discover how the differences in mass, moment, and speed affect the location of optimal ball exit speed. Using Brody's Power Model, the value for the normal wooden bat remains $b_{w} = 0.56557$ , while the value of the corked bat is $b_{c} = 0.53847$ . Once we have the value of $b$ , it is possible to move a step backwards in the derivation of the Brody Power Model equation. Recalling that
335
+
336
+ $$
337
+ v_{f} = v_{i} - \left[ \frac{(1 + e)(v_{i} - b\omega_{i})}{1 + (m / M) + (mb^{2} / I)} \right]
338
+ $$
339
+
340
+ and knowing that
341
+
342
+ $$
343
+ e = \frac{v_{f} - V_{f}}{V_{i} - v_{i}}
344
+ $$
345
+
346
+ We assume that $V_{f} = 0$, so substituting this expression for the coefficient of restitution ($e$) into Equation 1 yields Equation 3.
347
+
348
+ $$
349
+ v_{f} = v_{i} - \left[ \frac{\left(1 + \frac{v_{f}}{V_{i} - v_{i}}\right)(v_{i} - b\omega_{i})}{1 + (m / M) + (mb^{2} / I)} \right] \tag{Equation 3}
350
+ $$
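Since Equation 3 is linear in $v_{f}$, it can also be solved in closed form. Writing $A = (v_{i} - b\omega_{i}) / \left[1 + (m/M) + (mb^{2}/I)\right]$ for the correction term (an algebraic aside of ours, not part of the original paper), the rearrangement is:

$$
v_{f} = v_{i} - A\left(1 + \frac{v_{f}}{V_{i} - v_{i}}\right) \quad \Longrightarrow \quad v_{f}\left(1 + \frac{A}{V_{i} - v_{i}}\right) = v_{i} - A \quad \Longrightarrow \quad v_{f} = \frac{(v_{i} - A)(V_{i} - v_{i})}{V_{i} - v_{i} + A}
$$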
351
+
352
+ Now the only remaining unknown is $v_{f}$. Once we have the equation for $v_{f}$ set up, we simply have to use a calculating utility to substitute all of the known values into the relationship, solving for $v_{f}$. For the normal wooden bat we find a final ball speed of $53.502 \, \mathrm{m/s}$, while the corked bat had a ball speed of $53.810 \, \mathrm{m/s}$. The nature of our assumptions precludes the results from being completely accurate under real world conditions.
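The two optimal impact locations themselves are easy to recompute from Equation 2 (our own check; with the parameters exactly as listed, the corked-bat value comes out near $0.543\ \mathrm{m}$ rather than the quoted $0.53847\ \mathrm{m}$, a gap we attribute to rounding in the published inputs):

```python
import math

def sweet_spot_b(v_i, omega_i, I, M, m):
    """Equation 2 (+ root): optimal impact distance from the pivot, in meters."""
    r = v_i / omega_i
    return r + math.sqrt(r * r + I * (M + m) / (M * m))

m_ball = 0.14529
b_wood = sweet_spot_b(v_i=-42.0, omega_i=35.791, I=0.20556, M=0.88451, m=m_ball)
b_cork = sweet_spot_b(v_i=-42.0, omega_i=36.5, I=0.19101, M=0.82781, m=m_ball)

# Corking moves the optimal impact point slightly toward the hands.
print(f"b_wood = {b_wood:.5f} m, b_cork = {b_cork:.5f} m")
```

Either way, the qualitative conclusion is unchanged: the corked bat's optimal impact point sits closer to the pivot than the wooden bat's.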
353
+
354
+ The $0.31\ \mathrm{m/s}$ difference between the exit speeds can be explained by restrictions on the model. The bat is treated as a solid mass that undergoes a perfectly elastic collision with the ball. However, when corking a baseball bat, a certain amount of energy is lost to the material encased inside the wood. This energy loss is not included within the model, and so more energy is attributed to the rebounding ball than would normally be seen.
355
+
356
+ In any case, the results of the comparison seem to indicate that, contrary to the prevailing view, corking a bat does not have a significant effect upon the exit speed of the ball. Based on these conclusions, Sammy Sosa may actually have been decreasing his ability to hit home runs by corking the bat he used for his pregame power displays.
357
+
358
+ Examining the phenomenon of bat corking purely from a sweet spot performance point of view does not adequately explain why Major League Baseball prohibits its use. If corking provides no discernible increase in final ball velocity, and may actually decrease the velocity of batted balls, then there appears to be no reason to prohibit it.[13] This is because our model focuses only on the velocity of the batted ball, and leaves out other potential factors. The league's problem with bat corking likely exists for a different reason.[13] As described above, corking the bat lowers the mass of the bat slightly, and also shifts the weight closer to the hitter's hands. This allows the bat to be swung more easily and with a higher velocity. If a batter is concerned about contact, and not necessarily about launching balls over the centerfield wall, then it is conceivable that being able to swing a bat more easily would increase the ability to put the ball in play. The decreased time required to swing gives the experienced batter more time to decide whether or not to swing, and to adjust for unexpected ball trajectory. This is considered an unfair advantage in the eyes of Major League Baseball.
359
+
360
+ # 7-c. Aluminum vs. Wood
361
+
362
+ Professional baseball has been around since the late 1800s, with Major League Baseball originating in 1903 through the combination of the American and National Leagues and the creation of the World Series.[11] Throughout this period, wooden bats have been the only legal hitting instrument. Aluminum bats are a relatively new invention, having existed for roughly the past 40 years.[10] The prohibition of metal bats has a number of underlying causes.
363
+
364
+ # 7-d. Aluminum Model Augmentation
365
+
366
+ The difference between aluminum and wooden bats is modeled under the Brody Power Model following the same logic as our investigation of corking. We begin with our assumptions of the specific parameters for an aluminum bat:
367
+
368
+ - Mass of bat; $\mathrm{M_a} = 0.89018\,\mathrm{kg}$
369
+ - Moment of inertia of bat; $I_{a} = 0.17055 \, \mathrm{kg} \, \mathrm{m}^{2}$
370
+ - Angular velocity of bat; $\omega_{\mathrm{ai}} = 38$ rad/s
371
+
372
+ Substituting these values into Equation 2 yields $b_{\mathrm{a}} = 0.50317$ . Turning next to Equation 3, used to find the final velocity in the corked-bat augmentation, we substituted the above parameters and obtained a final speed of $\nu_{f} = 54.258 \mathrm{~m/s}$ . This can be contrasted with the final ball speed off a regular wooden bat, calculated at $\mathrm{v_f} = 53.502 \mathrm{~m/s}$ in the original Brody example. The difference between the wooden bat and the aluminum bat exit speeds in this example is about 0.756 m/s. In English units, this is equivalent to 2.5 ft/s (1.7 MPH), enough extra speed to be more dangerous to a pitcher standing in the path of an oncoming ball.
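As a sketch, this substitution can be reproduced numerically. The form of the $b$ formula below is read off the combined expression derived in the sensitivity analysis; the ball mass and pitch speed are our assumed baseline values (5.125 oz, 93 mph), so the output is expected only to land near, not exactly on, the quoted $b_{\mathrm{a}} = 0.50317$.

```python
import math

def optimal_b(v_i, omega_i, M, I, m):
    """Optimal impact distance b (m). Form read off the combined
    expression in the sensitivity analysis; the magnitude of the
    negative root is the physically meaningful distance."""
    r = v_i / omega_i
    return abs(r - math.sqrt(r**2 + I * (M + m) / (M * m)))

# Aluminum-bat parameters from the text; ball mass and pitch speed
# are our assumed baseline values.
m_ball = 5.125 * 0.0283495       # 5.125 oz in kg
v_pitch = 93 * 0.44704           # 93 mph in m/s
b_alum = optimal_b(v_pitch, 38, 0.89018, 0.17055, m_ball)
print(round(b_alum, 3))          # lands near the quoted 0.50317 m
```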
373
+
374
+ It is worth briefly noting why the aluminum bat appears to be more effective than the corked bat in improving ball speed. The difference stems from the corking process: when the wooden bat was corked, it lost mass in the free end and therefore lowered its moment of inertia. The aluminum bat is able to maintain the same weight as the unaltered wooden bat while also lowering its moment of inertia. For both bats, the lower moment of inertia increases the angular speed at which the bat can be swung. While the aluminum bat is indeed heavier than the corked bat, it has a significantly lower moment of inertia. This allows the aluminum bat to swing more mass at a higher speed, which in turn contributes more force to the collision, increasing the velocity of the batted ball.
375
+
376
+ While the Brody Power Model indicates that the aluminum bat has elevated performance at its position of maximum power, it does not accurately depict the degree to which this performance increases. This is due to the fact that, while the model is sensitive to the masses of the objects involved in the collision, the speeds at which they are traveling, and the moment of inertia of the bat, it does not take into account a unique behavior exhibited by aluminum bats known as the trampoline effect.[14]
377
+
378
+ The difference in behavior stems from the fact that the metal bat is malleable and hollow, with a relatively thin shell, while the wooden bat is solid. Together, these properties make it possible for the metal bat to deform when struck by
379
+
380
+ ![](images/ea2af0249e5faf0905a9ded4489d03b377292a14f8c2d1b7482d010a960bc445.jpg)
381
+ Figure 3
382
+
383
+ a baseball that impinges upon it. The resulting deformation takes the pattern of a quadrupole, meaning that it includes two sets of dipolar moments (Figure 3).[14]
384
+
385
+ When the ball strikes the bat and causes this deformation, a transfer of energy takes place. Kinetic energy from the ball is stored in the bat as elastic potential energy when the bat morphs into the elongated oval shown in the second frame of Figure 3. As the bat then moves to its next dipole, that stored energy is transferred back to the ball as kinetic energy. This extra push can return up to $20\%$ of the energy contained by the ball prior to the collision.[15] Combined with the energy delivered to the ball from the momentum of the bat, the ball is able to attain significantly higher speeds than conservation of momentum alone predicts. The Brody Power Model only accounts for the momentum portion of this dynamic. As a result, it is likely that the aluminum bat would produce an even higher ball exit velocity than is modeled here.
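As a toy illustration (our own extension, not part of the Brody model), the quoted 20% energy return can be folded into an effective coefficient of restitution, using the simple-impact relation that $e^2$ equals the fraction of collision energy returned; the baseline wooden COR of 0.5 here is an assumed value.

```python
import math

e_wood = 0.50                       # assumed baseline COR for wood
# Trampoline effect: up to an extra 20% of the pre-collision energy is
# returned, raising the effective COR (toy model: e^2 ~ energy fraction).
e_alum = math.sqrt(e_wood**2 + 0.20)
print(round(e_alum, 3))             # noticeably above the wooden 0.50
```

Even this crude bookkeeping shows why the Brody Power Model, which holds the restitution behavior fixed, understates the aluminum bat's exit speed.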
386
+
387
+ The data produced by our model, in addition to the information on the trampoline effect, sheds light on the reasoning of Major League Baseball in prohibiting the use of the aluminum bat. We have identified two primary concerns:
388
+
389
+ - **Safety:** As discussed above, the aluminum bat imparts a higher exit velocity to the ball as it leaves the bat. Wielded in the hands of today's top athletes, this could pose an even greater risk to pitchers and corner infielders than already exists with wooden bats.
390
+ - **Tradition:** Greater exit velocity for batted balls would impact the tradition and legacy of the game. A ball hit with an aluminum bat would be more likely to travel further than a ball hit with a wooden bat under the same conditions. This could lead to an inflation of home runs and extra base hits. The dimensions of Major League ballparks would likely be rendered inadequate, and the cherished achievements of Hall of Famers like Babe Ruth and Joe DiMaggio would be overshadowed by modern displays of technology. The damage to the integrity of the game surrounding the modern steroid scandal would likely pale in comparison to a change to aluminum bats.
391
+
392
+ # 8. Sensitivity Analysis
393
+
394
+ An examination of our processes and assumptions through a sensitivity analysis will allow us to gauge how well we have met the criteria of resiliency and applicability.
395
+
396
+ # Is our data valid and realistic?
397
+
398
+ The first step which must be taken in the analysis of our models is to determine the validity of the assumptions we used.
399
+
400
+ # Masses and Respective Moments of Inertia of Bats:
401
+
402
+ We referenced a table for the wood and aluminum bats, which included relevant specifications for our equations.[9] Since mass and moment of inertia are interconnected, this ensured that combinations of the two variables reflect true properties of the different bats. In the case of the corked bat, we researched sources which identified the reduction of mass by 2 oz when a bat is corked.[19] We then made reasonable assumptions about the resulting change in the moment of inertia.
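A minimal sketch of that assumption step treats the drilled-out 2 oz as a point mass removed at some distance from the pivot; both the wooden-bat inertia and the 0.6 m distance used here are hypothetical values chosen only for illustration.

```python
OZ_TO_KG = 0.0283495

def corked_inertia(I_wood, d_removed=0.6, dm_oz=2.0):
    """Point-mass correction: removing dm at distance d from the pivot
    lowers the moment of inertia by roughly dm * d**2.  The 2 oz figure
    is from [19]; d_removed is an assumed value."""
    return I_wood - dm_oz * OZ_TO_KG * d_removed**2

I_wood = 0.21                     # kg m^2, assumed wooden-bat value
I_cork = corked_inertia(I_wood)
print(round(I_cork, 4))           # lower than the wooden bat's inertia
```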
403
+
404
+ # Mass and Velocity of Ball:
405
+
406
+ MLB has very strict guidelines on the mass of the ball: it must be between 5 and 5.25 oz.[18] We therefore used an average ball mass of 5.125 oz. The velocity of the ball was determined by finding the average speed of a fastball, which we estimated at around 93 mph. The fastball is the best choice because a ball at this speed has the most kinetic energy and therefore the most potential for outgoing velocity. In addition, this pitch type comes closest to a straight-line path, so we used it as a consistent baseline for our models.
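In the SI units used throughout our calculations, these assumed ball parameters convert as follows:

```python
OZ_TO_KG = 0.0283495    # avoirdupois ounce to kilogram
MPH_TO_MS = 0.44704     # miles per hour to metres per second

ball_mass = (5.0 + 5.25) / 2 * OZ_TO_KG   # midpoint of the MLB range
pitch_speed = 93 * MPH_TO_MS              # average fastball estimate
print(round(ball_mass, 4), round(pitch_speed, 2))   # 0.1453 41.57
```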
407
+
408
+ # Velocity of Bat:
409
+
410
+ Much like the mass and moment of inertia, we found empirical data stating that the average bat swing speed is approximately $30\mathrm{m / s}$ . We set this as the baseline speed at which a wooden bat is swung. When the moment of inertia dropped in the change from wood to aluminum, we estimated a significant increase in achievable swing speed of roughly 4 MPH.[20] Since the corked bat had an intermediate moment of inertia, leaning towards that of the wooden bat, we placed the velocity of the corked bat between those of the aluminum and wooden bats, slightly closer to the wooden bat's.
411
+
412
+ With the reasoning behind our assumptions in mind, we can now examine the sensitivity of the model to each variable. The only variables not included in this section are the mass of the ball and the location of optimal ball exit speed. The ball mass is not included because the range of possible masses is so narrow that for our purposes it does not vary. The value for the distance is not included because distance is completely dependent on the interplay of the other values.
413
+
414
+ First, we decided to determine under what conditions our model would be useful. Using the specifications of the wooden bat as a baseline, we plotted a three-dimensional graph of the
415
+
416
+ location on a bat that would be produced for a wide range of initial ball velocities and angular bat speeds. We then created a second plot which used the same model but added two constraints:
417
+
418
+ - The ball speed must be between 67 and 97 MPH, covering the vast majority of pitch velocities.
419
+ - The value for $b$ must be between 21 and 24 inches, which is the generally accepted location of the range of sweet spots.
420
+
421
+ The graph is shown below:
422
+
423
+ ![](images/0e30e73f71f79881085c3e860429540ac09315c69edbddbf5d940a512902e5f3.jpg)
424
+
425
+ Graph B shows where on the wooden bat the optimal location for final ball velocity lies, based on the initial velocities of the ball and bat. However, since it is unrealistic to believe that the ball should optimally be hit just above the hands or on the tapered part of the bat, we set up the 4-7 inch constraint. The constraint on initial ball speed was set by considering that only a select group of pitchers throw faster than 97 MPH, and pitchers rarely throw slower than 67 MPH. The darker area of Graph B represents the combinations for which the model is applicable in reality.
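The feasibility scan behind Graph B can be sketched as follows. The $b$ formula is the form we read off the combined equation in this section; the wooden-bat mass, inertia, and swing-speed range are assumed values, so the exact shape of the region will differ from Graph B.

```python
import math

def optimal_b_inches(v_mph, omega, M, I, m=0.1453):
    """Optimal contact distance (inches) for pitch speed v_mph (MPH)
    and bat angular speed omega (rad/s); M, I, m in SI units."""
    v = v_mph * 0.44704
    r = v / omega
    b = abs(r - math.sqrt(r**2 + I * (M + m) / (M * m)))
    return b / 0.0254

# Keep only combinations whose optimal b falls in the 21-24 inch band.
feasible = [(v, w)
            for v in range(67, 98)        # 67-97 MPH pitch constraint
            for w in range(25, 46)        # assumed swing-speed range
            if 21 <= optimal_b_inches(v, w, M=0.905, I=0.21) <= 24]
print(len(feasible))                      # size of the darker region
```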
426
+
427
+ In order to test the sensitivity of our model to changes in its variables we first substituted Equation 2 into Equation 3 for the variable $b$ and found:
428
+
429
+ $$
430
+ v_{f} = v_{i} - \frac{\left(1 + \dfrac{v_{f}}{V_{i} - v_{i}}\right)\left(v_{i} - \left(\dfrac{v_{i}}{\omega_{i}} \pm \sqrt{\left(\dfrac{v_{i}}{\omega_{i}}\right)^{2} + I\,\dfrac{M + m}{M m}}\right)\omega_{i}\right)}{1 + \dfrac{m}{M} + \dfrac{m\left(\dfrac{v_{i}}{\omega_{i}} \pm \sqrt{\left(\dfrac{v_{i}}{\omega_{i}}\right)^{2} + I\,\dfrac{M + m}{M m}}\right)^{2}}{I}}
431
+ $$
432
+
433
+ Using this equation, we were able to define all of the variables that were used with the assumptions from our normal wooden bat model. We then replaced each of the values one at a time with a variable to see how the final velocity changed as that variable changed.
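A sketch of this one-at-a-time procedure, using a standard effective-mass collision formula rather than the exact combined equation above; all numerical values are assumed baselines, so only the direction of each change, not its magnitude, should be read from the output.

```python
def exit_speed(v_ball, omega, b, M, I, m=0.1453, e=0.5):
    """1-D rigid-bat collision: ball (speed v_ball toward the bat)
    meets the bat point moving at b*omega, with restitution e.
    Effective-mass form, not the paper's exact combined equation."""
    M_eff = 1.0 / (1.0 / M + b * b / I)   # bat's effective mass at b
    V = b * omega                          # bat speed at impact point
    return (M_eff * (1 + e) * V + (M_eff * e - m) * v_ball) / (M_eff + m)

# Baseline (assumed wooden-bat values); vary one input at a time.
base = dict(v_ball=41.57, omega=35, b=0.55, M=0.905, I=0.21)
v0 = exit_speed(**base)
v_heavier = exit_speed(**{**base, "M": 1.0})     # heavier bat
v_faster = exit_speed(**{**base, "omega": 38})   # faster swing
print(v_heavier > v0, v_faster > v0)             # True True
```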
434
+
435
+ # Mass of Bat
436
+
437
+ When the final velocity was tested against the mass of the bat, the following graph was produced:
438
+
439
+ ![](images/3ad4913ebf7f6e27444b8e4985209cd0f4c7365a922f5c696c7570f48a736356.jpg)
440
+
441
+ As the mass of the bat increases, so does the final speed of the ball. This is fairly easy to explain in terms of inertia: if a bat with a larger mass is swung at the same velocity, it will produce a greater final ball velocity upon contact. The graph may be misleading, however, in that the final speed appears not very sensitive to the mass of the bat. This holds only when the mass changes independently of the other variables, which is rarely the case.
442
+
443
+ # Initial Velocity of Bat
444
+
445
+ When the final velocity was tested against the initial velocity of the bat, the following graph was produced:
446
+
447
+ ![](images/6a3329708a65145a882ac69b2b5da4596e2601fbbb8f556513c59ce7799a4be3.jpg)
448
+
449
+ As the initial bat velocity increases, the final ball speed increases as well, as long as all other variables remain constant. This graph indicates a high sensitivity of the model to the initial velocity of the bat. This is most easily explained by conservation of momentum: since the bat is many times more massive than the ball, a change within the range of expected bat speeds has a strong impact on the final velocity of the less massive ball. In practice, bat speed is usually not increased in isolation from the bat's weight or moment of inertia, so the final ball speed may be even more sensitive to a changing bat speed than this isolated test suggests.
450
+
451
+ # Moment of Inertia
452
+
453
+ When the final velocity was tested against the moment of inertia of the bat, the following graph was produced:
454
+
455
+ ![](images/ffda0074909dfb46f03f1c3629bd4e6a150490cc7d1b6665b313e8de621cf12c.jpg)
456
+
457
+ This plot indicates a weak sensitivity of the model to changes in the moment of inertia; however, the determination is not as simple as the plot suggests. Lowering the moment of inertia shifts the distribution of the weight closer to the hitter's hands. As this happens, the hitter is able to swing the bat at a higher velocity, which we determined to have a strong positive effect on final velocity. Since the relation of bat speed to final ball speed is stronger than that of the moment of inertia, the slight increase in speed caused by increasing the moment of inertia would be masked by the resulting decrease in swing speed. Because we isolate only one variable at a time, we are unable to determine which variable is more important for increasing final ball speed.
458
+
459
+ # Initial Velocity of Ball
460
+
461
+ When the final velocity was tested against the initial velocity of the ball, the following graph was produced:
462
+
463
+ ![](images/fc187828cf3f329828ad314e79b0a6b79345a7eaa196723921c2b4a3685a2d9f.jpg)
464
+
465
+ It appears that the final velocity of the ball is highly sensitive to its initial velocity. The concept represented here is fairly intuitive: as velocity of the pitch increases, the speed of the batted ball increases as well.
466
+
467
+ # 9. Conclusion
468
+
469
+ Before producing our model, we laid out a set of four primary objectives which we intended to use to guide and bracket our investigation, presented in section two:
470
+
471
+ - Answering the Problem
472
+ - Simplicity and Clarity
473
+ - Applicability
474
+ - Resiliency
475
+
476
+ With the completion of our sensitivity analysis a number of conclusions can now be drawn about the model as a whole, in light of these original objectives. While the COP has traditionally been identified as a marker of the sweet spot of a baseball bat, it proved inadequate for the purposes of this problem. Example I shows that the COP is capable of predicting the sweet spot in terms of "feel," but is unable to document the sweet spot in terms of maximum performance. Instead, we were forced to draw on the Brody Power Model in order to optimize this variable. The Brody model is capable of making this optimization on account of its construction from conservation principles and the coefficient of restitution. Despite their differences, both the COP and the Brody model do provide an explanation for the empirical observation that the sweet spot is located on the middle portion of the bat barrel and not the extreme end as torque would predict.
477
+
478
+ We then augmented our assumptions in the Brody model in order to test for the effects of corked and aluminum bats. Example II demonstrates that the Brody model is capable of estimating the location of the sweet spot. For our baseline wooden bat, this was 5.733 inches from the free end of the barrel. When the assumptions are varied to predict the outcome of using a corked bat, the model yields a 0.31 m/s increase in ball exit speed. This constitutes a negligible enhancement of the sweet spot effect, and is subject to factors that might further mitigate the increase. It appears that increased performance, at least in terms of ball exit velocity, does not merit the prohibition of corked bats instituted by Major League Baseball. Our discussion identified other possible explanations for this prohibition.
479
+
480
+ Finally, we varied our assumptions to model the effects of an aluminum bat in comparison to the original wooden bat. This model predicts a $0.76 \, \text{m/s}$ increase in ball exit speed, over twice the corked-bat increase. While this $145\%$ relative improvement can be viewed as more significant, it is still smaller than empirical tests have shown. It is evident, then, that there are other factors at play for aluminum bats which the model does not take into account, for instance the trampoline effect. Our results do support the conclusion that the materials used in construction matter, and they offer support to Major League Baseball's metal bat prohibition for reasons of both safety and tradition.
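The $145\%$ figure can be checked directly from the two exit-speed gains quoted above:

```python
gain_cork = 0.31     # m/s gain from corking (quoted above)
gain_alum = 0.76     # m/s gain from aluminum (quoted above)
rel = (gain_alum - gain_cork) / gain_cork * 100
print(round(rel))    # about 145 (percent)
```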
481
+
482
+ We feel our model adequately achieves our goals of simplicity and clarity. Referencing the guiding intent presented in section 2, our model and explanations would be understandable to a baseball audience while still providing sufficient technical detail to present solid justification for our conclusions.
483
+
484
+ In terms of applicability and resiliency, our sensitivity analysis has given us some valuable feedback about our assumptions and our model as a whole. When isolating each variable, we found that the model was relatively insensitive to changes to the mass or moment of inertia. However, there was a much stronger sensitivity of the final velocity of the ball to the speed of the bat. Since changes to the mass affect moment of inertia, and both in turn affect the speed at which the bat can be swung, small changes in a structural parameter could be amplified in angular velocity. Because changes in each variable were evaluated separately, however, we are unable to tell which variables carry a stronger correlation with changes in batted ball performance. Although the variables cannot be isolated, the model is able to take a given set of circumstances and predict the relative performances of two bats.
485
+
486
+ # Positive aspects of our model include:
487
+
488
+ - Mass, weight distribution, and relative momenta are taken into account.
489
+ - The comparative results seem to reflect fairly accurate representations of whether bats are improved or harmed by changes to their specifications.
490
+
491
+ # Limitations to our model include:
492
+
493
+ - Because many of the parameters are estimated, our model contains a significant possibility for error.
494
+ - It does not take vibration, angles, or lateral movement into account.
495
+
496
+ # Additional research could improve the functionality of our model:
497
+
498
+ - Currently the model only measures relative changes in ball performance between bats, and it cannot produce an accurate measure of what the final ball velocity will be in reality. A more complete model which takes into account final velocity of the bat would be difficult to design but more useful.
499
+ - The model could be expanded to take into account other factors which have been identified as interfering with the amount of energy which a ball departs with, for instance the harmonic vibrations experienced upon impact.
500
+ - Laboratory research which produced empirical data for use specifically as our parameters would provide us with more accurate inputs, instead of relying on our assumptions.
501
+ - Laboratory research would help to confirm the results of our model's predictions to provide feedback in improving the model.
502
+
503
+ # 10. References
504
+
505
+ # References
506
+
507
+ [1] AP. "Unsplendid Splinter." Sports Illustrated. Time Warner Inc., 4 June 2003. Web. 19 Feb. 2010.
508
+ [2] Brody, H. “The Sweet Spot of a Baseball Bat”. American Journal of Physics. Volume 54, Issue 7: pp. 640-643. Issue Date: July 1986
509
+ [3] Russell, Daniel A. "What is the COP and Does it Matter?" Physics and Acoustics of Baseball and Softball Bats. Kettering University, 16 June 2005. Web. 19 Feb. 2010. <http://paws.kettering.edu/~drussell/bats-new/cop.html>.
510
+ [4] Russell, Daniel A. "What (and where) is the Sweet Spot of a Baseball/Softball Bat?" Physics and Acoustics of Baseball and Softball Bats. Kettering University, 2004. Web. 19 Feb. 2010. <http://paws.kettering.edu/~drussell/bats-new/sweetspot.html>.
511
+ [5] Cross, Rod. "The Sweet Spot of a Baseball Bat." American Journal of Physics. Volume 66, Issue 9 (1998): 772-79. University of Sydney, 2006. Web. 19 Feb. 2010. <http://www.physics.usyd.edu.au/~cross/PUBLICATIONS/3.%20BatSweetSpot.pdf>.
512
+ [6] Cross, Rod. "Center of Percussion of Handheld Instruments." American Journal of Physics 72.5 (2004): 622-30. University of Sydney, 2004. Web. 19 Feb. 2010. <http://www.physics.usyd.edu.au/~cross/PUBLICATIONS/26.%20COPHandHeld.PDF>.
513
+ [7] Stronge, William J., Mont Hubbard, and Gregory S. Sawicki. "How to Hit Home Runs: Optimum Baseball Bat Swing Parameters." American Journal of Physics. Volume 71, Issue 11 (2003): 1152-162. 2004. Web. 19 Feb. 2010. <http://webusers.npl.illinois.edu/~anathan/pob/AJP-Nov03.pdf>.
514
+ [8] "BESR Certification." JustBats.com. 2000-2010. Web. 19 Feb. 2010. <http://www.justbats.com/bat.care.aspx?c=5>.
515
+ [9] Russell, Daniel A. "Swing Weights of Baseball and Softball Bats." Kettering University, 2009. Web. 19 Feb. 2010. <http://paws.kettering.edu/~drussell/Publications/TPT-SwingWeight.pdf>.
516
+ [10] "Baseball Bat History - A Full Overview of the History of Baseball Bats." *Baseball Bats - the ultimate baseball & softball bat resource!* Baseball-Bats.net, 2008. Web. 20 Feb. 2010. <http://www.baseball-bats.net/baseball-bats/baseball-bat-history/index.html>.
517
+ [11] Flading, John. "History of Baseball." Web log post. *SwingAway Instructional Blog*. SwingAway Sports Products, 10 June 2009. Web. 20 Feb. 2010. <http://www.swingawayblog.com/2009/06/10/history-of-baseball/>.
518
+ [12] Muskat, Carrie. "Sosa Gets Eight Games, Appeals." MLB.com. Major League Baseball, 6 June 2003. Web. 20 Feb. 2010. <http://mlb.mlb.com/news/article.jsp?ymd=20030606&content_id=358688&vkey=news_mlb&fext=.jsp&c_id=null>.
519
+
520
+ [13] Russell, Daniel A. "What about corked bats?" Physics and Acoustics of Baseball and Softball Bats. Kettering University, 7 October 2004. Web. 20 Feb. 2010. <http://paws.kettering.edu/~drussell/bats-new/cop.html>.
521
+ [14] Russell, Daniel A. "Vibrational Modes of a Baseball Bat." Physics and Acoustics of Baseball and Softball Bats. Kettering University, 2003. Web. 20 Feb. 2010. <http://paws.kettering.edu/~drussell/bats-new/cop.html>.
522
523
+ [15] Nathan, Alan. "Baseball: It's Not Nuclear Physics (or is it?!)." Physics of Baseball. University of Illinois, 5 Feb. 1999. Web. 20 Feb. 2010. <http://74.125.95.132/search?q=cache:aRntmFp-iYJ:webusers.npl.illinois.edu/~anathan/pob/ppt/parkland.ppt+Alan+Nathan+Powerpoint+(parkland.ppt)&cd=1&hl=en&ct=clnk&gl=us>.
524
+ [16] Nathan, Alan M., Daniel A. Russell, and Lloyd V. Smith. "The Physics of the Trampoline Effect in Baseball and Softball Bats." The Physics of Baseball. University of Illinois. Web. 20 Feb. 2010. <http://webusers.npl.illinois.edu/~a-nathan/pob/trampoline-v6.pdf>.
525
+ [17] Bahill, Terry. "Systems Engineering." University of Arizona, 2009. Web. 20 Feb. 2010. <http://www.sie.arizona.edu/sysengr/slides/baseballBat.ppt>.
526
+ [18] "Official Rules | MLB.com: Official info." The Official Site of Major League Baseball / MLB.com: Homepage. Major League Baseball, 2009. Web. 22 Feb. 2010. <http://mlb.mlb.com/mlb/official_info/official_rules/objectives_1.jsp>.
527
+ [19] Nathan, Alan M., Daniel A. Russell, and Lloyd V. Smith. "Some Remarks on Corked Bats." The Physics of Baseball. University of Illinois. Web. 20 Feb. 2010.
528
+ [20] Russell, Daniel A. "Why Aluminum Bats Can Perform Better than Wood Bats." Physics and Acoustics of Baseball and Softball Bats. Kettering University, 18 October 2006. Web. 19 Feb. 2010. <http://paws.kettering.edu/~drussell/bats-new/alumwood.html>.
MCM/2010/B/2010-MCM-B-Com/2010-MCM-B-Com.md ADDED
@@ -0,0 +1,53 @@
1
+ # Judges' Commentary: The Outstanding Geographic Profiling Papers
2
+
3
+ Marie Vanisko
4
+
5
+ Dept. of Mathematics, Engineering, and Computer Science
6
+
7
+ Carroll College
8
+
9
+ Helena, MT 59625
10
+
11
+ mvanisko@carroll.edu
12
+
13
+ # Introduction
14
+
15
+ The stated problem this year dealt with the issue of geographical profiling in the investigation of serial criminals. International interest in this topic has led to numerous publications, many of which present mathematical models for analyzing the problems involved. Although it was entirely appropriate and expected that teams working on this problem would review the literature on the subject and learn from their review, teams that simply presented published schemes as their mathematical models fell far short of what was expected. The judges looked for sparks of creativity and carefully explained mathematical model building with sensitivity analysis that went beyond what is found in the literature. This factor is what added value to a paper.
16
+
17
+ # Documentation and Graphs
18
+
19
+ We observed a noticeable improvement in how references were identified and in the precision with which they were documented within the papers. Considering the numerous online resources available, proper documentation was an especially important factor in this year's problem.
20
+
21
+ Despite the improvement, many papers contained charts and graphs from Web sources with no documentation. All graphs and tables need labels and/or legends, and they should provide information about what is referred to in the paper. The best papers used graphs to help clarify their results and documented trustworthy resources whenever used.
22
+
23
+ # Assumptions
24
+
25
+ In many cases, teams made tacit assumptions about the criminals being considered but did not state or justify critical mathematical assumptions that were later used implicitly. Assumptions concerning probability distributions, anchor points, distances, units, mathematical procedures, and how to measure results were generally not discussed or justified.
26
+
27
+ Since this is a modeling contest, a lot of weight is put on whether or not the model could be used, with modification, in the real world. Also, clear writing and exposition is essential to motivate and explain assumptions and to derive and test models based on those assumptions.
28
+
29
+ # Summary
30
+
31
+ The summary is of critical importance, especially in early judging. It should motivate the reader and be polished with a good synopsis of key results. For this problem, teams were asked to add to their one-page summary (which can have some technical details) also a two-page executive summary appropriate for the Chief of Police. Many teams seemed to assume that the Chief of Police would have impressive mathematical credentials.
32
+
33
+ # The Problem and Its Analysis
34
+
35
+ Teams were asked to develop at least two different schemes for generating geographical profiles and then to develop a technique for combining the results of the different schemes in such a way as to generate a useful prediction for law enforcement officers. Although the papers designated as Meritorious generally developed interesting schemes, very few papers did an adequate job of testing their results and doing sensitivity analysis.
36
+
37
+ Most papers dealt with issues associated with the serial criminal's home base, usually referred to as the anchor point, and the buffer zone around that point within which the criminal is unlikely to commit crimes. Locations were identified using latitude and longitude and sometimes a time factor. Weights were frequently assigned to data points, sometimes taking more recent crimes into account more heavily and sometimes incorporating qualitative factors into the scheme. Teams used various metrics in describing "distances" between the anchor point and crime locations. Papers that rose to the top used well-defined metrics that were clearly explained. One cannot measure the reliability or validity of a model without clearly defined metrics.
38
+
39
+ Many teams mentioned that there was not a lot of data with which they could validate their model, although they did find some specific location information that included from 13 to 20 crimes in a given series. Some teams used as their only example the Sutcliffe case cited in the problem. In almost all cases, teams
40
+
41
+ used their model to predict the location of the final crime based on all of the previous locations for that criminal. They could easily have had many more data points with which to validate their models. For example, if 13 crime locations were available, they could have used the first $n$ locations to predict the location of crime $n + 1$ , for each $n = 7, \ldots, 12$ . The judges agreed that this problem did not lend itself to validation by simulation, as many other problems do.
42
+
43
+ In describing the reliability of predicted results for proposed models, it was sometimes difficult to determine precisely how teams had arrived at their results. Since the literature is full of models and even computer implementations, it would have been worthwhile for teams to solve a problem via one of these methods and use it as a baseline against which to compare the results of the original models they proposed. Not a single team did this to the judges' satisfaction. Judges do not generally look for computer code, but they definitely look for precise algorithms that produce results based on a given model.
44
+
45
+ # Concluding Remarks
46
+
47
+ Mathematical modeling is an art. It is an art that requires considerable skill and practice in order to develop proficiency. The big problems that we face now and in the future will be solved in large part by those with the talent, the insight, and the will to model these real-world problems and continuously refine those models. Surely the issue of solving crimes involving serial killers is an important challenge that we face.
48
+
49
+ The judges are very proud of all participants in this Mathematical Contest in Modeling and we commend you for your hard work and dedication.
50
+
51
+ # About the Author
52
+
53
+ Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she taught for more than 30 years. She was also a Visiting Professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. In both California and Montana, she directed MAA Tensor Foundation grants on mathematical modeling for high school girls. She also directs a mathematical modeling project for Montana high school and college mathematics and science teachers through the Montana Learning Center at Canyon Ferry, where she chairs the Board of Directors. She has served as a judge for both the MCM and HiMCM.
MCM/2010/B/2010-MCM-B-Com2/2010-MCM-B-Com2.md ADDED
@@ -0,0 +1,83 @@
# Judges' Commentary: The Fusaro Award for the Geographic Profiling Problem

Marie Vanisko

Dept. of Mathematics, Engineering, and Computer Science

Carroll College

Helena, MT 59625

mvanisko@carroll.edu

Peter Anspach

National Security Agency

Ft. Meade, MD

anspach@aol.com

# Introduction

MCM Founding Director Fusaro attributes the competition's popularity in part to the challenge of working on practical problems. "Students generally like a challenge and probably are attracted by the opportunity, for perhaps the first time in their mathematical lives, to work as a team on a realistic applied problem," he says. The most important aspect of the MCM is the impact that it has on its participants and, as Fusaro puts it, "the confidence that this experience engenders."

The Ben Fusaro Award for the 2010 Geographic Profiling Problem went to a team from Duke University in Durham, NC. This solution paper was among the top Meritorious papers that this year received the designation of Finalist. It exemplified some outstanding characteristics:

- It presented a high-quality application of the complete modeling process.
- It demonstrated noteworthy originality and creativity in the modeling effort to solve the problem as given.
- It was well written, in a clear expository style, making it a pleasure to read.

# Criminology and Geographic Profiling

Each team was asked to develop a method to aid in the investigation of serial criminals. The team was to develop an approach that makes use of at least two different schemes and then combine those schemes to generate a geographic profile that would be a useful prediction for law enforcement officers. The prediction was to provide some kind of estimate or guidance about possible locations of the next crime, based on the times and locations of the past crimes. In addition to the required one-page summary, teams had to write a two-page less-technical executive summary for the Chief of Police.

In doing Web searches on this topic, teams found many publications and many proposed models. While it was important to review the literature, to receive an Outstanding or Meritorious designation, teams had to address all the issues raised and come up with a solution that demonstrated their own creativity.

# The Duke University Paper

# Abstract (One-Page Summary)

The Duke team did an excellent job with their abstract. In one page, it motivated the reader and conveyed a good sense of what the team had accomplished. It gave an overview of everything from the assumptions, to the modeling and how it was done, to the testing of their models, and finally, to the analysis of the accuracy of their results and the limitations of their models. It was well written and a fine example of what an abstract should be.

# Executive Summary (for the Police Chief)

The executive summary, too, was well written and gave an overview of the team's approach, acknowledging the limitations of their models. However, it was too vague about exactly what information would need to be collected and how to go about implementing the proposed models. Because the executive summary is a critical part of the requirements, this was part of what kept the Duke paper from being designated as Outstanding.

# Assumptions

The team began with a brief survey of previous research on geographic profiling and used the information that they had gathered to decide what assumptions seemed appropriate. As a result, their list of assumptions was well founded. The team exemplified one of the most important aspects of mathematical modeling by demonstrating precisely how their assumptions were used in the development of their models and how the assumptions enabled them to determine parameters in their models.

# The Models

The team's first model involved a geographic method that took into account not only the locations of crimes but also population densities, crime rates, and selected psychological characteristics. They used a bivariate Gaussian probability function and numerous parameters associated with previous crime locations. They did a very good job of showing how their assumptions and previous crime scenes led to the computation of these parameters, and of how these parameters were then used to estimate the probability function in their model.

The team's second model involved a risk-intensity method; it made use of the geographic method but extended it to make different projections, with different probabilities associated with each of those projections.

# Testing the Models

The Duke team was among the top papers not only because of their well-based models, but because they tested those models: not with just one serial crime case, but with many cases. Their parameters allowed them to consider crimes other than murder, and they were able to examine how well their models performed in several real-life situations. By analyzing their results, they were able to comment on the sensitivity and robustness of their models. Very few papers did this, and it is a very important step in the modeling process.

# Recognizing Limitations of the Model

Recognizing the limitations of a model is an important last step in the completion of the modeling process. The team recognized that their models would fail if their assumptions did not hold, for example, if the criminal did not have a predictable pattern of movement.

# References and Bibliography

The list of references consulted and used was sufficient, but specific documentation of where those references were used appeared only for a few at the start of the paper. Precise documentation of references used is important throughout a paper.

# Conclusion

The careful exposition in the development of the mathematical models made this paper one that the judges felt was worthy of the Finalist designation. The team members are to be congratulated on their analysis, their clarity, and their use of the mathematics that they knew to create and justify their own model for the problem.

# About the Authors

Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she taught for more than 30 years. She was also a Visiting Professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. In both California and Montana, she directed MAA Tensor Foundation grants on mathematical modeling for high school girls. She also directs a mathematical modeling project for Montana high school and college mathematics and science teachers through the Montana Learning Center at Canyon Ferry, where she chairs the Board of Directors. She has served as a judge for both the MCM and HiMCM.

Peter Anspach was born and raised in the Chicago area. He graduated from Amherst College, then went on to get a Ph.D. in Mathematics from the University of Chicago. After a post-doc at the University of Oklahoma, he joined the National Security Agency to work as a mathematician.
MCM/2010/B/7273/7273.md ADDED
# Tracking Serial Criminals with a Road Metric

Control #7273

February 22, 2010

# Abstract

In this paper, we present a computerized model to predict future crime locations and probable residences for a serial criminal based on the locations and times of a sequence of past crimes. We first create a "road metric" in order to measure distances by automobile travel time. To predict future crime locations, we apply the nonparametric statistical technique of kernel density estimation with our road metric, which allows us to estimate a time-dependent probability density function. To predict the residences of serial criminals, we use a refinement of a model developed by Rossmo, again adapted to the road metric. This method develops a probability distribution for where a criminal might live by balancing the convenience of previous attack locations against the observation that greater distances from home afford more opportunities for crime and a lesser chance of being caught. We apply our model to several high-profile serial murder cases, namely those of Peter Sutcliffe, the "Yorkshire Ripper," and Wayne Williams, the "Atlanta Child Murderer." In both cases, our model was successful in predicting the region of the criminal, and might prove useful in a criminal investigation.

# Contents

- 1 Introduction
  - 1.1 Outline of Our Approach
  - 1.2 Assumptions
- 2 Mapping Crime and the Road Metric
  - 2.1 Road Metric
- 3 Estimating and Extrapolating a Probability Density Function
  - 3.1 Kernel Density Estimation
  - 3.2 Extrapolating Future Probability Density Functions
- 4 Best Fit Circle and Rossmo's Model
  - 4.1 Centrography
  - 4.2 Best Fit Circle
    - 4.2.1 A First Attempt
    - 4.2.2 A More Refined Best Fit Circle
  - 4.3 Application to Rossmo's Model
- 5 Case Studies
  - 5.1 Yorkshire Ripper
    - 5.1.1 Map and Metric
    - 5.1.2 Probability Density Estimate
    - 5.1.3 Rossmo's Model
    - 5.1.4 Assessment of Results
  - 5.2 Atlanta Child Murders
    - 5.2.1 Metric and Map
    - 5.2.2 Probability Density Estimate
    - 5.2.3 Application to Rossmo's Method
    - 5.2.4 Assessment of Results
- 6 Improving the Model
  - 6.1 Expanding the Road Metric
  - 6.2 Profiling Potential Victims
  - 6.3 Improving Computational Efficiency
  - 6.4 Testing to Choose Optimal Parameters
- 7 Conclusion
- 8 Executive Summary
- References
# 1 Introduction

In this paper we present a computerized model for studying serial crime patterns. Our model takes highway systems into account as a significant determinant of how criminals behave. Specifically, we created a road metric to give a measure of the time it takes for a driver to travel between two points on a map. We incorporate this road metric into several different models to estimate where future crimes may occur and where the perpetrator may live. Our models give law enforcement agencies an estimate of where they could patrol with the highest likelihood of apprehending the criminal, as well as where they should conduct their investigation into the residence of the criminal.

# 1.1 Outline of Our Approach

The beginning of our paper is devoted to presenting the theoretical framework and an outline of our computer implementation. The later sections apply our model to notable criminal cases and assess its accuracy. For each criminal investigation we do roughly the following:

- Develop the "road metric" to measure distances between points based on how long it takes to drive between them. This metric is based primarily on the local highway system.
- Estimate and extrapolate probability density functions for where criminals are likely to commit future crimes.
- Estimate the offender's residence using a best fit circle under the road metric and by applying a modification of current models.

# 1.2 Assumptions

Due to the huge variation in criminal activity, as well as the highly unusual and unpredictable psychological pathologies that high-profile serial killers often have, using a relatively simple computer model to predict serial crime faces several hurdles. Below are the assumptions we make about the crimes to which our model is applied:

- Crimes are committed by individuals. We assume the series of crimes has been attributed with a high degree of certainty to a single individual. We did not design our model to analyze organized crime, gang, or mob activity.
- Criminals travel mostly by automobile. Our model makes significant use of highway patterns, judging distances by how long it takes to travel them by automobile.
- Crimes are committed near to where they are discovered by police. In murder cases, this means that body dump sites are murder sites. This is not an unreasonable assumption, as disorganized serial killers often leave their victims near the murder site [9]. Note that in serial crimes such as serial rape, burglary, or arson, there is no distinction between the site of the crime and the site found by police.
- Crimes occur within a small region, i.e., roughly within a city or county. This eliminates the case of crimes occurring across multiple states or countries. In those cases, our model should be applied to each cluster of crimes.

# 2 Mapping Crime and the Road Metric

We used the mathematical software Sage to create a map of the area surrounding serial criminal incidents. The main features we included in the maps are the major highways, as well as the locations of criminal activities. We treat the highway system as a graph, considering each highway entrance and exit as a vertex, and each road section between a pair of entrances and exits as an edge. Each edge is given a weight corresponding to the Euclidean length between its vertices.

# 2.1 Road Metric

For each map $M$, we computed a road metric $d: M \times M \to \mathbb{R}^+$. The road metric measures distances within a region based on the approximate total travel time by car. It assumes that drivers will usually take the shortest path between two points, utilizing highways as much as possible instead of traveling entirely via side streets. Specifically, we assume that

- time spent on side streets is proportional to the Manhattan metric, since side streets are often organized into a grid, and
- time spent on the freeway is proportional to the Euclidean distance.

Given points $\mathbf{a}$ and $\mathbf{b}$ on the map, we compute the road metric as follows:

1. For each vertex $\mathbf{v}$ in our highway system (the vertices correspond to entrances and exits of the freeway), we compute the Manhattan distances $M(\mathbf{v},\mathbf{a})$ and $M(\mathbf{v},\mathbf{b})$.
2. For each pair of vertices $\mathbf{v}_1, \mathbf{v}_2$, we apply Dijkstra's algorithm to find the shortest path (taking into consideration the lengths of the edge segments) on the graph of our highway system from $\mathbf{v}_1$ to $\mathbf{v}_2$. We call this distance $E(\mathbf{v}_1, \mathbf{v}_2)$.
3. Our road metric is then defined as

$$
d(\mathbf{a}, \mathbf{b}) = \min \left\{ \min_{\mathbf{v}_1, \mathbf{v}_2} \left\{ M(\mathbf{a}, \mathbf{v}_1) + E(\mathbf{v}_1, \mathbf{v}_2) + M(\mathbf{v}_2, \mathbf{b}) \right\},\; M(\mathbf{a}, \mathbf{b}) \right\},
$$

in other words, the minimum of the fastest route that uses the highway and the route that avoids the highway entirely.

In terms of actually computing the road metric, we divided the maps into $m \times n$ grids and then computed and stored the distance from each grid space to every other grid space.
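The road metric computation above can be sketched in Python. The highway graph, vertex coordinates, and edge weights below are invented for illustration; Dijkstra's algorithm is implemented directly with a binary heap, and for simplicity highway distances are recomputed per query rather than precomputed over the $m \times n$ grid.

```python
import heapq

def manhattan(a, b):
    # Side-street travel: streets form a grid, so use the Manhattan metric.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def highway_distances(graph, src):
    # Dijkstra's algorithm over the highway graph: vertex -> [(neighbor, edge_length)].
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def road_metric(a, b, vertices, graph):
    # Best route avoiding the highway entirely ...
    best = manhattan(a, b)
    # ... versus side streets to entrance v1, highway v1 -> v2, side streets to b.
    for v1, p1 in vertices.items():
        E = highway_distances(graph, v1)
        for v2, p2 in vertices.items():
            best = min(best, manhattan(a, p1) + E[v2] + manhattan(p2, b))
    return best

# Toy highway: one diagonal segment whose Euclidean length (10) beats the
# Manhattan detour (14) between its endpoints.
vertices = {"A": (0, 0), "B": (6, 8)}
graph = {"A": [("B", 10.0)], "B": [("A", 10.0)]}
print(road_metric((0, 0), (6, 8), vertices, graph))  # → 10.0
```

For short trips near one another (where the Manhattan route already wins), the highway terms are simply never the minimum, so the same function covers both cases.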
# 3 Estimating and Extrapolating a Probability Density Function

# 3.1 Kernel Density Estimation

Given the locations and times of past criminal incidents, we created a first estimate of the probability density for future crimes as follows. In the Euclidean case, given a set of sample points $\{\mathbf{x}_1,\dots ,\mathbf{x}_N\}$ generated by a random variable, one often estimates the probability density function as

$$
\phi(\mathbf{x}) = \frac{1}{N} \sum_{i = 1}^{N} K_A(\mathbf{x} - \mathbf{x}_i)
$$

where

$$
K_A(\mathbf{x} - \mathbf{x}_i) = \frac{1}{2\pi |A|^{1/2}} e^{-(\mathbf{x} - \mathbf{x}_i)^t A^{-1} (\mathbf{x} - \mathbf{x}_i)}
$$

and $A$ is a covariance matrix; this amounts to adding small normal distributions around each point to generate an estimated probability density function [7], [10], [2]. We observe that since covariance matrices are positive definite, the quadratic form $\mathbf{x}^t A\mathbf{x}$ induces a norm on $\mathbb{R}^n$ defined by

$$
\| \mathbf{x} \| = \sqrt{\mathbf{x}^t A \mathbf{x}},
$$

which in turn induces a metric. This naturally suggests an application of our road metric: we replace the Gaussian function $K$ above with the modified Gaussian function $G$ defined by

$$
G(\mathbf{x}, \mathbf{x}_i, t) = \frac{1}{M_i(t)} \operatorname{Exp}\left(-\frac{d(\mathbf{x}, \mathbf{x}_i)^2}{(h(t - t_i))^2}\right),
$$

where $d$ is the road metric defined above, $h$ indirectly controls the covariance of the kernel, and $M_i$ is the normalizing constant defined by

$$
M_i(t) = \int \operatorname{Exp}\left(-\frac{d(\mathbf{x}, \mathbf{x}_i)^2}{(h(t - t_i))^2}\right) d\mathbf{x}.
$$

We treat $h$ as a function of time since it is reasonable to assume that more recent crimes are more useful in predicting future crimes than earlier ones, and hence to let the modified Gaussian functions "diffuse" over time by making $h$ an increasing function. We empirically determined that setting $h(t) = 2.5\arctan(2t + 0.2)$, where $t$ is measured in weeks, was a good fit. Our estimate for the probability density function thus becomes

$$
\phi(\mathbf{x}, t) = \frac{1}{N} \sum_{i = 1}^{N} G(\mathbf{x}, \mathbf{x}_i, t). \tag{1}
$$

In terms of actually computing $\phi(\mathbf{x}, t)$, as with the road metric, we computed $\phi$ for each grid space in our partitioned $m \times n$ grid using the precomputed values of the road metric.
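A minimal sketch of this kernel estimate on a discretized grid, with plain Euclidean distance standing in for the precomputed road metric and the normalizing constant $M_i(t)$ replaced by a discrete sum over grid cells; the crime data here are invented for illustration.

```python
import math

def h(t):
    # The paper's empirically chosen bandwidth, t measured in weeks.
    return 2.5 * math.atan(2 * t + 0.2)

def kernel_density(grid, crimes, t, d):
    # Sum one normalized kernel per past crime (xi, ti); each kernel is
    # normalized over the grid so the total estimate sums to 1.
    phi = {x: 0.0 for x in grid}
    for xi, ti in crimes:
        bw = h(t - ti)
        raw = {x: math.exp(-d(x, xi) ** 2 / bw ** 2) for x in grid}
        M = sum(raw.values())  # discrete stand-in for M_i(t)
        for x in grid:
            phi[x] += raw[x] / M
    return {x: v / len(crimes) for x, v in phi.items()}

# Toy run: 10x10 grid, two crimes at weeks 0 and 3, evaluated at week 5.
grid = [(i, j) for i in range(10) for j in range(10)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
phi = kernel_density(grid, [((2, 2), 0.0), ((7, 7), 3.0)], t=5.0, d=dist)
print(abs(sum(phi.values()) - 1.0) < 1e-9)  # → True
```

Because $h$ is increasing, the kernel around an older crime is wider and flatter, so recent crimes dominate the resulting hotspots.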
# 3.2 Extrapolating Future Probability Density Functions

The discussion above gives a reasonable idea of where crimes have been committed, but it is not necessarily reasonable to expect a criminal to distribute his or her crimes according to a single probability distribution for all time. We therefore perform a weighted least squares approximation based on trends in the probability distribution functions discussed above. The idea is to make a linear approximation to the probability density functions and predict a future probability density function outside of our data set. Our method is as follows. Suppose $\mathbf{x}_1, \ldots, \mathbf{x}_n$ are the locations of crimes which occurred at times $t_1 < t_2 < \dots < t_n$, and that we want a probability density function at some time $t^* > t_n$.

- In our equation for $\phi(\mathbf{x}, t)$ (Equation 1), we sum over only the crimes which have occurred before time $t$.
- So that our linear approximation is not biased by the initial spikes from the kernel functions (individual crimes initially being point masses), we let our density functions "diffuse" as much as possible. Using the probability density estimate in Equation 1, we compute the probability density functions

$$
\phi(\mathbf{x}, t_2 - \epsilon), \phi(\mathbf{x}, t_3 - \epsilon), \dots, \phi(\mathbf{x}, t_n - \epsilon)
$$

for some small $\epsilon$, as well as

$$
\phi(\mathbf{x}, t_M)
$$

for some $t_M$ between $t_n$ and $t^*$.

- We then use a weighted least squares approximation to estimate $\phi(\mathbf{x}, t^*)$. To do this, we consider $\phi(\mathbf{x}, t)$ pointwise over our $m \times n$ map grid and perform a weighted least squares fit. We modify the well-known normal equations for a standard least squares problem, $X^t X \hat{\beta} = X^t y$, as described in [1], to get the modified normal equations

$$
X^t W X \hat{\beta} = X^t W y
$$

where $W$ is a weight matrix. We chose to weight the probability density functions $\phi(\mathbf{x}, t_i)$ linearly in time, so that later crimes are weighted much more than earlier crimes.
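Since each grid cell is fit independently, the weighted least squares step reduces to a two-parameter linear fit per cell. The sketch below solves the $2 \times 2$ normal equations $X^t W X \hat{\beta} = X^t W y$ (diagonal $W$) in closed form; the snapshot times, density values, and weights are invented for illustration.

```python
def weighted_linear_extrapolate(times, values, weights, t_star):
    # Fit y ≈ b0 + b1*t by weighted least squares and evaluate at t_star.
    # With design matrix X = [1, t] and diagonal W, the normal equations
    # X^t W X beta = X^t W y reduce to a 2x2 linear system.
    s_w = sum(weights)
    s_wt = sum(w * t for w, t in zip(weights, times))
    s_wtt = sum(w * t * t for w, t in zip(weights, times))
    s_wy = sum(w * y for w, y in zip(weights, values))
    s_wty = sum(w * t * y for w, t, y in zip(weights, times, values))
    det = s_w * s_wtt - s_wt ** 2
    b0 = (s_wtt * s_wy - s_wt * s_wty) / det
    b1 = (s_w * s_wty - s_wt * s_wy) / det
    return b0 + b1 * t_star

# One grid cell's density at four past snapshots; weights grow linearly in
# time so later densities dominate the fit.
times = [1.0, 2.0, 3.0, 4.0]
values = [0.10, 0.12, 0.14, 0.16]
weights = times
print(round(weighted_linear_extrapolate(times, values, weights, 5.0), 4))  # → 0.18
```

Running this over every cell and renormalizing the resulting surface yields the extrapolated density at $t^*$.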
# 4 Best Fit Circle and Rossmo's Model

# 4.1 Centrography

One common method for estimating the residence of a serial criminal is to treat the location of each crime as a point mass and find the centroid of the point masses by means of a spatial average. According to [9], centrography is one of the most common search methods for criminal investigations and has been used to examine serial rape cases in San Diego, as well as the case of the Yorkshire Ripper. According to [4], there is a significant amount of evidence showing that serial rapists often live close to the centroid of their offense locations.

# 4.2 Best Fit Circle

A reasonable extension of using a centroid to estimate the location of a criminal's residence is to fit a circle to the locations of criminal activity. This is based on the assumption that criminals will typically try to avoid committing crimes very near to where they live, but at the same time will expend roughly the same amount of effort on each of their crimes. Since we would expect the effort a criminal puts into a crime to be comparable to the total amount of time spent traveling, we apply the road metric discussed earlier.

# 4.2.1 A First Attempt

The most naive approach is to fit a circle to the data by minimizing the squared distance from the crime locations to a circle $C_r(\mathbf{x})$ of radius $r$ centered at $\mathbf{x}$, i.e., to minimize

$$
\sum_{i} \left(d(C_r(\mathbf{x}), \mathbf{x}_i)\right)^2 = \sum_{i} \left| d(\mathbf{x}, \mathbf{x}_i) - r \right|^2,
$$

where the sum is taken over all crime locations $\mathbf{x}_i$. Unfortunately this method is very unstable. Consider the example in Figure 1 of the best fit circle for three points. Moving the middle point slightly produces a tremendous change in the best fit circle's radius and center.

![](images/00adbda949210d072f77d478cb85d7593eee82fbace23e21d483621578a69312.jpg)
Figure 1: The unstable behavior of a naive best fit circle.

# 4.2.2 A More Refined Best Fit Circle

A better generalization starts from the observation that in Euclidean coordinates, given a set of points $\mathbf{x}_1, \ldots, \mathbf{x}_n$, the point $\mathbf{x}$ minimizing

$$
\sum_{i = 1}^{n} \left\| \mathbf{x} - \mathbf{x}_i \right\|^2
$$

is the centroid of the points.

This provides a natural generalization to the road metric, which is perhaps a better extension of the often-used method of centrography. Namely, we define the center of our best fit circle to be the point $\mathbf{x}$ minimizing

$$
\sum_{i = 1}^{n} \left(d(\mathbf{x}, \mathbf{x}_i)\right)^2.
$$

We define the radius $r$ to be the average distance from $\mathbf{x}$ to the crime locations, namely

$$
r = \frac{1}{n} \sum_{i = 1}^{n} d(\mathbf{x}, \mathbf{x}_i).
$$

In Figure 2, we show an example of a circle in the road metric induced by a highway system. The particular highway system is from the Sutcliffe murder case, which is discussed in further detail later in the paper.
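The generalized centroid and radius can be sketched as a brute-force search over the grid. The crime locations below are invented, and Euclidean distance stands in for the road metric (the distance function `d` would simply be swapped out).

```python
import math

def best_fit_circle(grid, crimes, d):
    # Center: the grid point minimizing the sum of squared distances to the
    # crime sites (the generalized centroid under the metric d).
    center = min(grid, key=lambda x: sum(d(x, c) ** 2 for c in crimes))
    # Radius: the average distance from the center to the crime sites.
    r = sum(d(center, c) for c in crimes) / len(crimes)
    return center, r

# Toy example: four crimes symmetric about (5, 5), each 3 units away.
grid = [(i, j) for i in range(11) for j in range(11)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
crimes = [(2, 5), (8, 5), (5, 2), (5, 8)]
center, r = best_fit_circle(grid, crimes, dist)
print(center, round(r, 2))  # → (5, 5) 3.0
```

Under the Euclidean stand-in this recovers the ordinary centroid; under the road metric the "circle" deforms along the highway system, as in Figure 2.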
# 4.3 Application to Rossmo's Model

In [9], Rossmo presents a model for estimating the residence of a criminal based on the locations of their offenses. The model makes use of the idea of a buffer zone: an area surrounding a criminal's residence in which the criminal avoids committing crimes. The idea behind the buffer zone is that criminals balance the energy needed to travel long distances from their residence to commit crimes against the increased risk of committing crimes near where they live. There is a significant amount of research supporting Rossmo's model, as well as the idea of a buffer zone, both in terms of criminal patterns and animal hunting patterns [5], [8].

![](images/a6abb42775ba769daf2b6952cbdde57df172b7337aec341079c7690db3e4b727.jpg)
Figure 2: A circle in the road metric induced by the graph for the Manchester-Leeds area of England. The highway system graph is black and the circle is grey.

Rossmo's model is based on subdividing a map of the general location of the crimes into a grid and then computing the estimated probability of the perpetrator residing in a particular grid space as

$$
p_{jk} = K \sum_{i = 1}^{N} \left( \frac{\phi}{\| \mathbf{x}_{jk} - \mathbf{x}_i \|^f} + \frac{(1 - \phi) B^{g - f}}{(2B - \| \mathbf{x}_{jk} - \mathbf{x}_i \|)^g} \right)
$$

where

$$
\phi = \left\{ \begin{array}{ll} 1 & \text{if } \| \mathbf{x}_{jk} - \mathbf{x}_i \| > B \\ 0 & \text{if } \| \mathbf{x}_{jk} - \mathbf{x}_i \| \leq B, \end{array} \right.
$$

$f, g$ are constants, $B$ is the radius of the buffer zone, and $K$ is a constant used to normalize the entire probability distribution; the first term thus applies outside the buffer zone and the second inside it. Empirically, Rossmo found that for criminal cases the optimal values were $f = g = 1.2$. In our model we adapted Rossmo's formula to use the road metric; since $f = g$ makes $B^{g - f} = 1$, the version used in our simulations becomes

$$
p_{jk} = K \sum_{i = 1}^{N} \left( \frac{\phi}{d(\mathbf{x}_{jk}, \mathbf{x}_i)^{1.2}} + \frac{1 - \phi}{(2B - d(\mathbf{x}_{jk}, \mathbf{x}_i))^{1.2}} \right).
$$

To estimate the radius of the buffer zone, we used our best fit circle to approximate a radius under our metric, and then used half of this distance as the estimated buffer zone radius.
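This scoring scheme can be sketched as follows, with the indicator arranged so that cells inside the buffer zone are scored by the second term (suppressing probability near crime sites) and Euclidean distance standing in for the road metric; the crime sites and buffer radius below are invented.

```python
import math

def rossmo(grid, crimes, B, d, f=1.2, g=1.2):
    # Score each grid cell as a candidate residence; the normalizing constant
    # K is recovered at the end by dividing by the total over the grid.
    p = {}
    for x in grid:
        s = 0.0
        for c in crimes:
            r = d(x, c)
            if r > B:
                s += 1.0 / r ** f                      # outside buffer: distance decay
            else:
                s += B ** (g - f) / (2 * B - r) ** g   # inside buffer: rises toward its edge
        p[x] = s
    K = sum(p.values())
    return {x: v / K for x, v in p.items()}

# Toy run: four crime sites and a buffer radius of 3 grid units.
grid = [(i, j) for i in range(10) for j in range(10)]
dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
crimes = [(1, 1), (8, 1), (1, 8), (8, 8)]
p = rossmo(grid, crimes, B=3.0, d=dist)
print(p[(4, 1)] > p[(1, 1)])  # a buffer-edge cell outranks the crime site itself → True
```

The buffer zone is visible in the output: cells at distance roughly $B$ from several crimes score highest, while the crime cells themselves are suppressed.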
260
+
261
+ # 5 Case Studies
262
+
263
+ In this section we apply the models discussed earlier to two notable cases, namely the Yorkshire Ripper (Peter Sutcliffe) and the Atlanta Child Murderer (Wayne Williams).
264
+
265
+ # 5.1 Yorkshire Ripper
266
+
267
+ The Yorkshire Ripper murders were a series of murders which occurred between 1975 and 1981 in which 13 women were murdered and 7 attacked in the Manchester and Leeds area of England. Most of the women killed were prostitutes. Peter Sutcliffe was convicted of the murders and attacks. According to [3], the bodies did not appear to have been moved after the murders. We treated attacks and murders identically in our analysis.
268
+
269
+ # 5.1.1 Map and Metric
270
+
271
+ Using a map of the Manchester and Leeds area, we represented the highways as a graph, making sure to concentrate the exits and entrances (the nodes of the graph) around population centers. Figure 3 shows a color plot of the distance under the road metric from what was later determined to be Sutcliffe's residence.
272
+
273
+ ![](images/7b16dda657692224a8a2ae4d17b083b216455926bb9df451029b49b16a339385.jpg)
274
+ Figure 3: This shows the distance from Sutcliffe's house under the road metric with an overlay of our graph representation of the nearby highway system. Sutcliffe's house is at the house symbol. Red corresponds to a very short distance and dark blue corresponds to a long distance.
275
+
276
+ # 5.1.2 Probability Density Estimate
277
+
278
+ There were a total of 20 criminal incidents committed by Sutcliffe. We ran our density distribution model on the first 19 cases in order to predict the location of the 20th. We used the locations of the first 19 attacks and the time of the 20th in order to predict the location of the 20th. If one were to use our model in real life, the time of the next crime would not actually be known, but this is not of great importance since our choice of the variance function in our kernel causes the model to stabilize very quickly. The results are shown in Figure 4.
279
+
280
+ ![](images/e7ae6b1e04f01ce40ef4eb438a126347e35d4674339d9095d962243440834e81.jpg)
281
+ Figure 4: This shows the estimated probability distribution for the 20th attack knowing the first 19. The dots represent the attacks and the 'x' denotes the actual location of the 20th attack. Red indicates the highest probability and purple the lowest.
282
+
283
+ # 5.1.3 Rossmo's Model
284
+
285
+ As discussed earlier, Rossmo's model is designed to predict the location of the offender's residence. We modified Rossmo's model to make use of our road metric. As was discussed earlier, we first computed a best fit circle for the crimes using a generalized centroid, which is shown in Figure 5. We then used half of this value in the equation for Rossmo's model to estimate the location of the residence of the murderer. A colorized plot is shown in Figure 6 and a plot of the estimated function as a surface is shown in Figure 7.
286
+
287
+ # 5.1.4 Assessment of Results
288
+
289
+ Our kernel method produced a strong hotspot around where the 20th murder actually took place. From our extrapolated probability distribution, the 20th attack was 8.58 times more likely to occur in the cell containing the actual murder site than it was to occur in an average square. More importantly, the
290
+
291
+ ![](images/2d9c2e91c94c8b85406c268ebb5f8c8bb0cefa1451eb9d7ca9bbd6db9fc45469.jpg)
292
+ Figure 5: The best fit circle of the attack locations under the road metric. The circle is shown in black, the graph is shown in grey, and the kill locations are shown in red.
293
+
294
+ ![](images/a19b7c7b719961ab7033d188502bcdf062eeb5419d99826f4048a994930dbf58.jpg)
295
+ Figure 6: The probability density estimated by Rossmo's model for the perpetrator's residence. Red indicates the highest probability. The actual location of Sutcliffe's house is shown with the house symbol.
296
+
297
+ result of our model would direct law enforcement officers to begin their search in precisely the square where the murder occurred.
298
+
299
+ Although Rossmo's model did not generate a hotspot directly on Sutcliffe's residence, it still provided a good starting point for a police investigation. Sutcliffe's house was on a second or third priority band and would be located reasonably quickly in a search effort.
300
+
301
+ ![](images/549a5f08111b83935390f953e5f053f553778f80949b3f5182d0b2a8bdf43653.jpg)
302
+ Figure 7: Rossmo's probability estimate as a surface.
303
+
304
+ # 5.2 Atlanta Child Murders
305
+
306
+ The Atlanta child murders refers to the murders of 29 boys and men between 1979 and 1981. Only 22 of these murders were conclusively linked to Wayne Williams, so we will use only these data points in our models. There is evidence that the bodies were found not far from where they were murdered [6]. Williams was tried and convicted of two of the murders in 1982.
307
+
308
+ # 5.2.1 Metric and Map
309
+
310
+ We used a map in [6] of the highway system surrounding the murder locations. As described earlier in the paper, we implement this as a graph and compute the road metric for this case. The distance from Williams' house is shown in Figure 8.
311
+
312
+ # 5.2.2 Probability Density Estimate
313
+
314
+ We ran our model using the kernel probability function on 21 of the data points in order to predict the location of the 22nd murder. The results are shown in figure 9.
315
+
316
+ # 5.2.3 Application to Rossmo's Method
317
+
318
+ Following the description of our model above, we first computed the best fit circle for the kills in the Atlanta data set. This is shown in Figure 10.
319
+
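One common way to compute such a best-fit circle is an algebraic least-squares (Kåsa) fit, which solves a linear system instead of a nonlinear optimization; the synthetic points below stand in for the Atlanta data:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: finds (a, b, c) minimising the
    algebraic residual of x^2 + y^2 = 2*a*x + 2*b*y + c over all sites."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([a, b])
    radius = np.sqrt(c + a ** 2 + b ** 2)   # since c = r^2 - a^2 - b^2
    return centre, radius

# Synthetic check: points on a known circle should be recovered exactly.
theta = np.linspace(0.0, 2 * np.pi, 22, endpoint=False)
pts = np.column_stack([3.0 + 2.0 * np.cos(theta), 4.0 + 2.0 * np.sin(theta)])
centre, radius = fit_circle(pts)
```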
320
+ ![](images/a6f59c4bc3d4bc10c1c2a3a0a83b757006099686cea41210f7bfbb2f0ff17eb0.jpg)
321
+ Figure 8: The distance from Wayne Williams' residence under the road metric.
322
+
323
+ ![](images/0098d384c6cae4d83b54d3c2ad8138a849be9ff4f8882906e2a0b908392b9173.jpg)
324
+ Figure 9: This shows our computed probability distribution for the 22nd attack knowing the first 21. The actual location of the 22nd attack is shown as an 'x'.
325
+
326
+ We then applied a modification of Rossmo's method to estimate a probability distribution of where the criminal lives. A colorized illustration of the results is shown in Figure 11, and a representation of the results as a surface is shown in Figure 12.
327
+
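Rossmo's criminal geographic targeting score sums a distance-decay term and a buffer-zone term over all crime sites. The sketch below uses Manhattan distance and illustrative values of $B$, $f$, and $g$, not the paper's road metric or its calibrated parameters:

```python
import numpy as np

def rossmo_surface(crimes, grid_x, grid_y, B=1.0, f=1.2, g=1.2):
    """Rossmo-style residence score at each grid cell.

    B : buffer-zone radius; f, g : decay exponents (illustrative values).
    Outside the buffer the score decays with distance; inside it, the
    score rises away from the crime, modelling the offender's reluctance
    to strike too close to home.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    score = np.zeros_like(gx)
    for cx, cy in crimes:
        d = np.abs(gx - cx) + np.abs(gy - cy)        # Manhattan distance
        d = np.maximum(d, 1e-9)                      # avoid division by zero
        decay = 1.0 / d ** f                         # distance-decay term
        denom = np.maximum(2 * B - d, 1e-9)          # safe buffer denominator
        buffer_term = B ** (g - f) / denom ** g      # buffer-zone term
        score += np.where(d > B, decay, buffer_term)
    return score / score.sum()

# Hypothetical crime sites and search grid.
crimes = np.array([[2.0, 3.0], [4.0, 3.5], [3.0, 5.0], [2.5, 4.0]])
grid = np.linspace(0.0, 6.0, 60)
surface = rossmo_surface(crimes, grid, grid)
```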
328
+ # 5.2.4 Assessment of Results
329
+
330
+ Our application of the kernel method for this case was not as successful as it was for the Sutcliffe case. It happened that the 22nd murder was in an unexpected position relative to the prior kills. The model was still able to predict a very nontrivial probability for the 22nd murder being in this region, and the hotspot
331
+
332
+ ![](images/ca79e5258da292ef023911f473479dff6f820bd5db91f49248e0e0b30bdac13a.jpg)
333
+ Figure 10: The best fit circle for the Atlanta child murders data.
334
+
335
+ ![](images/d4619f82bde9a42d8e7cf4c8c4248ec4317cee5ff6d3d7137d2cfc7b7a2fff88.jpg)
336
+ Figure 11: A colorized representation of the prediction made by Rossmo's model for the location of the criminal's residence with the locations of the crimes overlaid. Wayne Williams' residence is represented by the house symbol.
337
+
338
+ it generated is reasonably near this section. Surprisingly, the hotspot happens to be centered over Williams' residence.
339
+
340
+ Rossmo's model was also able to create a hotspot very near the murderer's residence. Given the result of the model as guidance, law-enforcement could very easily locate Williams.
341
+
342
+ ![](images/90094dc4df3b0eed2234536fce9d9df2c3ae5d67a2d0f6e102db633ef0ea09f0.jpg)
343
+ Figure 12: Rossmo's probability estimate as a surface.
344
+
345
+ # 6 Improving the Model
346
+
347
+ # 6.1 Expanding the Road Metric
348
+
349
+ One of the most novel aspects of our approach was the design of the road metric to accurately estimate the cost of travel to a criminal. We can improve this estimate by taking into account other geographical features on a map, such as lakes, rivers or terrains of varying elevation. The presence of irregularly-shaped bodies of water can potentially have a significant impact on the results of our computations.
350
+
351
+ # 6.2 Profiling Potential Victims
352
+
353
+ As we observed in the case of the Atlanta Child Murderer, the kernel method was unable to predict a relatively unexpected kill site (in terms of pure geography). We may be able to improve our model by skewing our probability distribution toward areas with high concentrations of potential victims. For instance, Peter Sutcliffe generally targeted prostitutes, so our model would benefit from increasing the probability of a region being a potential kill site if it has a high level of prostitution.
354
+
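One minimal way to implement this skewing is to blend the geographic prediction with a map of potential-victim density; the blending parameter `alpha` and the small 3x3 surfaces below are hypothetical illustrations:

```python
import numpy as np

def weight_by_victim_density(crime_surface, victim_density, alpha=0.5):
    """Skew a predicted crime surface toward areas rich in potential victims.

    alpha interpolates between the pure geographic prediction (alpha=0)
    and the pure victim-density map (alpha=1); the value is illustrative.
    """
    combined = crime_surface ** (1 - alpha) * victim_density ** alpha
    return combined / combined.sum()

# Hypothetical 3x3 grids: a geographic prediction and, e.g., a map of
# prostitution levels; cells with no potential victims are zeroed out.
crime = np.array([[0.10, 0.20, 0.10],
                  [0.10, 0.20, 0.10],
                  [0.05, 0.10, 0.05]])
victims = np.array([[0.0, 0.1, 0.3],
                    [0.0, 0.1, 0.3],
                    [0.0, 0.1, 0.1]])
skewed = weight_by_victim_density(crime, victims)
```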
355
+ # 6.3 Improving Computational Efficiency
356
+
357
+ Our algorithm saves time by overlaying a grid on the map of the region and saving the distance between every pair of cells. As each distance computation requires an application of Dijkstra's algorithm, the determination of all of these distances is quite slow. With a more efficient algorithm (or an implementation faster than what Sage has available) for measuring distances in the road metric, we can make our grids finer and improve the precision of our model.
358
+
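The precomputation described above amounts to one Dijkstra run per grid cell. A minimal sketch on a toy road graph (node names and edge weights are illustrative stand-ins for grid cells and road segments):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj maps node -> [(neighbour, weight)]."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def all_pairs(adj):
    """Road-metric distance table: one Dijkstra run per cell
    (the slow step this section proposes to speed up)."""
    return {s: dijkstra(adj, s) for s in adj}

# Toy road graph standing in for the grid of map cells.
adj = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0)],
    "C": [("A", 4.0), ("B", 2.0)],
}
table = all_pairs(adj)    # e.g. table["A"]["C"] == 3.0, routed via B
```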
359
+ # 6.4 Testing to Choose Optimal Parameters
360
+
361
+ Since we had a limited amount of time to perform computationally hard tests, our empirically-determined choice for the variance function $h$ in the kernel method may be far from optimal. Our model could improve substantially if we make a different choice for $h$ . Further, the values for $f$ and $g$ that we used in Rossmo's model were determined to be optimal for the Manhattan metric; there may be better choices for these parameters depending on the road metric in use.
362
+
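One standard way to choose the variance function $h$ empirically is leave-one-out cross-validation of the kernel density's log-likelihood. This sketch assumes a constant bandwidth and synthetic data, rather than the paper's spatially varying $h$:

```python
import numpy as np

def loo_log_likelihood(points, h):
    """Leave-one-out log-likelihood of a 2-D Gaussian KDE with bandwidth h."""
    total = 0.0
    for i in range(len(points)):
        others = np.delete(points, i, axis=0)
        d2 = np.sum((others - points[i]) ** 2, axis=1)
        # Density at point i estimated from all the other points.
        k = np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h)
        total += np.log(k.mean() + 1e-300)   # guard against log(0)
    return total

# Synthetic data standing in for a crime series.
rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 2))

# Pick the candidate bandwidth with the best held-out likelihood.
candidates = [0.1, 0.3, 0.5, 1.0, 2.0]
best_h = max(candidates, key=lambda h: loo_log_likelihood(pts, h))
```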
363
+ # 7 Conclusion
364
+
365
+ The problem of tracking a serial criminal given a limited amount of data has proven to be very nontrivial. The significant difference between our approaches highlights the creativity needed to tackle this problem. Both of our methods proved to be accurate means of locating the hotspots they were designed to find. The kernel method was particularly viable in the Sutcliffe case, most likely due to the tendency of attacks to cluster. However, it became less accurate as attacks spread out more uniformly. Rossmo's method had strong predictive power in both of our test cases. It consistently gave hotspots that would allow law-enforcement to easily locate the criminal. Synthesizing our models by informing police of all the hotspots we locate is an effective way of balancing accuracy with the size of search regions.
366
+
367
+ # 8 Executive Summary
368
+
369
+ In our paper we present a computerized model for predicting future crimes and estimating the location of a criminal based on the locations and times of past crimes. The model is based on geographic profiling and makes heavy use of the local highway systems near the crimes. For our model to be useful, it must be applied to cases where the locations of crimes have been determined. For instance, the predictive power of our model is greatly reduced if body dump sites
370
+
371
+ are used instead of kill sites. Secondly, our model is most useful when applied to relatively small areas, such as a county or city. Our model takes local highway systems heavily into consideration, and thus should be applied primarily to cases where the perpetrator is suspected of using an automobile for transportation.
372
+
373
+ Our model makes two predictions. Firstly, it gives an estimate of where future crimes will occur. The estimate is based on where past crimes have occurred, putting the most weight on the most recent ones, and also taking into account the local highway system. It does not give an accurate estimate for the absolute probability that an offender will be within a particular region, but instead provides the locations of hotspots where law enforcement should begin its investigation. By following the paths of constant and slowly changing color, officers can logically search progressively larger areas around these hotspots to gradually increase the scope of their investigation.
374
+
375
+ Secondly, our model makes an estimate based on Rossmo's model. This approach balances two factors affecting the distance a criminal travels to commit a crime. The first factor reflects the effort expended by a criminal in traveling farther away from their residence, whereas the second reflects the risk involved in committing crimes near where a criminal lives. As with the previous component of the model, this method provides hotspots at which law enforcement should begin its investigation, and sequentially larger regions for it to expand its investigation.
376
+
377
+ Empirical tests of our model on high-profile criminal cases have shown that both of these methods give useful starting points for investigation. We recommend that your officers apply our techniques to complement the other methods at their disposal.
378
+
379
+ # References
380
+
381
+ [1] Anthony Atkinson, Marco Riani, Robust Diagnostic Regression Analysis, Springer Press, 2000.
382
+ [2] Adrian W. Bowman, Adelchi Azzalini. Applied Smoothing Techniques for Data Analysis, Clarendon Press, 1997.
383
+ [3] Lawrence Byford. The Yorkshire Ripper Case: Review of the Police Investigation of the Case. Presented to the Secretary of State for the Home Department. December 1981.
384
+ [4] David Canter, Laura Hammond, “Prioritizing Burglars: Comparing the Effectiveness of Geographical Profiling Methods”. *Police Practice and Research*, Vol. 8, No. 4, September 2007, pp. 371-384.
385
+ [5] Steven C. Le Comber, Barry Nicholls, D. Kim Rossmo, Paul A. Racey. "Geographic profiling and animal foraging." Journal of Theoretical Biology 279 (2009) 111-118.
386
+ [6] Chet Dettlinger. The List. Philmay Enterprises, 1983.
387
+ [7] David Hand, Heikki Mannila, Padhraic Smyth, *Principles of Data Mining*, MIT Press, 2001.
388
+ [8] Martin, Rossmo, Hammerschlag. “Hunting patterns and geographic profiling of white shark predation.” Journal of Zoology v20 (2006) 233-240.
389
+ [9] Darcy Kim Rossmo, Geographic Profiling: Target Patterns of Serial Murderers. Doctoral Dissertation, Simon Fraser University, 1995.
390
+ [10] Xibin Zhang, Maxwell L. King, Rob J. Hyndman, Bandwidth Selection for Multivariate Kernel Density Estimation Using MCMC. Monash University, Clayton, Victoria. 2004. http://editorialexpress.com/cgi-bin/conference/download.cgi?db_name=ESAM04&paper_id=120. Accessed 2/20/2010.
MCM/2010/B/7507/7507.md ADDED
@@ -0,0 +1,639 @@
1
+ # 2010 Mathematical Contest in Modeling (MCM) Summary Sheet
2
+
3
+ (Attach a copy of this page to each copy of your solution paper.)
4
+
5
+ Type a summary of your results on this page. Do not include the name of your school, advisor, or team members on this page.
6
+
7
+ # Centroids, Clusters and Crime: Anchoring the Geographic Profiles of Serial Criminals
8
+
9
+ Crime prediction is an old human tradition. The technology employed has changed drastically over the last fifty years with the advent of the computer, yet the underlying techniques are the same as ever. The task of predicting future crime locations is still fundamentally about discovering and exploiting patterns and is therefore a mathematical task. A particularly challenging problem in this field is modeling the behavior of serial killers, as most (violent) repeat offenders choose victims who are complete strangers to them. Since finding correlations between the victims of a serial criminal is algorithmically difficult, we predict where a criminal will strike next, instead of whom. This practice of predicting a criminal's spatial patterns is called Geographic Profiling.
10
+
11
+ Geographic profiling research has clearly shown that, for most violent serial criminals, there is a significant and strong correlation between the distance between crime sites and the distance to the criminal's home or residence; roughly, serial criminals tend to commit crimes in a distinct radial band around a central point. Subsequent researchers, however, have extended the idea of 'home' to include areas of significance to the criminal's activities: for example, a serial killer may strike around his workplace, or only in the part of town where prostitutes abound. These centers of activity, termed "anchor points," provide a strong pattern within serial criminal behavior with which we can build a predictive algorithm.
12
+
13
+ We build our models off the assumptions that the entire domain of analysis is a potential crime spot, movement of the criminal is uninhibited, and the area in question is large enough to contain all possible strike points. With these three assumptions we are able to consider our domain a metric space on which our predictive algorithms can create spatial likelihoods. Additionally, we assume that the offender in question is a "violent" serial criminal, as research suggests serial burglars and arsonists are less likely to obey the aforementioned spatial patterns. From this, we may assume the existence of one or more anchor points, which will provide the basis for our model.
14
+
15
+ There are substantial differences, however, in the criminal's expected behavior depending on whether he has one anchor point, or several. We treat the single anchor point case first, taking only the spatial coordinates of the criminal's last strikes and the sequence of the crimes as inputs. Estimating the single anchor point to be the centroid of the previous crimes, we generate a "likelihood crater": a $\Gamma$ -distribution whose parameters are fit to the preexisting data, including time trends. The height of this crater then corresponds to the likelihood of a future crime at that location. Next, we consider the multiple anchor point case: using a cluster finding and sorting method, we identify groupings in the data and then build likelihood craters around the centroids of each. Individual clusters are then given weight according to recency and number of points. Finally, we test each algorithm on the criminal's past offences (using the first $N$ to predict crime $N + 1$ ) and determine which of the two techniques performs better. From these tests we also develop a standard metric for measuring our level of confidence in the output.
16
+
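The centroid-and-crater step can be sketched as follows. The method-of-moments Gamma fit and the synthetic coordinates are illustrative assumptions; the paper's fit also incorporates time trends:

```python
import numpy as np
from math import gamma as gamma_fn

def likelihood_crater(crimes, query_points):
    """Score query locations by a Gamma distribution over distance from
    the centroid of past crimes (a sketch of the 'crater' idea)."""
    centroid = crimes.mean(axis=0)
    dists = np.linalg.norm(crimes - centroid, axis=1)
    m, v = dists.mean(), dists.var()
    k, theta = m * m / v, v / m             # method-of-moments shape and scale
    d = np.linalg.norm(query_points - centroid, axis=1)
    # Gamma pdf evaluated at each query's distance from the anchor point;
    # for k > 1 it is low at the centre, peaks at a radial band, then decays.
    pdf = d ** (k - 1) * np.exp(-d / theta) / (gamma_fn(k) * theta ** k)
    return centroid, pdf

# Hypothetical crime series clustered around an unknown anchor point.
rng = np.random.default_rng(2)
crimes = rng.normal(loc=[10.0, 10.0], scale=2.0, size=(12, 2))
queries = np.array([[10.0, 10.0], [10.0, 12.0], [10.0, 30.0]])
centroid, scores = likelihood_crater(crimes, queries)
```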
17
+ There is a notable dearth of public domain data giving both locations and sequencing data for the crimes of violent serial killers, so we extract seven datasets (for three offenders) from published research. We use four in developing our model and examining its response to changes in sequence, geographic concentration, and total number of points. The location of the last crime is removed from the input and then compared to the models' predictions. To evaluate, we assume the police ideally want to distribute their resources so that an officer will be on scene when the next crime is committed, and we compute an "effectiveness multiplier," which describes how much more usefully police resources would be distributed under our model versus a random guess.
18
+
19
+ Then, we evaluate our models by running them blind on the remaining datasets. The results show a clear superiority for the multiple anchor point method and qualitatively support our model, but we would require more data to claim success with statistical significance. Overall, the model will be most successful in predicting longer crime sprees with clearly clustered data. We also demonstrate clear failure if the criminal suddenly changes behavior, and note that the model's output must be partnered with the knowledge and experience of a veteran cop. Finally, we note that by construction, our algorithm will tend to predict the criminal's anchor points more accurately than it predicts his next strike point. Our recommendation is that police consider using the anchor point prediction as their primary investigative tool.
20
+
21
+ # CENTROIDS, CLUSTERS AND CRIME: ANCHORING THE GEOGRAPHIC PROFILES OF SERIAL CRIMINALS
22
+
23
+ Team # 7507
24
+
25
+ February 22, 2010
26
+
27
+ # Table of Contents
28
+
29
+ 1 Introduction
+ 2 Background
+ 3 Assumptions
+ 3.1 Domain is Approximately Urban
+ 3.2 Violent Serial Crimes by a Single Offender
+ 3.3 Spatial Focus
+ 4 Developing a Serial Crime Test Set
+ 4.1 The Scarcity of Source Data
+ 4.2 An Alternative: Pixel Point Analysis
+ 5 Metrics of Success: The Psychic and the Hairfinder
+ 5.1 The Effectiveness Multiplier
+ 5.2 Robustness of the Metric
+ 6 Two Schemes for Spatial Prediction
+ 7 Single Anchor Point: Centroid Method
+ 7.1 Algorithm
+ 7.2 Results and Analysis
+ 7.3 Extensions
+ 8 Multiple Anchor Points: Cluster Method
+ 8.1 Algorithm
+ 8.2 Results and Analysis
+ 8.3 Possible Extensions of the Cluster Model
+ 9 The Final Predictor: Combining the Schemes
+ 9.1 Results and Analysis
+ 9.2 Possible Extensions to the General Model
+ 10 Conclusion
+ 11 Executive Summary
+ References
70
+
71
+ # 1 Introduction
72
+
73
+ Crime analysis and prediction is an old human practice. Rachel Boba, in her chapter on the 'History of Crime Analysis', describes the Rancher in the old American West who "noticed that he was losing one or two head of cattle from his grazing land every week." Our Rancher begins to notice that these cattle go missing only at night and from a certain field. His detective-like observations and intuition may have led him to respond in a way fitting for the lone rancher: watch at night with a gun or move the herd altogether.[1] Although primitive compared to our popular notions of crime analysis (à la television shows like 24 and Numb3rs), the techniques of the Rancher from the American West are the same as those employed today. Current criminology theory posits that criminals behave in patterns that are deeply tied to environment, social surroundings, and personal history. The goal of any crime analysis and prediction, whether by the FBI of the present or the Rancher of the past, is fundamentally one of discovering and exploiting patterns and is therefore an explicitly mathematical task.
74
+
75
+ The task of predicting a serial criminal's next crime location is consequentially a problem of finding and using patterns from that criminal's crime history. There is an established literature on geographic patterns in serial crime sequences that has shown a strong patterning of serial crimes around an "anchor point" - locations of daily familiarity for individual serial criminals. We build two prediction schemes based on this underlying theory of anchor points in order to predict future crime geography. Both schemes are given an ordered sequence of past crime locations and then produce a surface of likelihood values and a robust metric. The surface produced is equivalent to a geographic profile of potential crime locations. The schemes differ in their underlying assumption of the number of anchor points.
76
+
77
+ The first scheme finds a single anchor point using a center of mass method and uses it to predict likely future crime locations. The second scheme assumes two to four anchor points and uses a cluster finding algorithm to sort and group points. These clusters are used to predict likely future crime locations. Both schemes use a statistical technique we call cratering to predict future crime locations. The comparison and eventual choice/rejection of a scheme will use an algorithm based on our effectiveness multiplier.
78
+
79
+ The strength of our model is its close connection to the research that surrounds serial crimes. We will see that for the several sample serial crime sequences, our final prediction algorithm provides a result that is honest about when it preforms well and honest when the underlying data does not fit the pattern at the heart of our model. Our end goal is not to beat the intuitive cop, but we believe that this algorithm detects and utilizes underlying serial crime patterns in a way that can drastically reduce the resources needed to help curb the devastating effects of a serial criminal.
80
+
81
+ # 2 Background
82
+
83
+ When Peter Sutcliffe was arrested in Sheffield, England, on suspicion of being the man behind the so-called "Yorkshire Ripper killings," the nature of quantitative criminology changed forever. The arrest (and subsequent conviction) of Sutcliffe marked a personal victory for Stuart Kind, the forensic biologist whose ground-breaking application of mathematical principles had successfully predicted that the Yorkshire Ripper lived between the towns of Shipley and Bingley. More importantly, however, this success in early 1981 marked the beginning of three decades of research developing increasingly-powerful methods of analyzing criminal patterns with mathematics.[4]
84
+
85
+ Kind certainly was not the first to put mathematics in the hands of police officers; indeed, the use of statistical techniques to improve a law enforcement agency's understanding of criminal patterns has existed since at least 1847, when the Metropolitan Police of London began tracking yearly aggregate levels of various crimes.[1] These techniques have gradually grown in sophistication to the present day, and now information-intensive models can be constructed using heat map techniques to identify the hot spots for a specific type of crime, or to derive correlations between the rate of criminal activity and the various attributes of a location (such as lighting, urbanization, etc.).[1] Although, by 1980, such techniques for forecasting trends in general categories of crime committed by many unconnected individuals were familiar to crime analysts, Kind's approach took the practice of mixing computations and crime-prevention into a new realm by applying algorithmic analysis to the criminal acts of a single, serial offender.[15]
86
+
87
+ Since Kind's groundbreaking calculation, the practice of "geographically profiling" the crimes of a single criminal has produced a substantial body of research, principally focused upon techniques for locating the criminal's "anchor points"—locations (such as a home, workplace, or a relative's house) at which he spends substantial amounts of time and to which he returns regularly between crimes. This can be a difficult proposition, as narrowing the focus to one individual offender reduces the number of available data points substantially. In general, therefore, these techniques exploit experimentally validated correlations between the bases of apprehended serial criminals and the areas where they committed their crimes.
88
+
89
+ Canter and Larkin (1993), for example, proposed that a serial criminal's home (or other anchor point) tends to be contained within a circle constructed such that its diameter is the line segment between the two crime locations with the greatest distance between them. Further studies have shown that this is indeed true in the vast majority of cases, particularly for violent serial criminals like rapists and murderers. [8] Observations like these can be used to create algorithms which prioritize the areas needing to be searched in order to find the criminal's home or base; Canter et al. (2000) find that (for the specific case of serial murders) these
90
+
91
+ techniques on average reduce the area to be searched by nearly a factor of 10.[2]
92
+
93
+ By contrast, the practice of forecasting where a criminal will strike next has not been explored deeply in the quantitative criminology literature.[15] Indeed, predicting where, say, a serial rapist will strike next may require (in addition to an uncomfortable level of clinical detachment from the past victims) a far more complex set of inputs, as it is far less clear what information might inform the criminal's decision about where to strike.
94
+
95
+ Naively, one might be tempted to simply generate a heat map of rape incidents in the rapist's region of activity, and predict that the next strike will occur where the most rapes have occurred before. Clearly, however, this approach essentially discards all previous information about the modus operandi of the criminal in question, whose individual pattern of strikes may differ substantially from the aggregate behavior of previous rapists. Paulsen and Robinson (2009) also observe that for many departments there are substantial practical, ethical, and legal issues involved in collecting the data for a detailed mapping of criminal tendencies, with the result that only $16\%$ of local police departments in the United States employ a computerized mapping technique.
96
+
97
+ A more sophisticated technique could map a wide variety of variables, from lighting levels to proximity to bars, and seek to correlate the rapist's previous strikes to whichever variables appear most significant. But this too has its drawbacks, for in this low-statistics environment, it is not at all clear that a sufficiently strong correlation could be found.
98
+
99
+ Furthermore, this method only exacerbates the practical problems of data collection, potentially requiring police officers to trawl the streets with keyboards subjectively evaluating dozens of criteria at each city block. While a computer model might indeed produce valuable output once this encyclopedic compendium of data was compiled, we suspect in many cases that a highly trained officer who is familiar with the ins and outs of his city could likely intuit a similar result at a reduced cost. Rossmo (1987), for example, describes the case of one Richard Ramirez, who raped or killed 33 individuals in the mid-eighties and "seemed to target single-story houses painted in light, pastel colors." Naturally, all but the most egregiously detailed mapping schemes would fail to identify this pattern computationally, whereas an alert policeman might well notice it instantly.
100
+
101
+ Our treatment of the problem will therefore employ an approach more akin to the anchor point finding algorithms described above. By examining experimentally confirmed trends in the relationships between crime locations (and between crime locations and estimated anchor points), we generate likelihood surfaces which effectively act as a prioritization scheme for regions which should be monitored or patrolled more aggressively in hopes of intercepting the next crime.
102
+
103
+ # 3 Assumptions
104
+
105
+ As with any numerical model, we will make use of a number of simplifying assumptions in order to make it mathematically tractable. In general, however, we find that these assumptions are not only computationally necessary, but also justified by empirical research or by the practical considerations of implementing this model as a law enforcement tool. Generally speaking, these assumptions can be summarized in two groups:
106
+
107
+ # 3.1 Domain is Approximately Urban
108
+
109
+ Here, we are not literally requiring that the criminal in question operates in lower Manhattan. We use the word "urban" because there are certain properties of a highly urbanized area which substantially simplify our modeling treatment: namely, that the entire domain is a potential crime spot, that the movement of the criminal is completely uninhibited, and that the area in question is large enough to contain all possible strike points. It is important to note, however, that even for serial crime committed in suburbs, villages, or spread between towns, the urbanization condition holds on the subset of the map in which crimes are regularly committed. To see this, consider the three urbanization conditions separately:
110
+
111
+ # 3.1.1 Entire Domain is a Potential Crime Spot
112
+
113
+ Put more technically, we assume the potential targets of the criminal are dense within the domain of interest; that is, that any neighborhood selected within the map will contain a possible crime location. Naturally, a continuous version of this condition would not actually be satisfied by any physical city, but when we discretize our model, taking the minimum area of interest to be about the size of a city block, then (particularly for violent crimes, where the targets are any human being) it is hardly unreasonable that each block contains a potential target. Indeed, an equivalent assumption is present in nearly all the existing geographic profiling techniques.[2][15]
114
+
115
+ It is obvious that every domain will violate these conditions to some extent: All but the most inventive serial killers, for example, will not commit a crime in the middle of a lake, or in the uninhabited farmland between small towns. Nevertheless, this observation simply requires that the output of the model be interpreted intelligently. In other words, while we assume for simplicity that the entire map is a potential target, police officers interpreting the results can easily ignore any predictions we make which fall into an obvious "dead zone."
116
+
117
+ # 3.1.2 Criminal's Movement is Uninhibited
118
+
119
+ This assumption is required for us to compute distances using a simple $L^2$ norm. As we shall discuss, many of our predictions are based upon observations about how far a criminal tends to travel to commit a crime. Since these observations are meant to reflect the actual experiences of the killer, one might imagine that if the "as the crow flies" distance between, say, the criminal's home and place of attack were appreciably different from the "real-world" distance along roads and highways, the latter would be a more reasonable measure. Finding this data, however, quickly becomes quite difficult as the scope of the crimes scales up (even in the age of Google Maps), and so we invoke the so-called "Manhattan assumption": that there are enough streets and sidewalks, laid out in a sufficiently grid-like pattern, that the criminal's movement around the map along real-world routes is effectively the same as "straight line" movement in a space discretized into areas of city blocks.[15]
120
+
121
+ Moreover, there is some evidence to suggest that considering real-world travel times is not beneficial. Kent et al. (2006) demonstrated that, across several types of serial crime, the use of real-world distance instead of Euclidean or "Manhattan" distance actually performs more poorly in predicting criminal anchor points, and that the Euclidean and Manhattan distances are essentially interchangeable. Although our model is concerned with predicting the next crime site rather than the anchor points, the methods employed have the same theoretical basis, and thus, based on this evidence, we can safely take the $L^2$ norm of points when generating data about distances.
122
+
123
+ # 3.1.3 Domain Contains All Possible Strike Points
124
+
125
+ This condition, perhaps the most trivial of the three, simply says that the two conditions above hold on a sufficiently large area that they will still hold wherever the criminal may strike next. Put another way, if our model ever predicts that the next strike will be outside the initially considered area, we will expand our total examined area until this is not so, implicitly assuming that the targets continue to be dense and movement is still uninhibited.
126
+
127
+ To ensure this, we simply look to make sure that our likelihood surfaces decay sufficiently at the boundary of the region being mapped, expanding the region if this is not so.
128
+
129
+ Taken together, these three conditions describe the region of interest as a metric space in which
130
+
131
+ 1. The subset of potential targets is dense,
132
+ 2. The metric is the $L^2$ norm, and
133
+
134
+ 3. The space is "complete" in the rough sense that sequences of crimes do not lead to predictions of crimes which lie outside the metric space.
135
+
136
+ # 3.2 Violent Serial Crimes by a Single Offender
137
+
138
+ Naturally, in addition to making assumptions about the map space in which the criminal moves, we must make assumptions about the criminal himself in order to attempt to anticipate his behavior. To do so, we focus the scope to violent crimes, develop a definition for "serial," and assume all crimes were committed by the same offender.
139
+
140
+ # 3.2.1 Focus on Violent Crimes
141
+
142
+ Empirical research has consistently shown that geographic profiling is most successful for murders and rapes, with the average anchor point prediction algorithm being $30\%$ less effective for criminals who are serial burglars or arsonists.[15][2] It is easy to imagine reasons why this might be the case: an arsonist who sets fire to churches, for example, violates the "denseness" assumption described above, as does a burglar who targets electronics stores. While a "generic" residential burglar would still satisfy this assumption, the very fact that his behavior is "generic" suggests simple heat-mapping of burglary might be more apt.
143
+
144
+ Those employing our model may of course use it to predict the strike points of nonviolent offenders, but should only do so when a qualitative reason exists to suspect the criminal will behave like a serial killer or rapist.
145
+
146
+ # 3.2.2 Serial Crimes
147
+
148
+ Quite simply, we assume we are predicting only the behavior of serial offenders. Contemporary literature most commonly defines a serial killing as the murder of "three or more people over a period of 30 or more days, with a significant cooling-off period between the murders."[6] We will define "serial violent crime" equivalently, as three or more assaults, rapes, and/or murders over 30 or more days, with a cooling-off period on the order of one or more days. In this way, we exclude the murder or assault of multiple people in one mass "event" (like a school shooting or a suicide bombing), from which there is no particular reason to project future crimes.
149
+
150
+ This also means that, whenever inputting data points into our model, two or more events whose spatio-temporal separation is far smaller than the average for the data set, with high statistical significance ($p < 0.01$), should be coded as a single event.[15]
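As a sketch, this merging rule could be automated; the `frac` threshold below stands in for the statistical test and is an assumption of ours, not a value from the literature:

```python
import math

def merge_close_events(events, frac=0.1):
    """
    Collapse consecutive events whose spatial and temporal separations are
    far smaller than the data set's averages into a single coded event.
    `events` is a chronological list of (x, y, t) tuples; `frac` is an
    assumed fraction-of-mean threshold standing in for a significance test.
    """
    if len(events) < 2:
        return list(events)
    dists = [math.dist(a[:2], b[:2]) for a, b in zip(events, events[1:])]
    dts = [b[2] - a[2] for a, b in zip(events, events[1:])]
    mean_d, mean_t = sum(dists) / len(dists), sum(dts) / len(dts)
    merged = [events[0]]
    for nxt, d, dt in zip(events[1:], dists, dts):
        if d < frac * mean_d and dt < frac * mean_t:
            continue  # same "event": keep only the first point
        merged.append(nxt)
    return merged

# Two assaults minutes and meters apart are coded as one event:
events = [(0, 0, 0), (5, 5, 0.01), (1000, 800, 30), (2000, 100, 75)]
print(len(merge_close_events(events)))  # 3
```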
+
+ # 3.2.3 Single Offender
153
+
154
+ Finally, we must assume that there is only one actor involved in the killings. There is some leeway, of course, in the interpretation of the word "actor"; some sources consider two or more individuals acting with collective intent to be a single "actor" for the purposes of geographic profiling.[15] For the purposes of our model, if the multiple individuals committing the crimes are truly acting with the same shared mindset, we should be able to project their next strike with the same level of confidence. What is more important to note is that the model treats all input data points as crimes which were definitively committed by the same person. Consequently, it is inadvisable to include a crime which "might have been the work of the serial criminal" solely for the purpose of improving statistics.
155
+
156
+ # 3.3 Spatial Focus
157
+
158
+ While some research on the temporal patterns of serial criminals exists, the literature suggests that the best predictor of a future crime's location is past geographic data. Temporal data is problematic for use in our model for several key reasons.
159
+
160
+ 1. While research has found cyclical patterns in the time between crimes, these patterns do not translate directly into a prediction of the next geographic location. What is useful are general trends in spatial movement over an ordering of the locations. We will utilize this in our model by ignoring the specific time data present in crime sets, using this information only to establish an ordering within the crime sequence itself.
161
+ 2. For the most violent of serial crimes (murder, rape/murder), time data can be incredibly inaccurate. While it is generally good enough to establish an ordering in the time sequence, the dates and times of murders are usually discerned from the remnants of body dumps. While rape cases generally have clear time stamps, we use only an ordering of the data to stay inclusive over the set of violent crimes which we want our model to address.
162
+
163
+ # 4 Developing a Serial Crime Test Set
164
+
165
+ # 4.1 The Scarcity of Source Data
166
+
167
+ Finding representative and accurate data is a unique challenge within the development process of any algorithm. Testing our model's accuracy in predicting actual crimes is an obvious best practice before proposing a method to an agency that plans to utilize these techniques in real-life serial cases.
170
+
171
+ # 4.1.1 Existing Crime Sets
172
+
173
+ The most prominent researchers within the field have built large databases of serial crimes to use in their own evaluations (Rossmo's FBI and SFU databases[15], LeBeau's San Diego rape case data set[9], or Canter's Baltimore crime set[2]). Each of these databases was developed with specific methods of integrity and specific source locations. For example, Rossmo develops a data set of serial murderers using an FBI source (created using newspaper and media information), then combs this data, applying a set of criteria to eliminate weakly justified and inapplicable cases.[13] Other databases are generated using the crime reports of local police forces.[9]
174
+
175
+ The difficulty in our own search is that the databases found in our literature review are all private and proprietary. Additionally, the researchers who used police department data were working directly with departments to gain access to this information. Practically, this meant that no established data source was directly available for our algorithmic analysis. We are thus faced with two options: simulate serial criminal data or find an indirect way of using the private data.
176
+
177
+ # 4.1.2 The Problem with Simulation
178
+
179
+ Simulation (by computer, not by an employed serial criminal) might seem like an initially attractive solution to the scarcity of available data. However, a brief examination of the types of simulated data one might produce provides motivation to use another source.
180
+
181
+ 1. The easiest simulation of crime data would be random crime site generation within a given space. In general, however, this would be unhelpful because the basic assumption underlying our approach is that serial crimes are strongly connected to spatial patterns. A randomly generated crime set violates this basic assumption and contradicts the large body of research on environmental patterns in crime.
182
+ 2. The next intuitive approach might be to generate the crime sequence using an underlying distribution supported by research. For example, one might generate a series of crimes using a journey-to-crime model or some periodic model; many such models can be found in the criminal psychology research. The problem with this approach is subtle. While a model of how criminals choose crime sites is both intelligent and inherently necessary in the problem of spatial prediction, the predictive algorithm will also be based on this model. This means that both the predictive algorithm and the generated criminal data will be created from the same model, which will therefore give a false confidence in the chosen algorithm. Actual criminal data must be used if there is to be any confidence in the underlying assumptions made when creating a model.
185
+
186
+ # 4.2 An Alternative: Pixel Point Analysis
187
+
188
+ With these limitations in mind, we decided that the best approach to developing a set of crime series sequences in the absence of an actual explicit data set would be to "mine" the data types that are available. The clearest spatial (and sometimes temporal) occurrence of serial crime data is represented as scatter plots in journey-to-crime research (see Figure 1). While this data is not numerically explicit, it is the closest we found to direct data. The two sources of the crime sequences are a Ph.D. thesis by Rossmo[13] and a spatial analysis of journey-to-crime patterns in serial rape cases by LeBeau[9]. To translate this pictorial representation of a crime sequence, pixel point analysis was used to scale the raster coordinates to meter coordinates. Although there is a small error in this scaling due to pixel representation, the error is smaller than the discretization that we use in our own models. This process was applied to seven different criminals' crime data: four serial killer sequences and three serial rape sequences. The serial rape sequences have an explicit ordering, while the serial murder sequences are unordered.
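The pixel-to-meter conversion can be sketched as follows; the scale-bar length and origin are hypothetical stand-ins for values measured off a published figure (raster y-axis flipping is ignored for brevity):

```python
def pixels_to_meters(points_px, scale_bar_px, scale_bar_m, origin_px=(0, 0)):
    """
    Convert raster (pixel) coordinates read off a published scatter plot
    into meter coordinates, using the plot's scale bar. All inputs here
    are hypothetical; real values come from measuring the figure.
    """
    m_per_px = scale_bar_m / scale_bar_px
    ox, oy = origin_px
    return [((x - ox) * m_per_px, (y - oy) * m_per_px) for x, y in points_px]

# A 150-pixel scale bar that represents 3000 m gives 20 m per pixel:
pts = pixels_to_meters([(10, 10), (160, 10)], scale_bar_px=150, scale_bar_m=3000)
print(pts)  # [(200.0, 200.0), (3200.0, 200.0)]
```

The residual error of roughly one pixel (here 20 m) is indeed well below the grid discretization used later.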
+
+ We note several strengths of this approach to further justify our choice of pixel data over simulation. First and most importantly, this data has already been combed for inconsistencies by the researchers and fits both our definition of serial crime and our basic assumptions. The considerations and rules in data mining are complex and numerous; using data that has already been deemed appropriate for serial crime analysis by leaders in the field is clearly superior to trying to mine or simulate data ourselves. Additionally, the data presented in the papers was chosen to represent the range of possible patterns and scales within serial crimes. This means that our algorithm is being tested over a semi-representative set of data. We also incorporate two types of violent serial crime to add robustness to our testing.
191
+
192
+ # 5 Metrics of Success: The Psychic and the Hairfinder
193
+
194
+ In a 1998 presentation to a conference of the Naval Criminal Investigative Service, Rossmo recalled the investigation of the disappearance of a little girl named Mindy Tran. Tran's apparent abduction attracted a substantial amount of media attention and prompted both a self-described psychic and a man claiming to be a "hairfinder" to volunteer their services
195
+
196
+ ![](images/a44e267431d9c3854f8965b6108c6bc5b5f6a5263538930a1df5476e5abef145.jpg)
197
+ Figure 1: Comparison of crime sequence for violent offender in LeBeau[9] and scatter plot of same data found through pixel analysis.
198
+
199
+ ![](images/ac19d6d725390c36db52fcb0f481e92ee101df87445af2a52f252bd0aefd5697.jpg)
200
+
201
+ to the local police.
202
+
203
+ As Rossmo describes a hairfinder, "You have a little plastic film canister and you take the hair of the person you're looking for and you put it in that plastic film canister, you attach that to a stick, and it takes you to the person you're looking for." [14] Nevertheless, when the police sent the two out together (to "get rid of two nuts at once"), the psychic identified a park to search, and the hairfinder wandered his way over to the body. In Rossmo's words, "Those people could put [crime profilers] out of business" if their models are not better than random chance.[14] We take this advice to heart, and insist that for a model to be successful, it must outperform a random predictive algorithm or else it does more harm than good. In other words, we must beat the psychic and the hairfinder.
204
+
205
+ # 5.1 The Effectiveness Multiplier
206
+
207
+ To quantify this condition (that we outperform a randomly-guessing team of pseudoscience practitioners), we must define what it means to "outperform." We suggest the following intuitive definition: suppose a police department has a finite number of "resources" (cops, detectives, community watch groups, etc.) with which to attempt to intercept the criminal's next strike. In response to the output of any model they find trustworthy, they should distribute their resources such that the number of resources at any point is proportional to the supposed likelihood that the criminal will strike at that point. We say that one model outperforms another if it recommends allocating more resources to the point where the next crime was actually committed.
210
+
211
+ A logical next step is to ask how much one model outperforms another, which can be answered simply by taking the ratio of the amount of resources allocated at the crime point under the first model to the amount allocated under the second. When this ratio is greater than 1, the first model outperforms the second, and vice-versa. Assuming that the police force is twice as effective at a location when they have twice the resources allocated there, we can think of this ratio as an effectiveness multiplier, representing how many times more prepared to stop the criminal the police will be under the first model compared to the second. In this paper, effectiveness ratios will be denoted by the symbol $\kappa$ :
212
+
213
+ $$
214
+ \kappa \equiv \frac{\text{resources allocated at crime point, model 1}}{\text{resources allocated at crime point, model 2}}
215
+ $$
216
+
217
+ Observe that if we normalize the integral of each likelihood plot to the total number of resources available to the department in question, the equation above is simply a ratio of the percentages of department resources allocated to the point, and since the total resources available are the same under both models, we can evaluate $\kappa$ simply as
218
+
219
+ $$
220
+ \kappa = \frac{Z_1(\text{CrimePoint})}{Z_2(\text{CrimePoint})}
221
+ $$
222
+
223
+ where $Z_{n}$ is the likelihood function of model $n$ and CrimePoint is the location of the actual next crime. We note that though this information is obviously not available in most cases, we use our metric by truncating a data sequence and using that set to predict the truncated crime.
224
+
225
+ Finally, note that a randomly guessing algorithm, like an unconvincing psychic, has no reason to privilege any one point over another; averaged over a large number of attempts to predict a crime location, its resource allocation appears the same as that of a model which produces a perfectly flat likelihood plot. This allows us to compare our model to a random guess by computing a "standard effectiveness multiplier," in which model 2 is assumed to be a uniform, flat distribution:
226
+
227
+ $$
228
+ \kappa_s = \frac{Z_1(\text{CrimePoint})}{Z_{\text{flat}}(\text{CrimePoint})}
229
+ $$
230
+
231
+ This standard effectiveness multiplier $\kappa_{s}$ is the metric we will use to assess how well our model performs. A value of exactly 1 indicates it is no better (on average) than a random guess, and a value below 1 would indicate we are actively misleading the police. The upper bound of this number is simply the magnitude of the maximum potential striking area considered by the model, which is achieved only if the model predicts the next crime point exactly and assigns it $100\%$ likelihood<sup>1</sup>.
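On a discretized likelihood surface, $\kappa_s$ reduces to a short computation; the example grid below is illustrative:

```python
import numpy as np

def standard_effectiveness(Z, crime_cell):
    """
    Standard effectiveness multiplier: the likelihood the model places on
    the cell where the crime actually occurred, divided by the likelihood
    a flat (random-guess) surface places there. Z is any nonnegative grid.
    """
    Z = np.asarray(Z, dtype=float)
    Z = Z / Z.sum()                 # normalize resources to sum to 1
    flat = 1.0 / Z.size             # uniform allocation per cell
    return Z[crime_cell] / flat

# A surface that puts 40% of resources in one of 4 cells:
Z = np.array([[4.0, 2.0], [2.0, 2.0]])
print(standard_effectiveness(Z, (0, 0)))  # 1.6
```

Note the upper bound: if all mass sits in the crime cell, the ratio equals the number of cells, i.e. the size of the search area.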
+
+ # 5.2 Robustness of the Metric
234
+
235
+ Primarily, we use this metric to compare the effectiveness of different models on the same data set. In such a case, the metric is perfectly robust and needs no qualification. However, we will also want to compare the success of a single algorithm across multiple data sets, which introduces a slight problem. Clearly, the distribution of fixed resources under a flat likelihood plot depends on the area to be covered (i.e., if there are 100 cops to be deployed over 10 city blocks, each block receives 10 cops, but over 100 blocks each receives only 1). Our algorithm, however, is not area dependent, and consequently, one can make $\kappa_{s}$ arbitrarily large by simply including more and more pointless area well outside the killer's active region.
236
+
237
+ It is therefore only legitimate to compare $\kappa_{s}$ values between two data sets for which the ratio of the killer's active region to the total area considered stays constant. Necessarily, the size of the killer's active region cannot be precisely known; however, we will employ a standard technique to make this condition approximately true, based on one used by Canter et al. (2000) to assess algorithms which predict anchor points. From the findings of Canter and Larkin (1993) and Paulsen (2005), we know that, experimentally speaking, in more than $90\%$ of cases all future crimes committed by an offender will fall within a square whose side length is the maximum distance between previous crime points and whose center is the centroid of the data.[3][11] For each of our test data sets, we construct such a square and then multiply each side length by 3, creating an overall search area which can very nearly be said to be 9 times larger than the criminal's active area. Since this ratio is now (nearly) constant between the data sets, this allows us to compare effectiveness multipliers between different data sets as well.
238
+
239
+ It is important to note that, since we have only three complete data sets on which to test our model, our results are not generalizable to predict success or failure when applied to other serial crime cases, and we will not be able to say with statistical significance whether or not we outperform a random guess. Instead, because the data sets we do use were at least selected to be as representative as possible of a typical violent serial crime, we will simply make some qualitative assessments of success.
240
+
241
+ Notwithstanding our limited access to data, any police department with access to a sufficiently large number of sample serial crime cases (research suggests approximately $n > 50$ )[2] could easily employ the methods outlined above to test our algorithm, with the average effectiveness multiplier serving as a representation of our model's utility in a general case of serial crime.
244
+
245
+ # 6 Two Schemes for Spatial Prediction
246
+
247
+ As discussed in the Background, the problem of predicting the next crime location is not well documented. The most common approach to analyzing a crime sequence is to predict a home location. The literature on journey-to-crime research for violent serial crimes strongly suggests that serial crime is patterned around a criminal's home, workplace, or other place of daily activity[15, 8, 4, 6, 17]. This has led researchers to spend most of their resources developing and evaluating methods of finding this crime center point. The centroid is then investigated as an anchor point in the criminal's daily activity; for the majority of research, this anchor point is the serial criminal's home. This method has been tested on large sets of data and was found to reduce the necessary search area by a factor of ten.
248
+
249
+ Our schemes will use the strength of an anchor-point-finding algorithm to predict likely locations of future crimes. The motivation is straightforward: if we know the location of an anchor point and the pattern of crime locations around this anchor point, we can generate an area of future likelihood by projecting from the anchor point back out to the crime points. This conceptualization of patterning appears more accurate than a direct forecasting scheme that ignores anchor point behavior.
250
+
251
+ We develop two schemes with the base assumption of the existence of anchor point patterning. The first scheme assumes only a single anchor point. The second scheme assumes multiple anchor points. These related schemes can then be used in combination to provide an analysis of likely future crime locations.
252
+
253
+ # 7 Single Anchor Point: Centroid Method
254
+
255
+ Figure 2 shows the algorithm used to predict likely crime locations using a single anchor point. Because the single anchor is calculated as a centroid, we call this the centroid method (although centroids will be used in both methods). A description of the algorithm and a discussion of its results follow. The green blocks in the diagram represent extensions of the model which will be discussed.
256
+
257
+ ![](images/9066e89888d98d98a9d42167e2a0031c493255fa004981f8e209115ec0f99c7d.jpg)
258
+ Figure 2: Algorithm Flow Chart for Centroid Method
259
+
260
+ # 7.1 Algorithm
261
+
262
+ # 7.1.1 Create Search Domain
263
+
264
+ Bluntly put, we carefully construct the smallest square which will contain every previous crime, then wildly scale up each dimension by a factor of 3. This ensures that all of our fundamental assumptions about the underlying domain are satisfied, and the consistent scale factor of 3 allows us to compare the algorithm between data sets with different degrees of geographical separation (recall Section 3 for justification).
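A minimal sketch of this construction, scaling the bounding square of the points about its center (coordinates are illustrative):

```python
def search_domain(points, scale=3.0):
    """
    Smallest square containing all crime points, scaled about its center
    by `scale` (3x, per the construction in the text). Returns
    (xmin, xmax, ymin, ymax).
    """
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    side = max(max(xs) - min(xs), max(ys) - min(ys)) * scale
    h = side / 2
    return (cx - h, cx + h, cy - h, cy + h)

pts = [(0, 0), (2, 1), (1, 4)]
print(search_domain(pts))  # (-5.0, 7.0, -4.0, 8.0)
```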
+
+ # 7.1.2 Find Centroid of Crime Sites
267
+
268
+ The anchor point of the crime sequence is determined by finding the "center of mass" of the crime points. Each point is given a common weight, making the computation an average of the $n$ crime coordinates $(x_{i},y_{i})$ . The coordinate of the centroid is, simply,
269
+
270
+ $$
271
+ (\tilde {x}, \tilde {y}) = \left(\frac {\sum x _ {i}}{n}, \frac {\sum y _ {i}}{n}\right)
272
+ $$
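For instance, a direct computation of the centroid:

```python
def centroid(points):
    """Unweighted center of mass of the crime coordinates."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

print(centroid([(0, 0), (4, 0), (2, 6)]))  # (2.0, 2.0)
```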
+
+ While we choose to use the centroid to model the anchor point, this is not the only approach taken in the literature. Specifically, Rossmo first introduced the idea of placing distributions (Gaussian or negative exponential) over each crime site. The sum of these produces a surface of likely home sites (the CGT method)[15]. This method is more effective for determining likely anchor point locations because it assigns values of "likelihood" over the entire search area, as opposed to the centroid method, which gives one anchor point location. Since we use the centroid only as a step toward predicting crime locations (and not as a method for searching for anchor points), the singular output of the centroid approach makes more sense in our algorithm.
277
+
278
+ # 7.1.3 Building a Likelihood Crater
279
+
280
+ With the estimate of an anchor point location, we now predict future crime locations using the "journey-to-crime" model within serial killer psychology. Again, this says that the serial criminal's spatial pattern of crime around an anchor point, in general, does not change. For example, if a serial rapist commits his crimes 120 meters away from his home on average, we can predict that he will likely strike again within this distance from his home[17]. A rough, first-pass prediction discussed in Paulsen's 2005 conference proceedings might be to draw a large shape (circle, square, polygon, etc.) around this anchor point based on the largest distance from a crime point to the anchor point. This prediction method was shown to be incredibly ineffective against the largest-circle guess described in the Background[11].
281
+
282
+ We instead use a more effective cratering technique first described in a home-finding algorithm by Rossmo. To do this, the two-dimensional crime points $x_{i}$ are mapped to their radii from the anchor point $a$ , i.e. $f: x_{i} \to r_{i}$ where $f(x_{i}) = \| x_{i} - a\|_{2}$ (a shifted modulus). The set $\{r_i\}$ is then used to generate a crater around the anchor point.
283
+
284
+ There are two dominating theories for the pattern serial crimes follow around an anchor point. The first says that there should be a buffer zone around the anchor point: a serial criminal, according to this theory, will commit crimes in an annulus centered at the anchor point[8]. This theory alone is often modeled using the positive portion of a Gaussian curve whose mean and variance parameters are taken from $\{r_i\}$ . The other theory says that crimes follow a decaying exponential pattern away from the anchor point. Both theories have been substantiated by journey-to-crime research, yet we seek a distribution that can model either theory, depending on the patterns displayed in the crime sequence. For this we turn to the flexibility of the $\Gamma$ -distribution.
285
+
286
+ Hogg et al. note that the probability density function (pdf) of the $\Gamma$ -distribution is a good model "for many nonnegative random variables of the continuous type" due to its two flexible parameters[5]. The $\Gamma$ -distribution provides a compelling shape for our model: it offers "shifted-Gaussian"-like behavior when points lie further away and a curve similar to a negative exponential when the parameters are small. Figure 3 displays two plots demonstrating the flexibility of the shape of the $\Gamma$ -distribution.
287
+
288
+ ![](images/a9a5889ba052765d715f8cc33d10d8e70af29e43fe16fc5f966d9b3a209d12ec.jpg)
289
+ (a) $k = 1,\theta = .5$
290
+ Figure 3: Example $\Gamma$ probability density functions
291
+
292
+ ![](images/33f27ba053c0cba217a71b148fafdf6eb06f64d741e5c33d356740dd999aa8db.jpg)
293
+ (b) $k = 4,\theta = 10$
294
+
295
+ Let us define the random variable $X$ to be the radius between crime point and anchor point, $r$ . As stated above, $X \sim \Gamma(k, \theta)$ . For purposes of computing the estimators, we assume independence of the $\{X_i\}$ . The two parameters of the $\Gamma$ -distribution, $k$ and $\theta$ , are determined using the gamfit function in MATLAB (2009). This routine uses the maximum likelihood estimators (MLE) of the $\Gamma$ -distribution to estimate the parameters. The MLE is found by first noting that independence gives us our likelihood function for a set of $N$ observations $\{x_i\}$
296
+
297
+ $$
298
+ L(k, \theta) = f(x; k, \theta) = \prod_{i = 1}^{N} f(x_{i}; k, \theta) = \prod_{i = 1}^{N} x_{i}^{k - 1} \frac{\exp(-x_{i} / \theta)}{\Gamma(k) \theta^{k}}
299
+ $$
300
+
301
+ Taking the logarithm of this gives us the log-likelihood function
302
+
303
+ $$
304
+ \ell (k, \theta) = (k - 1) \sum_ {i = 1} ^ {N} \ln (x _ {i}) - \sum_ {i = 1} ^ {N} x _ {i} / \theta - N k \ln (\theta) - N \ln \Gamma (k)
305
+ $$
306
+
307
+ To find the MLE for $\theta$ , $\ell(k, \theta)$ is maximized with respect to $\theta$ :
308
+
309
+ $$
310
+ \hat {\theta} = \frac {1}{k N} \sum_ {i = 1} ^ {N} x _ {i} = \frac {\overline {{x}}}{k}
311
+ $$
312
+
313
+ There is no closed-form solution for $\hat{k}$ , and gamfit numerically solves for this MLE. One could also use Newton's method, as described by Minka[10].
314
+
315
+ We use these parameters to fit the $\Gamma$ -distribution to our set of $r_i$ and then build the crater of likely crime locations from this distribution. This is accomplished by computing $r$ (the distance to the anchor point) at every point in the search region and evaluating the fitted $\Gamma$ pdf at these values. We then normalize such that the sum of our likelihood surface is exactly 1.
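The pipeline uses MATLAB's gamfit; the sketch below substitutes simple method-of-moments estimates for gamfit's MLE (adequate for illustration, as both converge on similar fits for well-behaved radii). The crime coordinates and grid are hypothetical:

```python
import math
import numpy as np

def fit_gamma_moments(r):
    """Method-of-moments estimates for (k, theta); gamfit's MLE would
    refine these, but moments suffice for a sketch."""
    m, v = np.mean(r), np.var(r)
    return m * m / v, v / m          # shape k, scale theta

def gamma_pdf(x, k, theta):
    """Two-parameter Gamma pdf, evaluated elementwise."""
    x = np.asarray(x, dtype=float)
    return x ** (k - 1) * np.exp(-x / theta) / (math.gamma(k) * theta ** k)

def likelihood_crater(crimes, anchor, grid_x, grid_y):
    """Map each grid point to its distance from the anchor, evaluate the
    fitted Gamma pdf there, and normalize the surface to sum to 1."""
    crimes = np.asarray(crimes, dtype=float)
    radii = np.linalg.norm(crimes - anchor, axis=1)
    k, theta = fit_gamma_moments(radii)
    X, Y = np.meshgrid(grid_x, grid_y)
    R = np.hypot(X - anchor[0], Y - anchor[1])
    Z = gamma_pdf(R, k, theta)
    return Z / Z.sum()

# Hypothetical crime sites clustered around an anchor at the origin:
crimes = [(1.0, 0.2), (-0.8, 0.9), (0.1, -1.3), (1.4, 1.1), (-1.2, -0.5)]
g = np.linspace(-5.0, 5.0, 101)
Z = likelihood_crater(crimes, np.array([0.0, 0.0]), g, g)
print(abs(Z.sum() - 1.0) < 1e-9)  # True
```

The resulting surface is the radially symmetric "crater": zero at the anchor (the buffer zone, when $k > 1$), peaked near the typical crime radius, and decaying outward.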
+
+ Applying this method to the set of crime locations of Peter Sutcliffe yields the heat map shown in Figure 4.
318
+
319
+ ![](images/9a6e89349c7e006e7b3dcd3398b060cc12652b443fb2a9e803b0d321eda87a9a.jpg)
320
+ Figure 4: Heat Map showing cratering technique applied to the crime sequence of Peter Sutcliffe
321
+
322
+ # 7.1.4 Adjust for Temporal Trends
323
+
324
+ We would also like our prediction to account for any radial trend in time (the criminal becoming bolder and committing crimes closer to or further away from home). Some research has suggested that an outward or inward trend in $r_i$ may indicate that the next crime will follow this trend[8]. To accomplish this, we let $\tilde{X} = X + \overline{\Delta r}$ where $\Delta r = r_n - r_{n-1}$ . We note that this new random variable, $\tilde{X}$ , gives our intended temporal adjustment in expected value:
325
+
326
+ $$
327
+ E[\tilde{X}] = E[X + \overline{\Delta r}] = E[X] + \overline{\Delta r}
328
+ $$
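The shift itself reduces to the mean first difference of the chronologically ordered radii, e.g.:

```python
import numpy as np

def temporal_shift(radii_in_order):
    """Mean step in radius between consecutive (chronological) crimes;
    the fitted radius variable X is shifted by this amount to capture
    any drift toward or away from the anchor point."""
    r = np.asarray(radii_in_order, dtype=float)
    return np.mean(np.diff(r))

print(temporal_shift([1.0, 1.5, 2.0, 2.5]))  # 0.5
```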
+
+ # 7.2 Results and Analysis
331
+
332
+ To evaluate this first method, we feed it data from three serial rape sprees. Since the mechanisms of the model were developed before these data sets were collected, and debugging was done with different data sets entirely, we can consider this a blind test.
333
+
334
+ In each test case, we remove the data point representing the chronologically last crime, and produce a likelihood surface $Z(x,y)$ . As outlined above, we then identify the actual location of the final crime and compute the standard effectiveness multiplier $\kappa_{s}$ for each case.
335
+
336
+ Our first test data set, coded as "Offender C," is a comparative success for the model (Figure 5). With $\kappa_{s} = 12.19$ , it is a full order of magnitude better to distribute police resources using this model instead of distributing uniformly (although it should be remembered that the absolute size of this number is somewhat more arbitrary for our data than it would be if the maximum search area had been defined intuitively by a cop familiar with the region).
337
+
338
+ Qualitatively, one can see the reason for the high $\kappa_{s}$ : the next crime does indeed fall on the surface of our "crater," and lands satisfyingly near the isoline of maximum height. Nevertheless,
339
+
340
+ ![](images/9fb30bd774d64db25fac5bd9f0457fd77f8ea6b59392e4f1aaef62738fff2eea.jpg)
341
+ (a) Heat Map of Offender C Predictions, Centroid Model
342
+
343
+ ![](images/0a4155f790071dd5592ea8c7b24d97bd8ecabf33edb2c9b1ba615b02971b247d.jpg)
344
+ (b) Surface Plot of Offender C Predictions, Centroid Model
345
+ Figure 5: Offender C, Centroid Model Output (a), (b)
346
+
347
+ there are some 120 grid squares rated greater than or equal in likelihood, meaning some $0.3 \, km^2$ of area are being patrolled with the same intensity for no reason. This amount of area is small in an absolute sense, but is large given that the vast majority of the crimes in this case were committed within an area of 1 square kilometer.
348
+
349
+ With $\overline{\Delta r} = -0.276$ , the temporal corrections in this distribution are negligible; our projection is simply a radially symmetric fit to the geographic dispersion of previous crimes. The surface plot also shows the steepness of the "inside" of the crater, as the geographic distribution apparently suggests a small but sharply defined buffer zone around the centroid (with all the crimes contained within two square kilometers, any buffer zone would have to be small).
352
+
353
+ Our second test data set, coded "Offender B" (see Figure 6), is similarly successful, with $\kappa_{s} = 12.15$ . Here, the fitted parameters of the $\Gamma$ -distribution estimate the existence of a substantially larger buffer zone than for Offender C, but the substantial dispersion of the points along the horizontal axis results in a larger outer radius, so that a similar portion of the high-likelihood grid points appear "wasted," particularly in the upper right section of the crater. Here too, temporal corrections are negligible, with $\overline{\Delta r} = -0.0076$ .
354
+
355
+ Just as our naive desire to extrapolate from two data points begins to insist our model must be robust, we apply our model to Offender A and find a clear example of how it can fail. In this case, the last crime (see Figure 7) is one of two substantial outliers, and in fact, with a standard effectiveness multiplier of $\kappa_{s} = 0.38$ , our model is nearly three times less helpful than the average random guess in selecting a distribution of resources and manpower. Note, however, that the vast majority of previous crimes are still well described by the model, so the assumption of an anchor point somewhere within the crater region does not appear to have been a poor one. It is simply the case, as it will often be in real life, that some scheme, whim, or outside influence caused the criminal to deviate sharply from his previous pattern.
356
+
357
+ As remarked in the development of our metric, testing the model on these three data sets alone is hardly sufficient to draw any broad conclusions about the utility of our model in a general setting. Still, since these three data sets were selected arbitrarily from published literature, from within samples considered to be highly representative of typical serial killings, it is at least reassuring that our model is somewhat successful in two of three cases, and it is hard to imagine any model being highly successful in the third.
358
+
359
+ # 7.3 Extensions
360
+
361
+ Several of the test cases considered above stray from the idealized, radially symmetric distribution which the centroid model assumes. In particular, the data from Offender B has considerably more variation along the horizontal axis than the vertical, and informally, it appears an elongated ellipse might be a better fit than a perfect circle. Alternatively, one might imagine that points on the right and left of the circle should have higher likelihood scores than those at the top and bottom, as though some of the volume under the crater surface had "gravitated" towards the regions more dense with points. Indeed, there is some empirical evidence to suggest that many serial crime patterns display such angular asymmetries.[13]
362
+
363
+ The first idea could be accomplished by constructing a special two-dimensional distribution function which would fit the points elliptically, or by constructing the function in a space where the geometry is non-Euclidean and determined by the presence of past-crime data
364
+
365
+ ![](images/f092087a6139e36560c7fa6ed3533ecd6834ab2daf589dd43df00501cfb977a9.jpg)
366
+ (a) Heat Map of Offender B Predictions, Centroid Model
367
+
368
+ ![](images/c93dac28e5190e527e3b5ddf38682e687cb3ab2dafd2372e3b9ff3181c7be81e.jpg)
369
+ (b) Surface Plot of Offender B Predictions, Centroid Model
370
+ Figure 6: Offender B, Centroid Model Output (a), (b)
371
+
372
+ points. To simulate the second, one could use a convolution with a kernel built around each crime point's nearness to the centroid, or perform an actual "gravity-like" simulation via the transport equation, with previous crime points as attractors.
373
+
374
+ A cursory investigation of all of these methods during the development of our model
375
+
376
+ ![](images/18ee5ca28f2d7256f27a6a36552690963f218319ed0f19b4f84c0735d10ed7ff.jpg)
377
+ (a) Heat Map of Offender A Predictions, Centroid Model
378
+
379
+ ![](images/736776db12cdeebb360c6cabc0260a8066826a41caeb04ed2d7ddc0fd546574f.jpg)
380
+ (b) Surface Plot of Offender A Predictions, Centroid Model
381
+ Figure 7: Offender A, Centroid Model Output (a), (b)
382
+
383
+ showed some potential successes, but all would require additional parameters. Without access to a large data-set from which to empirically determine the best values of these parameters, however, any extension of the centroid model to include such algorithms would be at best an arbitrary guess. Since one of the central premises of our model is that it assumes only behavioral and psychological trends which are robustly supported by broad studies of serial crime, it does not seem appropriate to include such additional modifications in our single-anchor point model.
384
+
385
+ # 8 Multiple Anchor Points: Cluster Method
386
+
387
+ A different way to account for angular asymmetries like the ones in the Offender B data-set, however, would be to assume that no angular symmetry should be expected, because there is not just one anchor point around which crimes are arrayed. Our second method explicitly assumes instead that there are at least two anchor points in the distribution of crimes (for example, a home and a workplace), and treats each anchor point as the centroid of its own local cluster of crimes. Like the single-anchor-point extensions described above, this method requires an additional input parameter: the number of clusters to be created. However, we show that this parameter can be derived for each data set from the locations of the previous crimes alone, removing the need to set parameters arbitrarily.
388
+
389
+ # 8.1 Algorithm
390
+
391
+ The basis of this algorithm is a hierarchical clustering scheme [7]. Once clusters are found, the previously described centroid algorithm is applied at each computed cluster centroid. A sketch of the algorithm can be seen in Figure 8. The algorithm starts with the same data as the centroid method but begins its computations with a cluster-determination scheme.
392
+
393
+ # 8.1.1 Finding Clusters in Crime Sequences
394
+
395
+ The first objective of this algorithm is to determine the number of clusters and to group the points into those clusters. This is accomplished through the use of hierarchical clustering and the comparison of silhouette values for various cluster counts.
396
+
397
+ The first step towards this goal is to cluster the data such that there are a total of 2-4 clusters. We force a minimum of 2 clusters because the single-cluster case is identical to the earlier centroid method. A maximum of 4 clusters is chosen based on our assumption that each cluster represents crimes around an anchor point. Subdividing into too many clusters would be akin to saying a serial criminal has six or seven significant points of daily activity. While this may be the case, anchor-point theory suggests a number of four or fewer. The clustering is accomplished in a three-step process.
398
+
399
+ In general, the first step in clustering is to compute the distances between all points under a selected metric [7]. In this case the Euclidean distance, henceforth designated $d(x,y)$, is used, since the only measure we care about is the physical distance between points. Once all of these distances have been computed, the data is organized into a hierarchical cluster tree, which can be represented by what is known as a dendrogram [7]. The dendrogram for Offender B is shown in Figure 9.
400
+
401
+ ![](images/e302952d0dc277b727c1815a25d56572dfaebe108e4c870942f0571665a8993d.jpg)
402
+ Figure 8: Algorithm Flow Chart for Cluster Method
403
+
404
+ This cluster tree, of $N$ data points $P_{i}, i = 1, \dots, N$, is built up by first assuming that each data point is its own cluster. Clusters are then merged based on which two are closest under some defined metric $d_{c}$. Formally, if we have clusters $1, \dots, n$, denoted $C_{i}, i = 1, \dots, n$, we merge the two clusters $i$ and $j$, $i < j$, into cluster $i$ and delete cluster $j$, where $(i, j)$ minimizes $d_{c}(C_{i}, C_{j})$.
405
+
406
+ This process is repeated until all of the points have been merged into the desired number of clusters; to create a dendrogram, the desired number of clusters is set to one. In this case we use as our metric the distance between the centroids of any two clusters. The cluster merges are plotted as the horizontal lines in the dendrogram, with their height given by the distance between the merged clusters at the time of merging.
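The merge loop described above can be sketched directly; the following is a minimal pure-Python version with centroid linkage (the coordinates are illustrative, not taken from any offender data-set):

```python
from math import dist  # Python 3.8+

def centroid(cluster):
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n, sum(p[1] for p in cluster) / n)

def agglomerate(points, n_clusters):
    clusters = [[p] for p in points]   # each point starts as its own cluster
    while len(clusters) > n_clusters:
        # find the pair of clusters whose centroids are closest under d_c
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: dist(centroid(clusters[ab[0]]), centroid(clusters[ab[1]])),
        )
        clusters[i].extend(clusters[j])  # merge cluster j into cluster i ...
        del clusters[j]                  # ... and delete cluster j
    return clusters

crimes = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
print([len(c) for c in agglomerate(crimes, 2)])  # [3, 3]
```

In practice a library routine such as SciPy's `scipy.cluster.hierarchy.linkage` with `method="centroid"` performs the same construction and also records the merge heights needed to draw the dendrogram.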
407
+
408
+ The above algorithm requires a fixed number of clusters; however, we have no a priori knowledge of the actual number of clusters, so we need a way to determine the optimal number. To accomplish this we use the notion of silhouettes [16]. The silhouette at a point is denoted $s(P_i)$ and is computed for all data points.
409
+
410
+ We denote $a(P_i)$ as the average distance from $P_i$ to all other points in its cluster and $b(P_i, k)$
411
+
412
+ ![](images/ae02d47aaf9640c75c047e1612530a51da75ad9a6c3ab221485b88355816e65b.jpg)
413
+ Figure 9: Dendrogram for Offender B
414
+
415
+ as the average distance from $P_{i}$ to points in cluster $C_k$ where $P_{i} \notin C_{k}$ . Now we can define $(i = 1, \dots, N)$
416
+
417
+ $$
418
+ s \left(P _ {i}\right) = \frac {\left(\min _ {k \mid P _ {i} \notin C _ {k}} \left(b \left(P _ {i} , k\right)\right) - a \left(P _ {i}\right)\right)}{\max \left(a \left(P _ {i}\right) , \min _ {k \mid P _ {i} \notin C _ {k}} \left(b \left(P _ {i} , k\right)\right)\right)} \tag {1}
419
+ $$
420
+
421
+ Inspection of (1) shows that $s$ can take values in $[-1, 1]$, where the value 1 is taken only if a cluster is a single point, and the value $-1$ can occur only when multiple points at the same location belong to different clusters. The closer $s(P_i)$ is to 1, the better $P_i$ fits into its current cluster; the closer $s(P_i)$ is to $-1$, the worse it fits.
422
+
423
+ Now, to optimize the number of clusters, we compute the clustering for 2, 3, and 4 clusters. In each case we then compute the average silhouette value across every point that is not the only point in its cluster. The silhouette values at single-point clusters are ignored because clusters of one would otherwise influence the average in an undesirable way. We then choose the number of clusters that maximizes the average silhouette value. A sample plot of average silhouette value versus number of clusters is seen in Figure 10.
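Equation (1) and the average-silhouette criterion translate directly into code. The sketch below (points and labelings are illustrative, our own) computes $s(P_i)$ and compares a labeling that matches the spatial grouping against a deliberately scrambled one:

```python
from math import dist

def silhouette(points, labels, i):
    # a(P_i): average distance from P_i to the other points in its own cluster
    same = [j for j in range(len(points)) if labels[j] == labels[i] and j != i]
    a = sum(dist(points[i], points[j]) for j in same) / len(same)
    # b(P_i, k): average distance from P_i to each other cluster's points;
    # equation (1) takes the minimum over those clusters
    b = min(
        sum(dist(points[i], points[j]) for j in range(len(points)) if labels[j] == k)
        / labels.count(k)
        for k in set(labels) if k != labels[i]
    )
    return (b - a) / max(a, b)

def mean_silhouette(points, labels):
    # average only over points that are not alone in their cluster
    vals = [silhouette(points, labels, i)
            for i in range(len(points)) if labels.count(labels[i]) > 1]
    return sum(vals) / len(vals)

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
two = [0, 0, 0, 1, 1, 1]    # matches the spatial grouping
mixed = [0, 1, 0, 1, 0, 1]  # deliberately scrambled
print(mean_silhouette(points, two) > mean_silhouette(points, mixed))  # True
```

Running this for each candidate cluster count (2, 3, 4) and keeping the count with the largest mean silhouette reproduces the selection rule described above.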
424
+
425
+ In this case we select 4 as the desired number of clusters. The cluster groupings computed by the algorithm are shown in Figure 11. We note that because the average silhouette plot has a tendency to increase with the number of clusters, we cap the possible number of clusters at 4. It
426
+
427
+ ![](images/36473394d4bc5c6652904d83d968f0027f53ea219b1696c7833a25252a3ba326.jpg)
428
+ Figure 10: Offender B average Silhouette Values Vs Number of Clusters
429
+
430
+ is important to note, however, that the algorithm does not always pick 4 clusters; for example, the clustering of Offender C's crimes is optimized with 2 clusters.
431
+
432
+ ![](images/5782a5efcea39425baf8ca353e32096129e5cbe0db9a11834d193ac1002225a4.jpg)
433
+ Figure 11: Offender B Crime Points Sorted into Four Clusters
434
+
435
+ We now have a method that not only clusters the data, but also determines, based on our distance metric, what an optimal clustering is. We can now take the output of this algorithm and proceed to develop our likelihood surface.
436
+
437
+ # 8.1.2 Cluster Loop Algorithm
438
+
439
+ From this point forward our clustering algorithm draws heavily on our centroid algorithm, with a few small changes. The first step is to compute the likelihood surface for the centroid of each cluster; in the case where a cluster contains more than two points, we simply use our centroid algorithm to develop this surface. If a cluster contains a single point we treat it differently: we do not assume that this cluster represents an anchor point.
440
+
441
+ Instead, we treat this point as an outlier. However, on the theory that this point exists for some reason, we still want to add some likelihood to the area. To this end we use a Gaussian centered at that point as the cluster's likelihood surface, with its width determined by the expected value of the gamma distribution placed over every anchor point of a cluster that has more than one point.
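This outlier handling can be sketched as a Gaussian bump on the grid, normalized like the other cluster surfaces. The grid, center, and width below are illustrative stand-ins (in the model the spread would come from the gamma fits over the multi-point clusters):

```python
from math import exp

def gaussian_surface(cells, center, width):
    # unnormalized Gaussian bump over the grid cells, then normalized to sum 1
    raw = [exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * width ** 2))
           for (x, y) in cells]
    total = sum(raw)
    return [v / total for v in raw]

cells = [(x, y) for x in range(3) for y in range(3)]   # a toy 3x3 grid
surface = gaussian_surface(cells, center=(1, 1), width=1.0)
```

Because the result sums to 1, the singleton cluster's surface can be weighted and combined with the gamma-based cluster surfaces on equal footing.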
442
+
443
+ # 8.1.3 Combining Cluster Predictions: Temporal and Size Weighting
444
+
445
+ Using the separate likelihood surfaces computed for each cluster, we can create our final surface through a linear combination of the individual surfaces. The surfaces returned by the centroid function are normalized, so they are directly comparable.
446
+
447
+ We now compute weights for each likelihood surface. Each weight has two major components. The first is the number of points in the corresponding cluster divided by the total number of points; this favors the more common crime locations. The second is the average temporal index of the events in a given cluster divided by the index of the final point; this favors more recent clusters and is not influenced directly by the number of points in a cluster.
448
+
449
+ Once all of the likelihood surfaces are weighted, they are summed together and the result is again normalized to have sum 1. This weighted sum is the final output of our algorithm and has the same form as the output of our centroid algorithm.
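The weighting and combination step can be sketched as follows, assuming each cluster surface is already flattened and normalized to sum 1 (the toy surfaces, sizes, and temporal indices below are our own):

```python
def combine(surfaces, cluster_sizes, mean_time_indices, last_index):
    total_pts = sum(cluster_sizes)
    # weight = (cluster size fraction) + (mean temporal index / final index)
    weights = [n / total_pts + t / last_index
               for n, t in zip(cluster_sizes, mean_time_indices)]
    combined = [sum(w * s[i] for w, s in zip(weights, surfaces))
                for i in range(len(surfaces[0]))]
    norm = sum(combined)
    return [v / norm for v in combined]   # renormalize to sum 1

s1 = [0.5, 0.5, 0.0, 0.0]   # flattened surface for cluster 1 (sums to 1)
s2 = [0.0, 0.0, 0.5, 0.5]   # flattened surface for cluster 2 (sums to 1)
final = combine([s1, s2], cluster_sizes=[6, 2],
                mean_time_indices=[3.5, 7.0], last_index=8)
```

Here the larger cluster outweighs the more recent one, so `final` leans toward `s1`; with a more recent and similarly sized second cluster the balance would flip.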
450
+
451
+ # 8.2 Results and Analysis
452
+
453
+ The three test data-sets conveniently display the cluster method's superior adaptability. In the case of Offender C (see Figure 12), for example, the highly localized nature of the data points means there is very little difference between the centroid method and the cluster method results (and indeed, as the data becomes increasingly centralized, one would hope the two
454
+
455
+ results would converge). In fact, the only difference is that the cluster method, insisting on at least two separate clusters, identifies the point directly below the centroid as a cluster of one (an outlier), and therefore excludes it from the computation of the larger cluster's centroid (a slight Gaussian contribution from this point's "own" cluster can be seen in the surface plot). This has the effect of slightly reducing the variance and therefore narrowing the fit function; consequently, the standard effectiveness multiplier rises slightly to 15.82.
456
+
457
+ Testing with Offender B (see Figure 13), by contrast, shows the cluster method operating at the other edge of its range, as the silhouette optimization routine decides this data is best represented by four distinct clusters with four separate anchor points. After unblinding the details of this case, we discover that the researcher documenting it also found it notable that the offender seemed to choose his targets alternately among three city blocks, then briefly expanded his range geographically to an additional neighborhood further away before returning to offend again in one of his earlier active areas. The researcher hypothesizes this may have been a tactic to avoid known locations of parole officers, as the offender was a parolee during the majority of his crimes;[9] whatever the reason, it is a success (if a statistically insignificant one) for our model that we discerned this pattern from the geographic dispersion alone. This data-set also provides a good example of a case where multiple-anchor-point analysis seems valid even though most of the anchor points do not represent homes or workplaces: the model is simply responding to the statistical evidence that the criminal seems most comfortable committing crimes in one of four distinct locations.
458
+
459
+ At first glance, however, it might appear that the centroid method still outperforms the cluster algorithm for this data-set; after all, the actual crime point no longer appears in the band of maximum likelihood. This is true and intentional, as the model chooses to weight the largest cluster most strongly and the "freshest" cluster next. Nevertheless, while not accurately predicting that the offender would return to one of his earliest activity zones, the cluster method still outperforms the centroid method, with a $\kappa_{s}$ value of 22.95. This is because the craters generated by the cluster method are sharper and taller for this data set, so fewer resources are "wasted" on high-likelihood areas where no crime is committed. We argue that this is the correct weighting: a model which makes a bolder, narrower prediction and is still successful is more valuable than one which covers the right point by simply splashing guesses across a large swath of area.
460
+
461
+ Unsurprisingly, the cluster model fares no better than the centroid method when confronted with Offender A (see Figure 14). Indeed, since the outlier points are excluded from the centroid calculation for the larger cluster, the model bets even more aggressively on this macro-cluster, and with a resulting measurement of $\kappa_{s} = 0.001$ , we can see that following this procedure would virtually eliminate the ability of the police to anticipate the next crime.
462
+
463
+ On the whole, these low-statistics tests are consistent with (while still not predictive of) our earlier belief that the cluster model is more concerned with avoiding waste: it makes more precise guesses in the hope of wasting fewer resources on average, but runs the risk of being dramatically wrong from time to time. This can also be seen from the comparatively smaller size of the buffer zones within craters, and the tendency for the maximum point on the likelihood surface plot to be higher when using the cluster method. This tendency of the cluster method to gamble more aggressively has paid off in two of the three datasets we examine (one might call it "cluster luck"), but our complete model will need a way to balance this boldness when a more conservative approach is warranted.
464
+
465
+ ![](images/436241a80788e4bc4bde06de03d29b2263c9e91801ceafb652a24cf18011a25c.jpg)
466
+ (a) Heat Map of Offender C Predictions, Cluster Model
467
+
468
+ ![](images/f1944bcce5490866154f45078cc154887fe4effb7625663e536a27662639f85f.jpg)
469
+ (b) Surface Plot of Offender C Predictions, Cluster Model
470
+ Figure 12: Offender C, Cluster Model Output (a), (b)
471
+
472
+ ![](images/265ea5fbf99d70807c678cdcfcf704f4ac29ce060b13e7f7db32f223992bfb5a.jpg)
473
+ (a) Heat Map of Offender B Predictions, Cluster Model
474
+
475
+ ![](images/e3d98664133b7e44d93dba17017847fd5bfbff041f1ea07d913496e4d02089c4.jpg)
476
+ (b) Surface Plot of Offender B Predictions, Cluster Model
477
+ Figure 13: Offender B, Cluster Model Output (a), (b)
478
+
479
+ ![](images/b90d306a86a80f702dd910f4b3104a03cdb2cdd64d0748f2e6c636aecd0a5380.jpg)
480
+ (a) Heat Map of Offender A Predictions, Cluster Model
481
+
482
+ ![](images/ee1081b3e5db55ce66b8feac4d9d2a4c578cf46c3c685fc1bcd4fa16aee21ce7.jpg)
483
+ (b) Surface Plot of Offender A Predictions, Cluster Model
484
+ Figure 14: Offender A, Cluster Model Output (a), (b)
485
+
486
+ # 8.3 Possible Extensions of the Cluster Model
487
+
488
+ One potential extension of the cluster algorithm would be to allow it to consider a higher maximum number of clusters. The common trend in geographic profiling research is that most serial crime data-sets do not suggest more than three or four anchor points,[15] though this is not strongly validated by empirical studies. It seems intuitively clear that, since most violent serial crime sprees contain between 5 and 25 offenses,[2] appreciably raising the maximum cluster count would be unreasonable. On the other hand, if one were confronted with an unusually long crime spree (say, 40 events or more), or if one wanted to apply this to residential crime sprees (which are generally longer[12]), it might well be prudent to allow 5 or more clusters. The ideal maximum in such cases would have to be determined by testing across sample data, or through an intelligent human guess.
489
+
490
+ Another natural way to extend the cluster model would be to allow it to create only one anchor point when there is insufficient evidence that anything deserves to be treated as a separate cluster. However, mathematical problems arise in attempting to implement this; for one thing, the silhouette of a point is only defined if there is more than one cluster under consideration, so there is no simple way to evaluate via silhouettes whether the single-cluster case is superior to the others. Alternatively, one could fall back to a single cluster whenever the average silhouette value drops below some threshold; as always, however, we can only select this threshold arbitrarily, since we do not have sufficient data to estimate a reasonable value experimentally. As we shall see, however, there is a different method which can be used to combine these two schemes.
491
+
492
+ # 9 The Final Predictor: Combining the Schemes
493
+
494
+ Given that our two methods each show some ability to identify geographic trends in serial criminals, we develop a method to decide if, in a given situation, one method is more appropriate than the other. On the way to this decision we also present criteria for whether or not our methods have any predictive value at all.
495
+
496
+ The first step towards making a decision is to run both algorithms many times on truncated data and compute effectiveness values at each step. This is accomplished by truncating the available data and testing both algorithms against the actual next crime. Starting with $N$ crimes, we seek to determine which method, if either, should be used to predict crime $N + 1$. Based on our assumption that one does not attempt prediction with fewer than 4 crimes, we must enforce $N \geq 5$. Now, for $M = 4, \dots, N - 1$, we use the first $M$ crimes to predict crime $M + 1$, and then, based on the actual location of crime $M + 1$, determine the effectiveness multiplier. Denote
497
+
498
+ these effectiveness multipliers $\kappa_{M}^{m,c}$ , where $\kappa_{M}^{m}$ indicates the multiplier for the cluster method and $\kappa_{M}^{c}$ the centroid method. Also denote $\overline{\kappa_{M}^{m,c}}$ as the mean of $\kappa_{4}^{m,c},\ldots,\kappa_{M}^{m,c}$ . In Figure 15 we see $\overline{\kappa_{M}^{c}}$ and $\overline{\kappa_{M}^{m}}$ for Offender C. In Figure 16 we see $\kappa_{M}^{m}$ and $\kappa_{M}^{c}$ for Offender C.
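The truncation loop can be sketched as follows. The `predict` function is a stand-in for either method, and we read the effectiveness multiplier as the predicted likelihood at the actual cell divided by the uniform likelihood, matching the "times better than a random guess" interpretation; both are our own simplifications for illustration:

```python
def backtest(crimes, predict, n_cells):
    # kappa_M for M = 4 .. N-1: fit on the first M crimes, score crime M+1
    kappas = []
    for M in range(4, len(crimes)):
        surface = predict(crimes[:M])      # maps cell -> likelihood, sums to 1
        actual = crimes[M]                 # the (M+1)th crime, 0-indexed
        kappas.append(surface.get(actual, 0.0) * n_cells)  # vs. uniform 1/n_cells
    return kappas

def running_mean(values):
    # the mean-so-far curve plotted in Figure 15
    out, total = [], 0.0
    for i, v in enumerate(values, 1):
        total += v
        out.append(total / i)
    return out

# a stand-in predictor that spreads likelihood uniformly: kappa = 1 everywhere
crimes = list(range(10))
uniform = lambda history: {c: 1 / 10 for c in range(10)}
kappas = backtest(crimes, uniform, n_cells=10)
```

Running `backtest` once with the centroid predictor and once with the cluster predictor yields the $\kappa_{M}^{c}$ and $\kappa_{M}^{m}$ sequences used below.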
499
+
500
+ ![](images/944e8994231a0bc9b24a31eb069f6800b989984b365560854b64bbbb275ba569.jpg)
501
+ Figure 15: Running Mean of Effectiveness Multiplier
502
+
503
+ ![](images/0672100dbde5236b2975014d50beefe809c5ece140600e1c5a9854fa8118f48e.jpg)
504
+ Figure 16: Effectiveness Multiplier vs. Crime Predicted
505
+
506
+ We now determine whether either of our methods recognizes a geographic trend. To do this, denote by $a_{c}$ the number of $\kappa_{M}^{c}$ that are less than 1, and similarly denote by $a_{m}$ the number of $\kappa_{M}^{m}$ that are less than 1. If $a_{c} > \frac{N - 4}{2}$ we declare our centroid method invalid, and similarly
507
+
508
+ if $a_{m} > \frac{N - 4}{2}$ we declare our cluster method invalid. We place the break point at half of the effectiveness multipliers being less than 1 because that would mean that over $50\%$ of the time we do not increase effectiveness, indicating that our algorithm is not detecting geographic trends in the data.
509
+
510
+ If only one of the methods is determined to be valid, we use that one. If neither is valid, we simply state that no geographic trends have been found in the data, and that our algorithm should not be heavily relied upon relative to other methods which may be available.
511
+
512
+ If both methods are determined to be valid, we simply compare $\overline{\kappa_{N-1}^{c}}$ and $\overline{\kappa_{N-1}^{m}}$: if $\overline{\kappa_{N-1}^{c}} > \overline{\kappa_{N-1}^{m}}$ we pick the centroid method; otherwise we pick the cluster method. In Figure 15 we note that the cluster method would be picked.
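The full decision rule fits in a short function (the function name is ours; the inputs are the $\kappa_{M}$ sequences for $M = 4, \dots, N-1$):

```python
def choose_method(kappa_centroid, kappa_cluster):
    """Return "centroid", "cluster", or None (no geographic trend detected)."""
    n_tests = len(kappa_centroid)   # there are N - 4 prediction tests
    # a method is invalid if more than half its multipliers fall below 1
    valid_c = sum(k < 1 for k in kappa_centroid) <= n_tests / 2
    valid_m = sum(k < 1 for k in kappa_cluster) <= n_tests / 2
    if valid_c and valid_m:
        mean_c = sum(kappa_centroid) / n_tests
        mean_m = sum(kappa_cluster) / n_tests
        return "centroid" if mean_c > mean_m else "cluster"
    if valid_c:
        return "centroid"
    if valid_m:
        return "cluster"
    return None
```

For example, with both sequences mostly above 1 the larger mean wins, and with both mostly below 1 the function reports that no trend was found.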
513
+
514
+ # 9.1 Results and Analysis
515
+
516
+ Using the aforementioned decision algorithm, our combined method would choose the clustering algorithm for all three cases (the end results then being those given in the cluster section above). In fact, one would need a highly cohesive single cluster of data with no outliers for the centroid method ever to prevail, and this is as it should be: even when there appears to be only one true anchor point, the cluster method is capable of rejecting up to 3 statistical outliers before computing the centroid, which should in general improve the fits and consequently improve the results.
517
+
518
+ Our combined model contains one other piece of information: the average effectiveness multiplier $\overline{\kappa_s} = \overline{\kappa_{N - 1}^{m,c}}$, which we now treat as a rough confidence estimate for the model when applied to a given data set. While the interpretation is not perfect, this value essentially tells us how well the algorithm would have fared against this criminal in the past, and assumes it will also represent the algorithm's effectiveness in the immediate future. The flatness in the tail of the $\overline{\kappa_s}$ plot for Offender C (Figure 15), while insufficient to justify this metric, is at least consistent with the assumption. We can also test this mean against the actual values computed above:
519
+
520
+ <table><tr><td>Offender</td><td>$\overline{\kappa_s}$ (projected)</td><td>$\kappa_s$ (actual)</td></tr><tr><td>C</td><td>6.65</td><td>15.82</td></tr><tr><td>B</td><td>8.473</td><td>22.95</td></tr><tr><td>A</td><td>12.25</td><td>0.001</td></tr></table>
521
+
522
+ As one would suspect for the mean over a small number of points, it is not a perfect estimator of the next value. Our now-familiar nemesis, Offender A, completely deceives the model once again, because it predicts his movements extremely well up until his very last crime. It does seem, though, that if one allows for occasional outliers, this mean effectiveness multiplier
523
+
524
+ provides a ballpark estimate of our confidence in the model. Note that in the two better-behaved cases, the model with the higher projected effectiveness $\overline{\kappa_s}$ is in fact the more effective model in reality. Moreover, by simple properties of averages, it should be true that, to within the level of uncertainty created by the number of points in the data-set, when one compares the size of $\overline{\kappa_s}$ for the same model on two different data sets, the one with the larger $\overline{\kappa_s}$ will on average be the more trustworthy.
525
+
526
+ # 9.1.1 The Psychic, the Hairfinder, and the Intuitive Cop
527
+
528
+ Although the size of our datasets will not allow us to demonstrate anything conclusively, the model we develop is a strong candidate to beat the psychic and the hairfinder (i.e., random chance), for several reasons:
529
+
530
+ - The predictions are based on the assumption of trends in serial crime behavior which have been tested on large sets of real-world data. [12] [2] [8]
531
+ - Similar mathematical techniques are used in the anchor-point estimation schemes currently being employed and researched, and those schemes also consistently outperform random guesses when tested across data samples.
532
+ - The model was successful in two of the three datasets on which it was tested, and these datasets were selected from preexisting research as representative samples of serial crime data.
533
+ - Several crimes in each dataset can be predicted well, including in the case where our model fails to predict the final, outlier crime point: i.e., in a dataset of 16 crimes, we are often able to predict the 15th crime using the preceding 14, the 14th using the preceding 13, etc., all with more success than a random guess.
534
+
535
+ This evidence suggests, without proving, that on average our model will be better than blind chance. We do not, however, claim that we will do better than a police department assigning resources with no model at all. In the absence of mathematical guidance, a policeman in such a department would have to rely on his own knowledge of his patrol area, his sense of patterning in the offender's crimes, and any "gut feelings" he has developed about the offender's psychology, based on his previous experience in law enforcement. This "Intuitive Cop" has considerably more information at his disposal than our model, as he knows, for example, that a killer targeting prostitutes is more likely to strike in the red-light district of his city. Additionally, research suggests that with a little informal training, any layperson can perform nearly as well as anchor point prediction algorithms when guessing a criminal's
536
+
537
+ home location.[18] As a result, one might well expect the intuitive cop to outperform the model overall, as he can apparently approximate many of its mathematical strengths through intelligent estimation and also brings a breadth of knowledge to bear about the physical characteristics of the underlying map.
538
+
539
+ # 9.2 Possible Extensions to the General Model
540
+
541
+ # 9.2.1 Partnership With the Intuitive Cop
542
+
543
+ The best possible extension of this model, therefore, is simply to ensure that its output is always interpreted by the intuitive cop himself (or indeed, by the intuitive police force as a whole). In this way, obvious "dead zones" in the underlying map (such as lakes, military bases, or uninhabited farmland) can be immediately rejected as priority areas without significantly decreasing the odds of preventing the next crime; this alone increases the effectiveness multiplier for the combined process. Similarly, the intuitive cop might be called upon to manually set (or at least experiment with) some of the parameters we assign based purely on geography. It may be clear to a human, for example, that the killer in question has three anchor points, because he is committing crimes in each of three different rural towns. If these towns are quite close together, however, our model could be deceived into believing that there are fewer than 3 clusters on the map.
544
+
545
+ Just as it should be for statisticians, historians, and tabloid journalists everywhere, the general mantra for a police department hoping to employ computational geographic profiling should be "more data is better." We have repeatedly made the point that our model cannot be considered robust until it has been tested on a large dataset, but it is similarly true that any additional data input, in the form of human intuition and interpolation, will tend to improve results. The best extension of this algorithm is to ensure that it does not overrule the pre-existing predictions and theories of local law enforcement, but rather augments them.
546
+
547
+ # 9.2.2 Crime Prediction Versus Anchor Point Prediction
548
+
549
+ Finally, as all our discussion to this point has presupposed that predicting a criminal's next strike point is the right thing to do, it is important to observe that in many ways, this may not always be the case. On the contrary, having examined in detail the literature on quantitative criminology, we find at least three reasons to believe it is often a better practice to
550
+
551
+ attempt to predict the location of his anchor points instead.
552
+
553
+ 1. Uncertainty in the estimate will tend to be smaller for an anchor point predictor. This is easy to see from the inner workings of our algorithm. Each method postulates the existence of an anchor point at the centroid of a local cluster (or the entire dataset), and then predicts future crimes based on geographic trends about that anchor. As a result, all the uncertainty in the anchor point prediction will contribute also to error in the crime point prediction, and on average, the total uncertainty will be their quadrature sum, which is necessarily greater than or equal to the uncertainty in the anchor location itself.
554
+ 2. Anchor point prediction schemes are much more widely researched and documented. Criminologists like Rossmo, Canter, and many others have studied a broad family of anchor point predictors and tested them against large sets of real-world data from serial criminals. This allows them to optimize over a wide variety of parameters such as distance functions, decay weightings, temporal weightings, etc. While our model attempts to compensate by minimizing the number of such parameters that need to be set and choosing the others based on the data being examined, the fact remains that its methods simply have not been validated to the same level of confidence.
555
+ 3. Real-world considerations often make an anchor point search more practical. The simple truth is that, equipped with a prediction of likely crime points, the best a police force can hope to do is increase patrol levels and other resources in the likely areas, and wait to see if the criminal strikes again. On the other hand, when given a predicted region for the criminal's anchor point, they can take a more active approach by searching the area in question for suspicious behavior, eyewitness data, and additional evidence. Recall that at least one of the anchor points is almost certainly the suspect's home (one study found this to be true $83\%$ of the time[15]); and consequently, pursuing this anchor point can give the police a sense of the perpetrator's home region, provides potential neighbors who can be interviewed, and could even lead to the capture of the criminal in his own home.
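The quadrature-sum argument in point 1 can be made explicit. Writing $\sigma_{\text{anchor}}$ for the uncertainty in the anchor-point estimate and $\sigma_{\text{disp}}$ for the spread of crimes about the anchor (symbols ours, for illustration), independent errors combine as

```latex
\sigma_{\text{crime}} \;=\; \sqrt{\sigma_{\text{anchor}}^{2} + \sigma_{\text{disp}}^{2}} \;\geq\; \sigma_{\text{anchor}},
```

so the crime-point prediction can never be more certain than the anchor-point prediction on which it is built.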
556
+
557
+ We do not completely dismiss the utility of crime-point prediction: it has the obvious benefit of potentially saving a life or preventing a rape. Depending on circumstances, it could also be of use in predicting the body location of a missing person presumed to have been taken by a serial criminal, or even in determining whether a new crime was committed by the same individual as those in a previous dataset. Once again, we appeal to the expertise of the intuitive cop: when resources permit or circumstances demand, predicting a future crime point may be a good idea, but our model itself makes no claims as to whether this is so in any given case.
558
+
559
+ # 10 Conclusion
560
+
561
+ Taking only the most basic and well-supported trends in the psychological and geographical patterns of violent serial criminals, one finds that a basic estimate of a criminal's next strike point can be obtained from only the spatial-temporal coordinates of previous offenses. The twin assumptions, that a serial criminal has an "anchor point" (like a house) and that, to avoid detection, he tends to avoid committing crimes in his own backyard, have been used for years in quantitative techniques to estimate a criminal's home or other anchor point. We instead begin with this likely anchor point location. Then we use it to predict potential next-strike points by presuming the distribution of crimes around the anchor is described by a $\Gamma$ distribution in the radial distance from the anchor. The use of a $\Gamma$ distribution allows us to fit its mean and variance to the points already on the grid. In this way, we create a buffer zone only when the existing data suggest one, and with a size also defined by the existing data.
562
+
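The fitting step described above can be sketched numerically. The sketch below is a minimal illustration with hypothetical crime coordinates, taking the centroid as the anchor estimate; it matches the gamma distribution's mean and variance to the sample moments of the radial distances (a method-of-moments fit), which is one standard way to carry out the fit described here.

```python
import math
from statistics import fmean, pvariance

def fit_gamma_moments(radii):
    # Method of moments: solve mean = k*theta and variance = k*theta^2
    # for the shape k and scale theta of the gamma distribution.
    m = fmean(radii)
    v = pvariance(radii, mu=m)
    return m * m / v, v / m   # (shape k, scale theta)

# Hypothetical crime points; the anchor estimate here is simply the centroid.
pts = [(1.0, 2.0), (2.5, 1.5), (2.0, 3.0), (0.5, 2.5), (3.0, 2.0)]
ax = fmean(p[0] for p in pts)
ay = fmean(p[1] for p in pts)
radii = [math.hypot(x - ax, y - ay) for x, y in pts]
shape, scale = fit_gamma_moments(radii)
# When the fitted shape exceeds 1, the gamma density vanishes at r = 0,
# so the data themselves induce a buffer zone around the anchor point.
```

Because both parameters come from the sample moments, no arbitrary buffer-zone size needs to be supplied by hand.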
563
+ There is, however, nothing fundamental about the single anchor point assumption, and some research suggests it is not uncommon to find more than one. We therefore also consider a model which begins by searching for evidence of multiple anchor points. This model uses cluster analysis techniques to detect groups of points which have more in common with each other than with the larger group. Then, we assume an additional anchor point at the center of each cluster, and model each cluster with its own gamma distribution. Weighting the clusters relative to each other allows us to prioritize larger clusters or ones the criminal has visited more recently.
564
+
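A minimal sketch of the clustering step, on hypothetical crime coordinates. The paper selects the number of clusters from the data; here we fix k = 2 for illustration and use Lloyd's k-means with a deterministic farthest-point initialization. The converged cluster centers serve as the candidate anchor points.

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's k-means with deterministic farthest-point initialization.
    The converged centers serve as candidate anchor points."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: math.dist(p, centers[j]))].append(p)
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical crime sites forming two well-separated groups.
crimes = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, groups = kmeans(crimes, k=2)
# Each recovered center approximates one group's centroid; each cluster would
# then be modeled with its own gamma distribution and weighted by size/recency.
```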
565
+ Lacking any readily available, public-domain datasets, we extract data from past research projects and test these two models on the cases of three serial rapists. We define a parameter $\kappa_{s}$ , which essentially represents how much better a police department would do in attempting to stop the next crime if they distributed their energy and resources according to our model rather than randomly. We find evidence that our models are substantially better than simple guessing, but the improvement is not statistically significant given the amount of data available to us.
566
+
567
+ Finally, we propose a method for treating a general serial crime data set, by testing both models' abilities to predict known points, and deciding which is most likely to succeed in future predictions. This also allows us to calculate a new value, $\overline{\kappa_s}$ , which roughly represents our confidence in the model. A higher $\overline{\kappa_s}$ value indicates a higher degree of confidence in future predictions, although we demonstrate a real-world dataset where the presence of an outlier contradicts our confidence level.
568
+
569
+ On the whole, we believe we can produce a model which can beat a "psychic and a hair finder"—i.e., which can outperform random guesses and uniform distributions of resources. We cannot, however, reasonably expect to beat an "intuitive cop"; a human with a strong familiarity
570
+
571
+ with the actual map underlying our abstract data points will always have access to information we do not. Indeed, to be most effective, our model should be combined with this kind of human intelligence.
572
+
573
+ Finally, it is important to note that while predicting the next crime point is an interesting mathematical problem and would be an excellent tool in theory, there may be many cases where it is not the ideal approach in practice. When it is, we provide a potentially powerful tool for predicting future crime locations.
574
+
575
+ # 11 Executive Summary
576
+
577
+ # Overview: Strengths and Weaknesses of the Model
578
+
579
+ This paper presents a model of where a violent serial criminal will strike next, based upon the locations and times of his previous crimes. Our algorithm creates a color-coded map of the area surrounding the criminal's previous strikes, with the color at each point indicating the likelihood of a strike there. The model has several key strengths:
580
+
581
+ - The model contains no arbitrary parameters. In other words, most aspects of the model are determined simply by trends which researchers have observed in many serial criminals across many datasets.
582
+ - The model can estimate the level of confidence in its predictions. Before making a prediction, our model first checks how well it would have predicted the criminal's previous crimes, in order to provide an estimate of how well it can predict his future crimes.
583
+ - The model understands that police have limited resources. In particular, the confidence number described above becomes large if we are making good predictions, but will shrink again if our predicted area is so large as to become unhelpful.
584
+
585
+ At the same time, it must be observed that our model contains some fundamental limitations:
586
+
587
+ - The model is only clearly applicable to violent serial criminals. We only claim applicability in the case of serial killers and rapists, as our research shows serial burglars and arsonists are more unpredictable and influenced by non-geographic factors.
588
+ - The model cannot predict when a criminal will strike. While we do consider the order of previous crimes in our model in order to predict locations, we do not predict a strike time.
589
+ - The model cannot make use of underlying map data. To maintain generality, we do not make any assumptions about the underlying physical geography. A human being must interpret the output (for example, choosing to ignore a prediction in the middle of a lake).
590
+ - The model has not been validated on a large set of empirical data. The fact is, sizable sets of data on serial criminals are not widely available. Any police department using this model should test it on a large number of previous cases first, as described in our paper.
591
+
592
+ In addition, the standard warnings that would apply to any geographic profiling scheme apply: The output should not be treated as a single prediction, but rather as a tool to help prioritize areas of focus. It is designed to do well on average, but may fail in outlier cases. And to implement it reliably, it must be paired with a human assessment. Please note also that models for predicting the offender's "home base" are much better researched, and in general will be more accurate than any algorithm claiming to predict the next strike point. A police department should choose which model type to use on a case-by-case basis.
593
+
594
+ # Internal Workings of the Model
595
+
596
+ # Inputs
597
+
598
+ Our model requires the coordinate locations of a serial criminal's previous offenses, as well as the order in which these crimes were committed.
599
+
600
+ # Assumptions
601
+
602
+ 1. The criminal will tend to strike at locations around one or more "anchor points" (often, the criminal's house).
603
+ 2. Around this anchor point, there may be a "buffer zone" within which he will not strike.
604
+ 3. If the criminal has multiple anchor points, the regions around those from which he has struck most often or most recently are more likely.
605
+
606
+ # Method
607
+
608
+ The algorithm implements two different models, and then decides which is better.
609
+
610
+ - In the first method, we assume the criminal has a single anchor point, and build likelihood regions around the anchor point based on the distribution of his past crimes.
611
+ - In the second, we assume multiple anchor points, calculate the best number of anchor points to use, and determine likelihood around each point individually. Then the area around each is weighted by the criminal's apparent preferences, if any.
612
+
613
+ Finally, the algorithm tests both models to see how well they would have predicted the previous crimes, and uses the model with the better track record.
614
+
615
+ # Summary and Recommendations
616
+
617
+ While our model needs more real-world testing, its strong theoretical basis, self-evaluation scheme and awareness of practical considerations make it a good option for a police department looking to forecast a criminal's next strike. Combining its results with the intuition of a human being will maximize its utility.
618
+
619
+ # References
620
+
621
+ [1] R. Boba. Crime analysis and crime mapping. Sage, 2005.
622
+ [2] D. Canter, T. Coffey, M. Huntley, and C. Missen. Predicting serial killers' home base using a decision support system. Journal of Quantitative Criminology, 16(4):457-478, 2000.
623
+ [3] D. Canter and P. Larkin. The environmental range of serial rapists. Journal of Environmental Psychology, 13:63-63, 1993.
624
+ [4] G.M. Godwin and F. Rosen. Tracker: hunting down serial killers. Running Press, 2005.
625
+ [5] R.V. Hogg, A.T. Craig, and J. McKean. Introduction to mathematical statistics. 1978.
626
+ [6] R.M. Holmes. Contemporary perspectives on serial murder. Sage Pubns, 1998.
627
+ [7] AK Jain, MN Murty, and PJ Flynn. Data clustering: a review. ACM computing surveys (CSUR), 31(3):264-323, 1999.
628
+ [8] R.N. Kocsis and H.J. Irwin. Analysis of Spatial Patterns in Serial Rape, Arson, and Burglary: The Utility of the Circle Theory of Environmental Range for Psychological Profiling, An. Psychiatry Psychol. & L., 4:195, 1997.
629
+ [9] J.L. LeBeau. Four case studies illustrating the spatial-temporal analysis of serial rapists. *Police Stud.: Int'l Rev. Police Dev.*, 15:124, 1992.
630
+ [10] T.P. Minka. Estimating a Gamma distribution. 2002.
631
+ [11] Derek J. Paulsen. Predicting next event locations in a time sequence using advanced spatial prediction methods. In 2005 UK Crime Mapping Conference, 2005.
632
+ [12] D.J. Paulsen and M.B. Robinson. Crime mapping and spatial aspects of crime. Prentice Hall.
633
+ [13] D.K. Rossmo. Geographic profiling: Target patterns of serial murderers. PhD thesis, Simon Fraser University, 1995.
634
+ [14] D.K. Rossmo. NCIS Conference. 1998.
635
+ [15] D.K. Rossmo. Geographic profiling. CRC, 1999.
636
+ [16] P.J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 20(1):53-65, 1987.
637
+
638
+ [17] B. Snook, R.M. Cullen, A. Mokros, and S. Harbort. Serial murderers spatial decisions: Factors that influence crime location choice. Journal of Investigative Psychology and Offender Profiling, 2(3), 2005.
639
+ [18] B. Snook, P.J. Taylor, and C. Bennell. Geographic profiling: The fast, frugal, and accurate way. Applied Cognitive Psychology, 18(1):105-121, 2004.
MCM/2010/B/7947/7947.md ADDED
@@ -0,0 +1,439 @@
1
+ # Summary
2
+
3
+ We develop geographical profiling methods that determine the probable location of a serial criminal's next crime based on the spatiotemporal data of their previous crimes. We assume that the spatial behavior of a serial criminal is non-random.
4
+
5
+ We consider standard deviation, centralization, and probability distance methods for prioritizing a given search area. We also develop analogous methods which weight the spatial data of recent crimes more heavily. We then develop ways of aggregating results from multiple methods.
6
+
7
+ The performance of a geographical profiling method is based on its effectiveness in narrowing down a particular search area into regions likely to contain the next crime location. The accuracy of a method for a particular serial criminal is based on the past performance of the method on the spatiotemporal data of that criminal.
8
+
9
+ All of our methods produce a prioritized search area in which the next crime occurs in approximately the top $10\%$ of the search area, a significant improvement over a uniform random distribution. However, the differences in performance among the different methods developed were statistically insignificant. We also found the accuracy of our methods varied depending on the serial criminal under investigation.
10
+
11
+ # Executive Summary
12
+
13
+ We have developed models to be used in an open investigation of a serial criminal. In this summary we describe how our models can best be employed to help identify probable next target locations of the serial criminal.
14
+
15
+ Our models use the locations and times of previous crimes committed by a serial criminal in order to predict the likely next target location. This prediction prioritizes the search area so that law enforcement resources can be focused on the most probable next target regions. Our models prioritize the search areas in different fashions:
16
+
17
+ - Centralization models: These models are based on the assumption that a serial criminal's activity is centered around a point.
18
+ - Probability distance models: In these models, the closer a location is to previous crimes, the higher the probability the serial criminal will strike there next.
19
+ - Time-based models: In these models, a location's proximity to recent crimes is weighted more heavily than its proximity to crimes in the distant past.
20
+ - Aggregate models: These models combine the above models.
21
+
22
+ There are multiple factors to consider when deciding if a particular model is applicable to a serial criminal investigation:
23
+
24
+ - Evidence of serial nature: Our models assume that all of the observed crimes were committed by one criminal. If there is a strong reason to doubt this assumption, such as the presence of a copy-cat criminal, then our models are not applicable.
25
+ - Accuracy score: Our models provide a statistic that estimates their accuracy as applied to the serial criminal under investigation. If our models would not have performed well in predicting the locations of this serial criminal's past crimes, there is no reason to believe that our models would accurately predict the location of the criminal's next crime.
26
+
27
+ - Available resources: Our models prioritize the search area such that the next crime location is on average in the top $10\%$ of the prioritized search area. Even so, this top-priority region can be prohibitively large to cover with the resources available to the investigation; in that case our models remain applicable but may not yield optimal results.
28
+
29
+ If it is determined that our models are applicable to a serial criminal investigation, then there are multiple limitations of our models which need to be taken into consideration:
30
+
31
+ - Our models do not take into account relevant geographical information: For example, if you know the serial criminal is targeting banks, our models do not take this into account. In this example, a list of bank locations can be cross-referenced against the prioritized search area given by our models to determine the banks that are likely to be targeted next.
32
+ - Our models do not take into account relevant information about the criminal: The average distance that a serial criminal travels to commit a crime depends on the type of crime and characteristics of the criminal, including gender, race, and age. Our models assume no information of this type is known about the criminal.
33
+ - Our computer models may not outperform geographical profiling techniques employed by a human: Research has suggested that complex computer geographical profiling methods are no more accurate than an investigator with minimal training in geographical profiling techniques.
34
+
35
+ Overall, we recommend that our geographical profiling models not be used as the only tool used in a serial criminal investigation. Instead, our models should complement traditional investigative techniques. It is our hope that our models can further aid law enforcement agencies in their pursuit of serial criminals.
36
+
37
+ # Predicting a Serial Criminal's Next Crime Location Using Geographical Profiling
38
+
39
+ Control Group 7947
40
+
41
+ February 22, 2010
42
+
43
+ # Abstract
44
+
45
+ Geographical profiling techniques can be useful to law enforcement agencies who are investigating a serial criminal. We develop such methods that determine the probable location of a serial criminal's next crime based on the spatiotemporal data of their previous crimes. We consider standard deviation, centralization, and probability distance methods for prioritizing a given search area, and also develop analogous methods which weight the spatial data of recent crimes more heavily. We then develop ways of aggregating results from multiple methods. The performance of a geographical profiling method is based on its effectiveness in narrowing down a particular search area into regions likely to contain the next crime location. All of our methods produce a prioritized search area in which the next crime occurs in approximately the top $10\%$ of the search area, a significant improvement over a uniform random distribution. However, the differences in performance among the different methods developed were statistically insignificant. We also found the accuracy of our methods varied depending on the serial criminal under investigation.
46
+
47
+ # Contents
48
+
49
+ 1 Introduction 3
50
+ 2 Problem Background 3
51
+
52
+ 2.1 Terminology 3
53
+ 2.2 Survey of Current Literature 4
54
+ 2.3 Collection of Data 5
55
+
56
+ 3 Constructing a Model 7
57
+
58
+ 3.1 Assumptions 7
59
+ 3.2 Model Descriptions 7
60
+ 3.3 Model Performance Metric 8
61
+ 3.4 Model Accuracy for a Particular Criminal 9
62
+ 3.5 Search Area Determination 10
63
+
64
+ 4 Individual Models 10
65
+
66
+ 4.1 Spatial Models 10
67
+ 4.2 Spatiotemporal Models 13
68
+ 4.3 Individual Model Results 15
69
+
70
+ 5 Aggregation Models 16
71
+
72
+ 5.1 Aggregation Model Results 19
73
+ 6 Peter Sutcliffe: A Case Study 20
74
+ 7 Conclusions 20
75
+ 8 Future Work 22
76
+
77
+ # 1 Introduction
78
+
79
+ Serial criminals present a unique challenge to law enforcement agencies. In a typical crime, investigators are able to draw a connection between the criminal and the victim. This information often provides enough clues to form the basis of a criminal investigation. In the case of serial criminals, however, there is usually no such relationship between the criminal and the victim [1, 2]. This lack of information about the serial criminal forces law enforcement agencies to consider a larger possible target area for the next crime, which hinders the investigation.
80
+
81
+ In order to better utilize limited law enforcement resources in a serial criminal investigation, geographical profiling can be used to determine likely next crime locations. Geographical profiling is “...a procedure that examines the spatial behavior of offenders with regard to the locations of their crime scenes and the spatial relationships between those scenes” [3]. By using geographical profiling, investigators are able to take advantage of the spatial patterns of a serial criminal to focus their attention on certain geographically important regions.
82
+
83
+ In this paper we examine different geographical profiling methodologies and compare their ability to predict the location of a serial criminal's next crime. In Section 2, we provide the terminology and background information that will be utilized in the rest of the paper. In Section 3, we describe the features that are common to all geographical profiling methods developed and our metrics for determining the performance and accuracy of a particular method. Section 4 develops geographical profiling methods and reports on their performance, and Section 5 considers combinations of these methods. In Section 6, we apply our best geographical profiling methods to the serial murders committed by Peter Sutcliffe. Section 7 summarizes our main conclusions, while Section 8 discusses possible avenues for future work.
84
+
85
+ # 2 Problem Background
86
+
87
+ # 2.1 Terminology
88
+
89
+ - Serial Criminal: A serial criminal is a habitual offender who commits three or more related crimes over a span of time [1, 4]. We consider all
90
+
91
+ forms of serial criminals together, such as murderers, rapists, arsonists, and burglars.
92
+
93
+ - Spatiotemporal Data: The locations and corresponding times of crimes committed by a serial criminal.
94
+ - Spatial Behavior: The spatial behavior of a serial criminal describes how the serial criminal chooses crime locations [5, 6]. We model the spatial behavior of a criminal based on the spatiotemporal data of their previous crimes.
95
+ - Geographical Profiling Method: A geographical profiling method is a particular methodology for modeling the spatial behavior of a serial criminal. We consider spatial methods that only consider the spatial data of previous crimes, spatiotemporal methods that also take into account the times of previous crimes, and aggregate methods that combine two or more geographical profiling methods.
96
+ - Prioritized Search Area: The area to search prioritized by the probability of the next crime given by a specific model. This area is important in determining how to allocate law enforcement resources.
97
+
98
+ # 2.2 Survey of Current Literature
99
+
100
+ There is an ongoing debate as to the effectiveness of different geographical profiling methodologies. Rossmo uses spatial data to determine the residence of a criminal based on assumptions about spatial behavior, such as the tendency to commit crimes close, but not too close, to home [7]. As Rossmo's paper lacks a detailed description of his algorithm, and relies on empirically determined constants, it is difficult to reproduce his geographical profiling methodology accurately [8, 7, 3]. Van der Kemp and van Koppen survey and criticize theories of spatial behavior, including theories of crime location proximity to the criminal's residence and shortcomings of current geographical profiling methods [9]. Beauregard et al. and Snook et al. also examine the spatial behavior of serial criminals [10, 5]. Brown et al. take into account spatiotemporal data as well as additional features of the crime locations, such as proximity to highways, to predict crime locations [11, 2]. Recent work by O'Leary provides a mathematical foundation for geographical profiling [12].
101
+
102
+ We note that most of the work in the field of geographical profiling focuses on identifying the residence of a serial criminal. This allows law enforcement agencies to cross-reference the addresses of potential suspects to the predicted residency of the criminal in order to narrow their search. In contrast, this paper focuses on identifying probable next crime locations in a series of linked crimes. Being able to forecast crime locations would assist law enforcement agencies in ways predicting the residency of the criminal does not. For example, law enforcement agencies would be able to increase patrols along probable crime locations, or alert high-risk neighborhoods to the existence of the serial criminal. Since serial criminals often center their crime locations around their residence [10], these two problems are related, and thus we are able to adapt methods of identifying the residence to the problem of identifying next crime locations.
103
+
104
+ # 2.3 Collection of Data
105
+
106
+ In order to determine the accuracy of proposed geographical profiling models, we needed to see how the models performed against actual spatiotemporal serial crime data. Unfortunately, there is no large collection of serial crime data to compare our models to, or a standard collection of serial crime data that is used across all geographical profiling research. A survey of spatial profiling research shows that the data sets used are varied, which leads to different quantitative conclusions about the spatial behavior of serial criminals; see [9, 8, 10, 5] for contrasting conclusions based on different data sets. For example, the circle center method described in Section 4.1 was found to have different levels of accuracy for serial crime data sets from different countries [4]. Ideally, we would be able to test our geographical profiling models against serial crime data that spanned multiple locations, crime types, and periods of time, but such a data set does not exist.
107
+
108
+ In most studies we considered, a sample of serial crime data was either obtained from a government agency or compiled from newspaper and police reports. Unfortunately, due to the constrained time frame of our study, we were unable to obtain data from police departments and government agencies. However, we were able to compile a data set consisting of the crimes of nine serial criminals, totaling 124 crimes. For the more notorious criminals in this data set, including the Beltway Snipers, Peter Sutcliffe, and Dale Hausner, we were able to collect data directly from police reports and news articles. We collected data for the remaining crimes from SpotCrime.com, an online crime information source, and verified the data via the referenced
109
+
110
+ police reports and news articles [13]. A listing of the crimes in our data set can be found in Table 1.
111
+
112
+ <table><tr><td>Description</td><td>Number of Crimes</td><td>Accuracy of Temporal Exponential Decay Model</td></tr><tr><td>Murders committed by Peter Sutcliffe in West Yorkshire, England</td><td>12</td><td>0.048</td></tr><tr><td>Beltway sniper attacks committed by John Allen Muhammad and Lee Boyd Malvo in the Washington, DC area</td><td>14</td><td>0.065</td></tr><tr><td>Murders, arson, and other crimes committed by serial criminal Dale Hausner in Phoenix, AZ</td><td>40</td><td>0.098</td></tr><tr><td>Sexual assaults committed by a serial rapist in Columbus, OH</td><td>7</td><td>0.517</td></tr><tr><td>Serial robberies at TCF Banks in Franklin Park, IL area</td><td>12</td><td>0.180</td></tr><tr><td>Serial robberies committed in Denver, CO</td><td>6</td><td>0.103</td></tr><tr><td>Serial bank robberies committed by “The Withdrawal Bandit” in Boca Raton, FL</td><td>12</td><td>0.119</td></tr><tr><td>Serial robberies committed in Columbus, OH</td><td>7</td><td>0.020</td></tr><tr><td>Serial robberies and home invasions committed in Santa Monica, CA</td><td>14</td><td>0.174</td></tr></table>
113
+
114
+ Table 1: Description of data sets with accuracy score for temporal exponential decay model.
115
+
116
+ # 3 Constructing a Model
117
+
118
+ # 3.1 Assumptions
119
+
120
+ - Spatiotemporal data is accurate: A common modeling assumption.
121
+ - Crimes were committed by one serial criminal: Our models are working on the assumption that all the crimes in a data set were committed by the same criminal, and thus the crime locations can be explained by a particular spatial behavior.
122
+ - Serial criminals do not act randomly: We assume the truth of rational crime theory, the notion that "... there is an underlying reason why criminals choose to commit crimes at a particular time in a particular location" [2]. Rational crime theory is a necessary condition for productive geographical profiling, since if serial criminals are acting randomly, then the best model of their spatial behavior would be random as well.
123
+
124
+ # 3.2 Model Descriptions
125
+
126
+ Let $x_{1}, \ldots, x_{n} \in A \subset \mathbb{R}^{2}$ denote crime scene locations of a serial criminal, where $A$ is a given search area encompassing the crime locations. Let $t_{1}, \ldots, t_{n} \in \mathbb{R}$ with $0 = t_{1} < \ldots < t_{n}$ denote the corresponding times of the crimes, where $(x_{i}, t_{i})$ is the spatiotemporal data of the $i^{\text{th}}$ crime. Given a location $y \in A$ , we define a probability function $P$ such that $P(y)$ is the probability that the next crime will happen at that location.
127
+
128
+ We discretize $A$ into sectors $S_{i,j}$ in order to simplify our calculations; see Figure 1 for a description of the discretization process. We assume that if $y \in S_{i,j}$ , then $P(y) = P(S_{i,j})$ , the probability that the next crime location is in the sector. This is a reasonable assumption for small sector sizes relative to the total search area. Thus, the goal of a geographical profiling method is to determine $P(S_{i,j})$ for all sectors.
129
+
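The sector lookup implied by this discretization can be sketched in a few lines; `origin` (the lower-left corner of the search area) and `delta_s` (the sector side length) are hypothetical names for illustration.

```python
def sector_of(y, origin, delta_s):
    """Map a location y in the search area to its sector indices (i, j)."""
    i = int((y[0] - origin[0]) // delta_s)
    j = int((y[1] - origin[1]) // delta_s)
    return i, j

# Hypothetical 10 x 10 search area split into unit-length sectors.
s = sector_of((3.7, 8.2), origin=(0.0, 0.0), delta_s=1.0)
```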
130
+ For our models, let $d$ be a given distance metric. This distance metric can be determined using a variety of methods, including standard Euclidean distances, Manhattan distances, or travel-time distances that take into account the road map data of the area. We arbitrarily choose to use Euclidean
131
+
132
+ distances since we found no evidence that serial criminals evaluate distance using a certain metric, but our models do not make any assumptions about the nature of the distance function.
133
+
134
+ ![](images/8d50ee1c5043ef38edbf76acf9592ad51e37c1f050e769f84972318c6d29d64c.jpg)
135
+ Figure 1: The discretization of a square search area into sectors. In this figure $k$ is equal to the side length of the search area divided by $\Delta s$ .
136
+
137
+ # 3.3 Model Performance Metric
138
+
139
+ There exist two commonly used metrics for the evaluation of the performance of a geographical profiling method. The first measure is the error distance, which is the Euclidean distance between the most likely exact location of the next crime and the actual next crime location [8]. The second measure represents how much of the prioritized search area would need to be searched in order to find the next crime location [12, 7].
140
+
141
+ Our model evaluation metric should reflect how useful these models would be to a law enforcement agency tracking a serial criminal. The most likely exact location of the next crime is not very valuable to law enforcement agencies, since often they are concerned with finding an area to search and not a particular point [12, 14, 7].
142
+
143
+ Thus, we consider the following metric that represents how much of the prioritized search area would need to be searched in order to find the next crime location, which we call the hit score. Given a geographical profiling method with associated probability function $P$ and a known next crime location $x_{n+1}$ , we wish to determine the hit score $H$ .
144
+
145
+ Let $S$ denote the set of all sectors. Let $L \ni x_{n+1}$ denote the sector containing the next crime location. Then
146
+
147
+ $$
148
+ B = \{ S_{i,j} \in S : P(S_{i,j}) > P(L) \}
149
+ $$
150
+
151
+ is the set of all sectors that have a higher predicted probability than the sector containing the next crime. Thus, we have
152
+
153
+ $$
154
+ H = \frac{|B|}{|S|}
155
+ $$
156
+
157
+ which is the fraction of the search area that would need to be searched in prioritized order before finding the next crime location. Note that a lower value for the hit score is preferable.
158
+
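The hit score computation can be sketched as follows, assuming a hypothetical 4 x 4 sector grid and a toy probability function that decays with distance from a single high-likelihood point.

```python
import math

def hit_score(prob, next_sector):
    """Fraction of sectors ranked strictly above the sector containing the next crime."""
    better = sum(1 for p in prob.values() if p > prob[next_sector])
    return better / len(prob)

# Hypothetical 4 x 4 sector grid; the toy probability decays with distance
# from a single point and is then normalized to sum to 1.
sectors = [(i, j) for i in range(4) for j in range(4)]
raw = {s: math.exp(-math.dist(s, (1, 1))) for s in sectors}
total = sum(raw.values())
prob = {s: p / total for s, p in raw.items()}

H = hit_score(prob, next_sector=(1, 2))   # only sector (1, 1) ranks strictly higher
```

Note that ties are not counted against the method: only sectors with strictly higher probability must be searched first.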
159
+ # 3.4 Model Accuracy for a Particular Criminal
160
+
161
+ Given an active serial criminal, we wish to determine the accuracy to which the criminal's spatial behavior is determined by a model. This is important information for law enforcement agencies basing investigative decisions on a geographical profiling method, since they want to know the extent to which they can trust a model as it is applied to a particular criminal.
162
+
163
+ We calculate model accuracy for a given serial criminal in the following way. Given a geographical profiling method with associated probability function $P$ , we wish to determine the accuracy score of the method, denoted $Z$ .
164
+
165
+ Let $H_{k}$ denote the hit score that considers $x_{1}, \ldots, x_{k-1}$ as the currently known crime locations, and treats $x_{k}$ as the next crime location. Then,
166
+
167
+ $$
168
+ Z = \frac{1}{n-3} \sum_{k=4}^{n} H_{k}
169
+ $$
170
+
171
+ is the mean of the hit scores determined by sequentially adding each crime location, starting with the fourth, to the set of currently known crimes. We start with the fourth crime because criminal behavior is not generally considered serial until the criminal has committed at least three crimes [1]. Note that a lower value for the accuracy score represents a more accurate model.
172
+
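A sketch of the accuracy score, using a hypothetical toy model (rank sectors by proximity to the centroid of the known crimes) as a stand-in for a full geographical profiling method; the sequential structure, starting from the fourth crime, follows the definition above.

```python
import math

def hit_score(known, next_pt, sectors):
    # Toy stand-in model: rank sectors by proximity to the centroid of known crimes.
    cx = sum(x for x, _ in known) / len(known)
    cy = sum(y for _, y in known) / len(known)
    score = {s: -math.dist(s, (cx, cy)) for s in sectors}   # higher = more likely
    return sum(1 for s in sectors if score[s] > score[next_pt]) / len(sectors)

def accuracy_score(crimes, sectors):
    # Z: mean hit score, treating each crime from the fourth onward as "next".
    hits = [hit_score(crimes[:k], crimes[k], sectors) for k in range(3, len(crimes))]
    return sum(hits) / len(hits)

sectors = [(i, j) for i in range(6) for j in range(6)]
crimes = [(2, 2), (3, 2), (2, 3), (3, 3), (2, 2)]   # hypothetical, sector-aligned
Z = accuracy_score(crimes, sectors)
```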
173
+ # 3.5 Search Area Determination
174
+
175
+ Our notions of performance and accuracy depend heavily on the size of the search area. The hit score can be made arbitrarily small by increasing the size of the search area. This is because additional locations on the periphery of the search area are unlikely to be the next crime location, but these additional locations are included in the total search area.
176
+
177
+ Surprisingly, literature that uses the hit score performance metric does not explicitly address how the search area is to be determined given previous crime locations. We take the search area to be a square centered at the mean location of the previous crimes, with side length equal to double the maximum pairwise distance between previous crimes. In an ongoing serial criminal investigation the search area could be provided by law enforcement, but for our purposes the above search area balances the size of the search area based on previous crime distances.
178
+
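This construction can be sketched as follows (our own minimal implementation, using Euclidean distance via `math.dist`, available in Python 3.8+):

```python
import math
from itertools import combinations

def search_square(crimes):
    """Square centered at the mean crime location, with side length
    twice the maximum pairwise distance between previous crimes."""
    cx = sum(x for x, _ in crimes) / len(crimes)
    cy = sum(y for _, y in crimes) / len(crimes)
    side = 2 * max(math.dist(a, b) for a, b in combinations(crimes, 2))
    return (cx, cy), side

center, side = search_square([(0, 0), (3, 4), (0, 4)])
print(center, side)  # max pairwise distance is 5, so the side is 10
```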
179
+ # 4 Individual Models
180
+
181
+ # 4.1 Spatial Models
182
+
183
+ In this section we consider geographical profiling methods that only take into account the spatial data of serial crimes.
184
+
185
+ We consider the following spatial models for the calculation of $P(S_{i,j})$ :
186
+
187
+ - Random Method: The random method assigns each $P(S_{i,j})$ a random value uniformly. Theoretically, we expect the hit score of the random method to be 0.5 [14]. We include the random method as a basis for comparison of other geographical profiling methods.
188
+ - Standard Deviation Methods: Since standard deviation methods give no information about the distribution of probabilities inside the standard deviation areas, we cannot meaningfully compare them to other methods. Regardless, we include standard deviation methods in this paper because they are the most basic geographical profiling methods. These methods provide a rudimentary way by which the potential search area can be narrowed. We consider the following standard deviation methods:
189
+
190
+ - Standard Deviation Rectangles: These are rectangular areas defined by the points
191
+
192
+ $$
193
+ \bar{x} + (-c\sigma_{lon}, -c\sigma_{lat})
194
+ $$
195
+
196
+ $$
197
+ \bar{x} + (-c\sigma_{lon}, c\sigma_{lat})
198
+ $$
199
+
200
+ $$
201
+ \bar{x} + (c\sigma_{lon}, c\sigma_{lat})
202
+ $$
203
+
204
+ $$
205
+ \bar{x} + (c\sigma_{lon}, -c\sigma_{lat})
206
+ $$
207
+
208
+ where $\overline{x}$ is the centroid of the crime locations, and $\sigma_{lon}$ and $\sigma_{lat}$ are the standard deviations of the longitudes and latitudes of the crime locations [15].
209
+
210
+ - Standard Deviation Ellipses: These are elliptical areas, oriented along the trend line of the data in the least-squares sense. A standard deviation ellipse is an ellipse with its center at the centroid of the crime locations, rotated clockwise by an angle $\theta$ , and with axis lengths given by $2c\sigma_{lon}$ and $2c\sigma_{lat}$ , where $\theta$ , $\sigma_{lon}$ , and $\sigma_{lat}$ are calculated as in [16].
211
+
212
+ In both of the above methods, $c$ is a constant that determines the range of the area. Common values are $c = 1$ for the $68^{\text{th}}$ percentile area and $c = 2$ for the $95^{\text{th}}$ percentile area. See Figure 7 for an illustration of these percentile areas.
213
+
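The rectangle's corners follow directly from the definition above. A minimal sketch (we assume population standard deviations; the cited references may use the sample version):

```python
import statistics

def sd_rectangle(crimes, c=1.0):
    """Corners of the standard deviation rectangle: the centroid offset
    by +/- c standard deviations in longitude and latitude."""
    lons = [lon for lon, _ in crimes]
    lats = [lat for _, lat in crimes]
    mx, my = statistics.mean(lons), statistics.mean(lats)
    sx, sy = statistics.pstdev(lons), statistics.pstdev(lats)
    return [(mx + dx * c * sx, my + dy * c * sy)
            for dx, dy in ((-1, -1), (-1, 1), (1, 1), (1, -1))]

print(sd_rectangle([(0, 0), (2, 0), (0, 2), (2, 2)]))
```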
214
+ - Centralization Methods: Centralization methods determine a central focal point for the spatial pattern of the serial criminal. In these models, the probability of the next crime decreases the further the location is from the focal point. Thus, in all these methods, given a central focal point $C \in A$ , we have
215
+
216
+ $$
217
+ P(S_{i,j}) = \frac{1}{d(S_{i,j}, C)}
218
+ $$
219
+
220
+ We consider the following ways of determining the central focal point:
221
+
222
+ - Centroid: The central focal point is the mean of the locations of the crimes, given by
223
+
224
+ $$
225
+ C = \frac{1}{n} \sum_{i = 1}^{n} x_i
226
+ $$
227
+
228
+ - Harmonic Mean: The central focal point is the harmonic mean of the locations of the crimes, given by
229
+
230
+ $$
231
+ C = \frac{n}{\sum_{i = 1}^{n} \frac{1}{x_i}}
232
+ $$
233
+
234
+ - Circle Center: The central focal point is the mean location of the two crime locations farthest away from each other. This is determined as follows. Let $x_{i}, x_{j}$ be such that $d(x_{i}, x_{j})$ is maximal. Then we have $C = \frac{x_{i} + x_{j}}{2}$ .
235
+ - Median: The central focal point is defined as the point whose coordinates are the median of the longitudes and the median of the latitudes of the crime locations. Compared to the other centralization methods, the median is less sensitive to distant crime locations, which the criminal might have chosen for no particular spatial reason [9].
236
+
237
+ - Probability Distance Method: The probability distance method takes into account a location's distance from each particular previous crime location. In this model, a probable next crime location would be relatively close to multiple previous crimes. Thus, we have
238
+
239
+ $$
240
+ P\left(S_{i,j}\right) = \sum_{k = 1}^{n} f\left(d\left(S_{i,j}, x_k\right)\right)
241
+ $$
242
+
243
+ where $f$ is a distance decay function defined in one of the following ways:
244
+
245
+ - Linear Distance Decay: The probability of the next crime location decreases linearly away from a particular previous crime location, given by $f(d) = \alpha - \beta d$ , where $\alpha, \beta$ are decay constants.
246
+ - Exponential Distance Decay: The probability of the next crime location decreases exponentially away from a particular previous crime location, given by $f(d) = e^{-\gamma d}$ , where $\gamma$ is a decay constant.
247
+
248
+ For an illustrative example of the difference between centralization methods and the probability distance method see Figure 2.
249
+
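The focal points and decay sums above can be sketched as follows (a minimal sketch in our own notation, treating locations as (longitude, latitude) pairs and taking the harmonic mean componentwise, as in the formula above):

```python
import math
from itertools import combinations
from statistics import median

def centroid(crimes):
    n = len(crimes)
    return (sum(x for x, _ in crimes) / n, sum(y for _, y in crimes) / n)

def harmonic_mean_point(crimes):
    # Componentwise harmonic mean; coordinates must be nonzero.
    n = len(crimes)
    return (n / sum(1 / x for x, _ in crimes),
            n / sum(1 / y for _, y in crimes))

def circle_center(crimes):
    # Midpoint of the two crimes farthest from each other.
    a, b = max(combinations(crimes, 2), key=lambda pair: math.dist(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def median_point(crimes):
    return (median(x for x, _ in crimes), median(y for _, y in crimes))

def prob_distance(sector, crimes, f):
    # P(S) under a distance decay function f, e.g. f = lambda d: math.exp(-d)
    return sum(f(math.dist(sector, x)) for x in crimes)
```

Each focal point $C$ then induces a prioritized search order via $P(S_{i,j}) = 1 / d(S_{i,j}, C)$.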
250
+ ![](images/716f7337f65285bf203e8b6fe79252fcceb63d266f77bf982a0bd8e3fbd5bef5.jpg)
251
+ (a) Centroid centralization method on the unit circle
252
+
253
+ ![](images/bbea6ff1266eaca037943ed6c3d8cd91725a576dcaa26ed5707e7d25a9bfc90e.jpg)
254
+ (b) Exponential decay method on the unit circle
255
+ Figure 2: Comparison of prioritized search areas of a hypothetical criminal committing crimes along the unit circle given by a centralization method and a decay method.
256
+
257
+ # 4.2 Spatiotemporal Models
258
+
259
+ In this section we consider geographical profiling methods that take into account both the spatial and temporal data of serial crimes. The additional use of temporal data is motivated by the idea that recent crime locations are more relevant to the spatial behavior of a serial criminal than older crime locations. For example, if the serial criminal is traveling while committing crimes, then old crime location information quickly becomes outdated. For an example of this, see Figure 3.
260
+
261
+ Not all of the spatial models in Section 4.1 have extensions that incorporate temporal data. For example, the circle center centralization method has no spatiotemporal analog, since the longest pairwise distance is unaffected by temporal data.
262
+
263
+ To incorporate the temporal components of the crime data, we calculate a temporal weighting factor for each crime,
264
+
265
+ $$
266
+ w_i = \frac{t_i - t_1}{t_n - t_1} + k
267
+ $$
268
+
269
+ where $w_{i}$ denotes the temporal weight of the $i^{\text{th}}$ crime. The offset $k$ is included so that the first crime will not be given a weight of 0. We chose $k = 0.1$ so that the last crime is weighted approximately ten times more heavily than the first crime. Let $W$ denote the sum of the temporal weights.
270
+
271
+ ![](images/38f845aa017474135743029ddf8b964fcd4c6f179150b6b16d49fd657a54ab77.jpg)
272
+ (a) Exponential Decay
273
+
274
+ ![](images/64194a3f547ac2652f0f6a680e46f69446b32921dcc6edb665945effe56a1c50.jpg)
275
+ (b) Temporal Exponential Decay
276
+ Figure 3: Prioritized search area for a hypothetical serial criminal travelling east over time, as given by the exponential decay and temporal exponential decay models.
277
+
278
+ We modify the following spatial models to consider the temporal weighting factors:
279
+
280
+ # - Centralization Methods:
281
+
282
+ - Temporal Centroid: The mean of the locations of the crimes, weighted by time, given by
283
+
284
+ $$
285
+ C = \frac{1}{W} \sum_{i = 1}^{n} x_i w_i
286
+ $$
287
+
288
+ - Temporal Harmonic Mean: The harmonic mean of the locations of the crimes, weighted by time, given by
289
+
290
+ $$
291
+ C = \frac{W}{\sum_{i = 1}^{n} \frac{w_i}{x_i}}
292
+ $$
293
+
294
+ - Temporal Probability Distance Methods: The linear distance decay and exponential distance decay methods incorporate the temporal weight data by using the following modified probability function:
295
+
296
+ $$
297
+ P\left(S_{i,j}\right) = \sum_{k = 1}^{n} w_k f\left(d\left(S_{i,j}, x_k\right)\right)
298
+ $$
299
+
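The temporal weighting and its use in the weighted centroid can be sketched as follows (our own minimal version; times are normalized so that the first crime receives weight $k$ and the last weight $1 + k$):

```python
def temporal_weights(times, k=0.1):
    """Linear-in-time weights: with k = 0.1 the last crime counts roughly
    ten times as much as the first."""
    t1, tn = times[0], times[-1]
    return [(t - t1) / (tn - t1) + k for t in times]

def temporal_centroid(crimes, weights):
    W = sum(weights)
    return (sum(w * x for (x, _), w in zip(crimes, weights)) / W,
            sum(w * y for (_, y), w in zip(crimes, weights)) / W)

print(temporal_weights([0, 5, 10]))  # -> [0.1, 0.6, 1.1]
```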
300
+ ![](images/c0359b77cc313c4f88a09736965bf4698c41891358ebbcca82c58e106025ee47.jpg)
301
+ Figure 4: Comparison of the relative performance of each of our models on all of our data sets.
302
+
303
+ # 4.3 Individual Model Results
304
+
305
+ To investigate the effectiveness of each of the aforementioned models, we calculated the mean hit score across all of our data sets. After three crimes have been observed we apply our model and record the hit score for the next crime. This process is repeated for all subsequent crimes in the series.
306
+
307
+ The values of the constants in the linear and exponential decay models do not affect the hit score. This is because the hit score depends not on the magnitude of a sector's probability but on its probability relative to the rest of the search area. Consequently, for an individual model the values of the constants have no effect.
308
+
309
+ First we investigate the overall performance of each of the models, which can be found in Table 2 and in Figure 4. All of our models significantly outperform the random model, but no individual model is statistically superior to any other.
310
+
311
+ Next we investigate the performance of our models based upon the number of crimes observed, as seen in Figure 5. No patterns emerge that would indicate that model performance improves as more crimes are observed. However, the data sets considered do not contain enough data to rule out such a correlation, particularly for longer crime series, for which there are only a few data points.
312
+
313
+ ![](images/40ce264d2985fba737269c20042f1b6264bb8dcbc26e76f33a499a40a83d8591.jpg)
314
+ Figure 5: Hit score by number of crimes observed using an exponential decay model and $50\%$ confidence intervals.
315
+
316
+ # 5 Aggregation Models
317
+
318
+ In this section we consider ways of combining predictions from multiple geographical profiling methods into one model. The motivation behind these aggregate approaches is that each of the individual models described in Sections 4.1 and 4.2 has strengths and weaknesses; by aggregating several models we hope to enhance the strengths and reduce the weaknesses.
319
+
320
+ This issue of aggregating models relates to the classic problem of aggregating expert predictions, and consequently several techniques exist for combining them. These techniques can be categorized as either axiomatic or Bayesian. Clemen and Winkler reported no significant difference in performance between more complex Bayesian models and simpler axiomatic approaches [17], and as such we have chosen to focus on the more elementary axiomatic approaches.
321
+
322
+ <table><tr><td>Model</td><td>Hit Score</td><td>95% Confidence</td></tr><tr><td>Random</td><td>0.500</td><td>0.001</td></tr><tr><td>Centroid Centralization</td><td>0.104</td><td>0.031</td></tr><tr><td>Temporal Centroid Centralization</td><td>0.106</td><td>0.031</td></tr><tr><td>Harmonic Mean Centralization</td><td>0.105</td><td>0.031</td></tr><tr><td>Temporal Harmonic Mean Centralization</td><td>0.106</td><td>0.031</td></tr><tr><td>Circle Center Centralization</td><td>0.116</td><td>0.029</td></tr><tr><td>Median Centralization</td><td>0.123</td><td>0.042</td></tr><tr><td>Exponential Decay</td><td>0.094</td><td>0.030</td></tr><tr><td>Temporal Exponential Decay</td><td>0.092</td><td>0.029</td></tr><tr><td>Linear Decay</td><td>0.125</td><td>0.043</td></tr><tr><td>Temporal Linear Decay</td><td>0.123</td><td>0.043</td></tr></table>
323
+
324
+ Table 2: Models with associated hit scores and $95\%$ confidence intervals.
325
+
327
+
328
+ Axiomatic Approaches: Given $n$ models that we wish to aggregate, let $P_{k}(S_{i,j})$ denote the probability that a particular sector contains the next crime as predicted by the $k^{\text{th}}$ model. Let $W_{k}$ denote the aggregation weight of the $k^{\text{th}}$ model, chosen so that the $W_{k}$ sum to one.
329
+
330
+ - Linear Opinion Pool: This model is a linear combination of two or more probability distribution functions produced by individual models.
331
+
332
+ $$
333
+ P\left(S_{i,j}\right) = \sum_{k = 1}^{n} W_k P_k\left(S_{i,j}\right)
334
+ $$
335
+
336
+ Examples of this model and the effect of different weights can be found in Figure 6.
337
+
338
+ - Logarithmic Opinion Pool: This model is a weighted product of two or more probability distribution functions.
339
+
340
+ $$
341
+ P\left(S_{i,j}\right) = \prod_{k = 1}^{n} P_k\left(S_{i,j}\right)^{W_k}
342
+ $$
343
+
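Both pools can be sketched for gridded sector probabilities as follows (a minimal sketch with our own names; the logarithmic pool is renormalized so its sector probabilities sum to one):

```python
import math

def linear_pool(dists, weights):
    """Weighted sum of per-sector probabilities from several models."""
    return [sum(w * d[i] for d, w in zip(dists, weights))
            for i in range(len(dists[0]))]

def log_pool(dists, weights):
    """Weighted product of per-sector probabilities, renormalized."""
    raw = [math.prod(d[i] ** w for d, w in zip(dists, weights))
           for i in range(len(dists[0]))]
    total = sum(raw)
    return [r / total for r in raw]

a = [0.7, 0.2, 0.1]  # model 1 sector probabilities
b = [0.2, 0.2, 0.6]  # model 2 sector probabilities
print(linear_pool([a, b], [0.5, 0.5]))
```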
344
+ ![](images/c4f809b5301ad22f78d71b255fd47b1e8e479b7eba302ee999e9138189a45397.jpg)
345
+ (a) Exponential decay weight 0.00, Circle center weight 1.00
346
+
347
+ ![](images/d2de13b46bf490d51ec96e5434d3a0aac0cbb11d82043cc208787b9ba9c487c1.jpg)
348
+ (b) Exponential decay weight 0.25, Circle center weight 0.75
349
+
350
+ ![](images/ca0a192569ae272e632538a90db2c34d7a7fede0e413af1befa440c9ebdbb8a6.jpg)
351
+ (c) Exponential decay weight 0.50, Circle center weight 0.50
352
+
353
+ ![](images/a596c322e1fe92320db4ef020eb273ed35fd355176f2d353da93479d7c14cb2c.jpg)
354
+ (d) Exponential decay weight 0.75, Circle center weight 0.25
355
+
356
+ ![](images/35030fd22bfee177052e2e14eefcdfd1ae3fea6585f24fd4cc17a9599d298547.jpg)
357
+ (e) Exponential decay weight 1.00, Circle center weight 0.00
358
+ Figure 6: Prioritized search areas resulting from using different weights in the aggregation model of a circle center and exponential decay for the Beltway Sniper.
359
+
360
+ In the exponential decay and linear decay spatial models described in Section 4.1, we noted that the values of the constants chosen for the model have no impact on the hit score metric. In an aggregate model this is no longer the case, since the relative magnitude of the probabilities matters when they are added to or multiplied with those of another model.
361
+
362
+ To set these parameters we used the mean pairwise distance, defined as the mean distance between any two crimes:
363
+
364
+ $$
365
+ \bar{\delta} = \frac{2}{n(n - 1)} \sum_{1 \leq i < j \leq n} d\left(x_i, x_j\right)
366
+ $$
367
+
368
+ In the case of the linear decay model, $\alpha$ was chosen to be 1 and $\beta$ was chosen such that $\alpha -\beta \overline{\delta} = 0$ . In the case of the exponential decay, we set $\gamma = \sqrt{\frac{2}{\overline{\delta}}}$ , which gives a mean distance to crime of $\frac{\overline{\delta}}{2}$ . We believed these values would provide a reasonable magnitude of probability when compared with other models.
369
+
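These parameter choices follow directly from the mean pairwise distance (a minimal sketch; Euclidean distance, names ours):

```python
import math
from itertools import combinations

def mean_pairwise_distance(crimes):
    pairs = list(combinations(crimes, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def decay_constants(crimes):
    """alpha, beta for linear decay (so alpha - beta * delta-bar = 0) and
    gamma for exponential decay, as chosen in the text."""
    delta = mean_pairwise_distance(crimes)
    alpha = 1.0
    beta = 1.0 / delta
    gamma = math.sqrt(2.0 / delta)
    return alpha, beta, gamma
```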
370
+ # 5.1 Aggregation Model Results
371
+
372
+ To investigate the benefits of aggregation models, we used a $2^{k}$ factorial experimental design [18]. For each of the models described in Section 4, we investigated two possible scenarios: either the model was included in the aggregate, weighted equally with the others, or it was not included. We then computed the mean hit score of the resulting aggregate model for every possible combination on all of our data sets.
373
+
374
+ While a further optimization process could be conducted in which additional weights or different decay constants were investigated, this procedure allowed us to see the effect of each model on the aggregate, as well as which individual models produce a strong aggregate model.
375
+
376
+ Through this process we found that the strongest combination of individual models used the temporal exponential decay and the circle center centralization methods, equally weighted. The mean hit score for this linear opinion pool was $0.081 \pm 0.027$ with a $95\%$ confidence interval. The logarithmic opinion pool did not perform better than any individual model. Neither aggregate model is statistically better than any of our other models.
377
+
378
+ # 6 Peter Sutcliffe: A Case Study
379
+
380
+ Peter Sutcliffe was a serial murderer who targeted women in England during the late 1970s [19]. A map of his murders can be found in Figure 8(a). We have applied the models we developed to predict where the next murder in the series would have occurred had Sutcliffe not been arrested and imprisoned.
381
+
382
+ ![](images/bfc8fb4ce8d8c2f656fab1d5f206c4b5e563b804c5f8999c5932c67324667cd8.jpg)
383
+ Figure 7: Standard deviation rectangles and ellipses for the murders committed by Peter Sutcliffe.
384
+
385
+ ![](images/6900045d589939f9863384cdd8781c631cdf2e1402b9473b60b19875cb3abc03.jpg)
386
+
387
+ A common but naïve method of identifying an area of high probability of serial crime generates a rectangle or ellipse containing the previous crimes that lie within one standard deviation of the centroid. These rudimentary models can be seen in Figure 7.
388
+
389
+ In Section 4.3, we found the individual model with the lowest mean hit score across our data set was the temporal exponential decay model. See Figure 8 for an application of this model to the data for murders committed by Peter Sutcliffe. The known crimes had a hit score of $10.6\%$ under this model, so we use this percentage to define a prioritized search area in which we can be reasonably confident Sutcliffe would attack next. Of the $27818~\mathrm{km}^2$ total search area, we can then isolate $2941~\mathrm{km}^2$ in which to concentrate the search effort.
390
+
391
+ # 7 Conclusions
392
+
393
+ - Every geographical profiling method outperformed the random method: On our data set, every geographical profiling method provided a significant improvement in hit score over the random method.
394
+ - All non-random geographical profiling methods considered exhibited roughly the same performance: Differences in hit score among the spatial, spatiotemporal, and aggregation methods were statistically insignificant, indicating that all of the geographical profiling methods perform at the same level. A similar conclusion was reached by Snook et al., who show that complex geographical profiling methods are no more accurate than simple ones [4].
395
+
396
+ ![](images/e79f852f13c44eb8cf5b32f5dc6422530ad4ad360323c3cd68605bda5fabb0b5.jpg)
397
+ (a) Locations of Peter Sutcliffe murders. Map generated via Google Static Maps API [20].
398
+
399
+ ![](images/50f3c480225e895462b8344674b1fba7fc7e5880dad260cdd7dc461d0a353c8b.jpg)
400
+ (b) Temporal exponential decay model applied to Peter Sutcliffe murders.
401
+ Figure 8: Comparison of the prioritized search area produced by the temporal exponential decay model to the map of the geographical area of murders committed by Peter Sutcliffe. This prioritized search area suggests that the Bradford area is at the highest risk of attack.
402
+
404
+
405
+ - Particular geographical profiling methods are not applicable to all serial criminals: We can see in Table 1 that the accuracy of our best individual geographical profiling method varied widely depending on the particular serial criminal being studied. For example, our best individual method, the temporal exponential decay method, is an accurate predictor of the D.C. sniper attacks (with an accuracy of 0.056), but is grossly inaccurate when applied to the serial rapist in Columbus, OH (with an accuracy of 0.533, worse than the random method).
406
+
407
+ # 8 Future Work
408
+
409
+ - Standardization of data set: It is currently difficult to compare geographical profiling method performance to previous research because each study uses a different set of serial crime data to determine performance results. An effort to compile a large, comprehensive list of serial crime data to which all methods would be compared would greatly help the development of geographical profiling techniques.
410
+ - Use of other relevant geographical information: In this paper we only consider the spatiotemporal relationships between crimes. Work by O'Leary and by Brown et al. develops the notion of a feature space that groups crime locations by their relevant geographical features, such as population density or proximity to a major highway [12, 11]. The probability that a future crime happens at a particular location is then determined by the proximity of the location to previous crimes in the feature space. Having more information about what connects crime locations could potentially improve our geographical profiling methods.
411
+ - Use of relevant information about the criminal: Research suggests that characteristics of the criminal, such as gender, race, and age, play a role in determining their probable spatial behavior [9, 5, 10]. By taking these factors into account, we may be able to improve the accuracy of our models for a specific criminal.
412
+
413
+
414
+
415
+ - Evaluation of cost to law enforcement: Law enforcement agencies typically purchase computer-generated geographical profiling information [4, 7]. Research by Snook et al. suggests that a person with minimal training in geographical profiling techniques can determine the probable residence of a serial criminal with just as much accuracy as a complex computer-generated geographical profiling method [8]. Further research should be done to determine if the cost of proprietary geographical profiling software is worth the quality of the information provided to law enforcement agencies.
416
+
417
+ # References
418
+
419
+ [1] Ronald M. Holmes and Stephen T. Holmes. Serial Murder. SAGE Publications, Thousand Oaks, California, 1998.
420
+ [2] Donald Brown and Justin Stile. Geographic profiling with event prediction. Systems, Man and Cybernetics, 2003. IEEE International Conference on, 4:3712-3719, 2003.
421
+ [3] Scotia J. Hicks and Bruce D. Sales. Criminal Profiling: Developing an Effective Science and Practice. American Psychological Association, Washington, DC, 2006.
422
+ [4] Brent Snook, Michele Zito, Craig Bennell, and Paul J. Taylor. On the complexity and accuracy of geographic profiling strategies. Journal of Quantitative Criminology, 21(1):1-26, 2005.
423
+ [5] Brent Snook, Richard M. Cullen, Andreas Mokros, and Stephan Harbort. Serial murderers' spatial decisions: Factors that influence crime location choice. Journal of Investigative Psychology and Offender Profiling, 2(3):147-164, 2005.
424
+ [6] D. Kim Rossmo. Place, space, and police investigations: Hunting serial violent criminals. In Crime and Place: Crime Prevention Studies. Willow Tree Press, NY, 1995.
425
+ [7] D. Kim Rossmo. Geographic profiling: Target patterns of serial murderers. Unpublished doctoral dissertation, 1995.
426
+ [8] Brent Snook, Paul J. Taylor, and Craig Bennell. Shortcuts to geographic profiling success: A reply to Rossmo (2005). Applied Cognitive Psychology, 19(5):655-661, 2005.
427
+ [9] Jasper J. van der Kemp and Peter J. van Koppen. *Fine-Tuning Geographical Profiling*, chapter 17. Criminal Profiling. Humana Press, 2007.
428
+ [10] Eric Beauregard, Jean Proulx, and D. Kim Rossmo. Spatial patterns of sex offenders: Theoretical, empirical, and practical issues. Aggression and Violent Behavior, 10(5):579-603, 2005.
429
+ [11] Donald Brown and Hua Liu. Spatial-temporal event prediction: A new model. Systems, Man, and Cybernetics, 1998. IEEE International Conference on, 3:2933-2937, 1998.
430
+
431
+ [12] Mike O'Leary. The mathematics of geographic profiling. Journal of Investigative Psychology and Offender Profiling, 6(3):253-265, 2009.
432
+ [13] Spotcrime.com. http://www.spotcrime.com. Accessed on February 19, 2010.
433
+ [14] D. Kim Rossmo. Geographic heuristics or shortcuts to failure?: Response to Snook et al. Applied Cognitive Psychology, 19(5):651-654, 2005.
434
+ [15] Steven Gottlieb, Sheldon Arenberg, and Raj Singh. Crime Analysis: From First Report to Final Arrest. Alpha, 1994.
435
+ [16] Joshua Kent and Michael Leitner. Efficacy of standard deviational ellipses in the application of criminal geographic profiling. 4(3):147-165, 2007.
436
+ [17] Robert T. Clemen and Robert L. Winkler. Aggregating probability distributions. In Advances in Decision Analysis. Cambridge University Press, Cambridge, UK, 2007.
437
+ [18] Averill M. Law. Simulation Modeling and Analysis. McGraw-Hill, $3^{\mathrm{rd}}$ edition, 2000.
438
+ [19] Nicole Ward Jouve. 'The Streetcleaner' The Yorkshire Ripper Case on Trial. Marion Boyars Publishers, London, 1986.
439
+ [20] Google Static Maps API. http://code.google.com/apis/maps/documentation/staticmaps/. Accessed on February 22, 2010.
MCM/2010/B/8362/8362.md ADDED
@@ -0,0 +1,558 @@
1
+ # FOLLOWING THE TRAIL OF DATA
2
+
3
+ Control # 8362
4
+
5
+ February 22, 2010
6
+
7
+ # Summary Statement
8
+
9
+ As contracted by the police, our task was to build a model to help apprehend serial criminals and prevent future offenses by predicting the criminals' locations and the locations of such offenses. We chose to generate probability plots to help the police assess which areas are of higher priority for searching and monitoring. We assume we are given the crime site locations and times of the past offenses of the criminal under consideration.
10
+
11
+ Our model determines the probability that a criminal will be found at any given location in the search area by evaluating a distance decay function for each crime scene location and summing the probabilities, generating a distribution that can be used to prioritize a search for the criminal. The distance decay function can take various forms, several of which we present in the paper. We use a linear regression to predict the time of the next crime, and compute the probability that a crime will occur in a given spot by calculating how much that crime site would change the probability distribution for the criminal's location; this finds the location that best fits the previous crime sites. We also present a model that weights the locations by the times of the crimes, giving the model the ability to predict changes in the criminal's patterns.
12
+
13
+ To test our model with realistic data, we compiled several case studies. We assess the quality of our predictions using two metrics: error distance and search cost. In the case of Peter Sutcliffe, our model was able to reduce the search area to $15\%$ of its original size, and it predicted the position of his next victim with very high accuracy. We present color-gradient plots of the resulting probability distributions in the paper.
14
+
15
+ # Executive Summary
16
+
17
+ Dear Chief of Police,
18
+
19
+ We have developed a model to aid you in investigations of serial criminals. I am writing to inform you of its functionality and of the cases most appropriate for its use.
20
+
21
+ Our model requires as input the times and locations, in latitude and longitude, of the criminal acts you suspect were committed by a specific criminal. It then uses these points, along with a function modeling the drop-off in probability as the distance from a crime site increases, to generate a probability distribution of the criminal's location. Further, it creates a probability distribution of possible future crime scenes by determining which point would be least surprising in future probability distribution calculations. Overall, it tends to estimate the locations relatively accurately, especially when it comes to future crimes.
22
+
23
+ Unfortunately, our model is not a godsend. It assumes that all input on a given run is for a single criminal, even though its inherent clustering could provide relevant data in the case of multiple criminals. Similarly, we assume that the killer has just one base location from which he or she operates. Our model does not work well for commuter criminals, that is, those who drive long distances in order to commit crimes.
24
+
25
+ Additionally, it relies on the hypothesis that the probability of the criminal's base being at a given location decreases as you move away from the crime scenes. Furthermore, the model ignores any variations in terrain and in population density, instead assuming that they are both homogeneous.
26
+
27
+ Despite these obvious drawbacks, our model does have many strengths. In addition to the aforementioned ability to correctly handle clusters of points, it also gives a well-specified and intuitive suggested search order, helping police minimize time spent searching for the killer. By abstracting its calculation mechanism, our model is flexible enough to allow your department to select the most appropriate of many possible "submodels" for your specific case. Thus, your department need not group bank robbers and rapists in the same category: if the two criminal groups tend to have different criminal patterns, our model allows your department to use the techniques you find most appropriate.
28
+
29
+
30
+
31
+ Furthermore, our model is relatively efficient. Despite a mediocre "classical" running time, the highly parallelizable nature of our algorithm allows many computers to work on the problem at once. With only a small number of machines, our model can complete extremely detailed (one million point) calculations in 4 hours, and ten-thousand-point calculations (still sufficient detail for most cases) in a couple of seconds. Thus, your department need not worry about waiting days or weeks for results, saving invaluable time.
32
+
33
+ Our model's output consists of two maps: one with the probability distribution of the criminal's location and one with the prediction of the location of his/her next crime. The probabilities are displayed in bands of color: a red band represents an area of high priority and a blue band represents one of low priority. In the case of attempting to locate the criminal, we suggest searching the red areas first. In the case of trying to prevent a future crime, we suggest spreading as many officers as possible in the red and yellow bands, as these regions are likely to contain the criminal's next target.
34
+
35
+ We wish you luck in your investigations.
36
+
37
+ # Contents
38
+
39
+ # 1 Background
40
+
41
+ 1.1 Biography of the Typical Serial Killer
42
+ 1.2 Locating a Killer
43
+
44
+ # 2 The Problem
45
+
46
+ # 3 Assumptions
47
+
48
+ # 4 Metrics and Functions
49
+
50
+ 4.1 Simple Spatial Metrics
51
+ 4.2 Simple Cases and Their Spatial Metrics
52
+ 4.3 Distance Decay Functions
53
+
54
+ # 5 Our Approach
55
+
56
+ 5.1 Predicting Criminal Location
57
+
58
+ 5.1.1 Computation
59
+ 5.1.2 Pseudocode
60
+ 5.1.3 Why it Works
61
+
62
+ 5.2 Predicting Location of the Next Crime
63
+
64
+ 5.2.1 Method
65
+ 5.2.2 Edge Behavior
66
+ 5.2.3 Complexity
67
+
68
+ 5.3 Predicting Time of Future Crimes
69
+
70
+ 5.3.1 Method
71
+ 5.3.2 Motivation
72
+
73
+ 5.4 Time Coupling
74
+
75
+ 5.4.1 Why It Helps
76
+
77
+ # 6 Result Discussion
78
+
79
+ 6.1 Metrics
80
+ 6.2 Procedure
81
+
82
+ # 7 Conclusion
83
+
84
+ # 8 Future Research
85
+
86
+ 8.1 Incorporating Statistics and Landscapes
87
+
88
+ # Appendix A
89
+
90
+ # 1 Background
91
+
92
+ We attempt, in this paper, to present models and future research avenues to further the effectiveness of geographic profiling in locating serial criminals and their future crimes. Motivation for modeling includes the fact that humans are affected by "prior expectations, overconfidence, information retrieval, and information processing" (Snook et al., 2004) whereas computer simulations remain unbiased and make predictions based on discrete data. While the category of serial criminals includes rapists, killers, robbers and burglars, we specifically focus our research on the effort of capturing and inhibiting the crimes of serial killers. We will also discuss the applications of our research, in our final discussion, to other types of serial criminals.
93
+
94
+ # 1.1 Biography of the Typical Serial Killer
95
+
96
+ When considering serial killers, we will conform to the norm set in previous literature (Arndt et al., 2004):
97
+
98
+ - Must have killed a minimum of three people.
+ - Must have a cooling-off period between killings, distinguishing them from mass and spree killers.
100
+
101
+ We do not consider mass murderers because they are generally classified as killing many people in a short period of time, making future crimes unlikely and their capture relatively easy. Spree killers kill a few people within a short period of time; although our model may very well apply to them, they may not kill again or may reveal themselves more readily. Serial killers are distinct from these other two categories in that they typically have a defined psychological purpose for killing and spread out their killings, providing the police with sufficient time to assemble a search.
102
+
103
+ Serial killers are usually traumatized during childhood, with the repercussions of abuse culminating in homicide beginning, for the average serial killer, at age 27.5 (Hickey, 2002). Many have precursors to killing, including robberies or sexual abuse. This is based on Hickey's Trauma Control Model of the Serial Killer and seems to be supported statistically; most noticeably, $84\%$ admitted to assaults on adults during adolescence (Arndt et al., 2004). Once a serial killer begins his homicidal campaign, each killing leads to greater habituation and tolerance to the hormones and relief provided, which do not fully alleviate the cravings, leading to a decreasing period between killings and a cyclical process (Arndt et al., 2004). The career length of a specific serial killer varies, ranging from around 1 to 5 years, although many may enter prison on other charges during such a career. The number of killings also varies drastically, as does IQ.
104
+
105
+ # 1.2 Locating a Killer
106
+
107
+ To account for all factors affecting specific killing sites and travel distances would be impossible, as the human decision process is dynamic and virtually boundless.
108
+
109
+ As reported by Laukkanen and Santtila, however, "87% of serial rapists in the UK committed their crimes confirming the so-called circle hypothesis." This technique predicts that the criminal will live within a circle whose diameter is the segment connecting the two crime locations furthest from each other. Because difficult-to-solve rapes and homicides tend to occur within similar distances from the perpetrator's residence (Santtila et al., 2007), we assume that, similarly, most serial murders occur within the circle.
110
+
111
+ Most serial killers and other types of criminals are also shown not to travel far from their base location, with the number of crimes committed dropping as a function of distance (Santtila et al., 2007), a theory we will refer to by its common name, distance decay. Surrounding the crime scenes, there also usually exists a buffer zone in which the criminal, seeking to avoid detection, is not likely to live.
112
+
113
+ # 2 The Problem
114
+
115
+ As contracted by the police, we are given crime scene locations and times of a specific criminal. Based on this data, we must predict the criminal's base location and their next crime's location and time.
116
+
117
+ Although humans may appear random, we must fit a geographical profile to the crime scene data for prediction purposes in a way such that our model can be used for any type of serial criminal with any personality type. This geographical profile generates a prioritized search for the police. In our plots, note that red specifies high search priority and blue low. We use bilinear interpolation to smooth the grid for ease of interpretation for the police.
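The bilinear smoothing step mentioned above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; `grid`, `x`, and `y` are hypothetical names, with `x` and `y` given in fractional grid-index units:

```python
def bilerp(grid, x, y):
    """Bilinearly interpolate a 2-D list-of-lists `grid` at fractional
    grid coordinates (x, y), where grid[row][col] is indexed as (y, x)."""
    x0, y0 = int(x), int(y)          # lower-left corner of the enclosing cell
    dx, dy = x - x0, y - y0          # fractional offsets within the cell
    return ((1 - dx) * (1 - dy) * grid[y0][x0]
            + dx * (1 - dy) * grid[y0][x0 + 1]
            + (1 - dx) * dy * grid[y0 + 1][x0]
            + dx * dy * grid[y0 + 1][x0 + 1])
```

Interpolating between the four surrounding grid points in this way turns the blocky probability grid into a smooth color map for the police.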
118
+
119
+ # 3 Assumptions
120
+
121
+ Given such location and time data from the police, we assume that all points used in a single run of our model are crime sites that the police suspect belong to a single criminal. We also assume, to simplify locating the criminal, that the specified killer has precisely one home base, which we are trying to find.
122
+
123
+ To simplify the problem, we assume we are dealing with serial killers. Although this does not seriously affect our results in any way (Laukkanen & Santtila, 2006; Santtila et al., 2007), we use case studies and research to support location statistics for serial killers and later extend our model results to other types of serial criminals. Also, despite the fact that different criminal personalities will commit crimes at different distances, we assume a form of the circle theory. Research states that $87\%$ of serial rapists, a result which can be extended to other types of serial criminals, reside within the circle proposed by the circle hypothesis (see Background - Locating a Killer) (Santtila et al., 2007); we assume our grid, although not necessarily encompassing the full circle around the furthest two points, will act similarly in encompassing our killer's location, for ease of modeling. We also assume distance decay, as supported by research for typical stable criminals. Thus, we currently do not consider commuters who may travel far distances to commit crimes, though in the future we would like to include an option for different personality traits to compensate for distance preferences and account for such commuter criminals. We do not inherently assume a buffer zone, as will be discussed later, but instead test different models and assess the accuracy of such an assumption.
126
+
127
+ Some factors which may affect a criminal's choice of crime site include population density, opportunity, and landscape. In our model, we neglect these differences and assume a homogeneous geographical area for simplicity. If actually implemented, we would like to include these descriptors in our model, similar to CrimeStat (Levine, 2006). Thus, unfortunately, our model may predict the criminal's base location to be in an uninhabitable area and may predict future crimes to be committed where there is, in actuality, little opportunity for such crimes.
128
+
129
+ For more discussion on our future extensions to dispose of such simplifying assumptions, see Future Research.
130
+
131
+ # 4 Metrics and Functions
132
+
133
+ # 4.1 Simple Spatial Metrics
134
+
135
+ Assuming we are given information including the locations of $N$ crime scenes, it is useful to look at spatial metrics to help define global traits of the locations. We define the following metrics:
136
+
137
+ # Mean Center ("Center of Mass")
138
+
139
+ This is the point output when taking the average values of latitudes (y) and longitudes (x):
140
+
141
+ $$
+ c _ {x} = \frac {\sum_ {i = 1} ^ {N} w _ {i} x _ {i}}{\sum_ {i = 1} ^ {N} w _ {i}}, \qquad c _ {y} = \frac {\sum_ {i = 1} ^ {N} w _ {i} y _ {i}}{\sum_ {i = 1} ^ {N} w _ {i}}.
+ $$
144
+
145
+ It is possible to include a weight, $w_{i}$ , for each point to skew the center more toward it, which may be useful when specific points are more important than others.
146
+
147
+ # Center of Minimum Distance
148
+
149
+ This is the point at which the summed distance to all of the crime scenes is minimal; that is, the point $(x,y)$ minimizing
150
+
151
+ $$
152
+ \sum_ {i = 1} ^ {N} \sqrt {\left(x _ {i} - x\right) ^ {2} + \left(y _ {i} - y\right) ^ {2}}
153
+ $$
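Both metrics can be computed directly. The following is a hypothetical sketch rather than the paper's code; the center of minimum distance is approximated here by brute-force search over a supplied list of candidate grid points rather than by an analytic solution:

```python
import math

def mean_center(points, weights=None):
    # Weighted average of the coordinates; weights default to 1
    w = weights or [1.0] * len(points)
    total = sum(w)
    cx = sum(wi * x for wi, (x, y) in zip(w, points)) / total
    cy = sum(wi * y for wi, (x, y) in zip(w, points)) / total
    return cx, cy

def summed_distance(p, points):
    # The objective minimized by the center of minimum distance
    return sum(math.hypot(x - p[0], y - p[1]) for x, y in points)

def center_of_minimum_distance(points, candidates):
    # Brute force: pick the candidate with the smallest summed distance
    return min(candidates, key=lambda p: summed_distance(p, points))
```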
154
+
155
+ # Standard Deviational Circle (SDC)
156
+
157
+ Here, a circle is drawn at the mean center, assuming $w_{i} = 1$ , such that the radius is one "standard distance" (related to standard deviation) to cover $68\%$ of the data points (assuming randomness) and $95\%$ for a radius of two standard distances. The larger the radius, the more spread out the data is considered to be.
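The standard distance that sets the SDC radius can be computed as the root-mean-square distance of the points from their unweighted mean center. This is a sketch under that standard definition, not necessarily the exact formula used elsewhere in the paper:

```python
import math

def standard_distance(points):
    n = len(points)
    cx = sum(x for x, y in points) / n
    cy = sum(y for x, y in points) / n
    # Root-mean-square distance of the points from their mean center
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)
```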
158
+
159
+ # Standard Deviational Ellipse (SDE)
160
+
161
+ While the standard deviational circle ignores skew of the data, the ellipse allows stretching in two dimensions along any two perpendicular lines on our plane.
162
+
163
+ # 4.2 Simple Cases and Their Spatial Metrics
164
+
165
+ In order to gather intuition about what we would expect from a model given crime sites, we illustrate a few simple examples of possible killing distributions with expectations of where we would like to see predictions of home bases and future killings. We analyze some simple spatial statistics for each basic case: mean center ("center of mass") and standard deviational circle.
166
+
167
+ # Uniform Spread
168
+
169
+ First, we look at a uniform distribution of killings within some shape - a square, specifically, for its ease of modeling on a grid. While we might expect the killer to live anywhere in the boundary of this square, it would not be surprising if the killer lived directly in the center for the comparative ease of travel to all points on the square. For future killings, it may be difficult to discern a specific location as the spread is homogeneous.
170
+
171
+ ![](images/5aba6afc6df9392abfd2435c7ed908d50fbb485fecc63a51a663c586a95763e8.jpg)
172
+
173
+ The center of mass is directly in the center, as expected. The SDCs are efficient in grouping the data, meaning that our data is not very spread out.
174
+
175
+ # Outline Spread
176
+
177
+ Here we consider killings along the edges of a square. It is not clear where the killer may live, as this killing trail may be part of his daily route or he may live closer to the center for ease of travel, but we would expect future killings to happen also along the boundary.
178
+
179
+ ![](images/a78370cccb6c4dacb83775768d5087ba006100c056dc9d149e6d6a2193b51b70.jpg)
180
+
181
+ Our mean center is again in the center, but our SDCs are very large, indicating that the data is very spread out from the mean center.
182
+
183
+ # Linear Spread
184
+
185
+ Here, we look at a linear killing distribution. While the future killings may depend on the times of each crime committed, specifically in this example, if we assume the times are irrelevant then we may expect the killer to live near at least a few of the points.
186
+
187
+ Again, our mean center is in the middle and our SDCs are very large, as the points are spread far from
188
+
189
+ ![](images/699453f4a7b3fd53f3860725e71798bd3a22efe8b00350460f32ddb38c1bc16f.jpg)
190
+
191
+ the center. The SDE may be more effective in analyzing this case as there is no spread in latitude.
192
+
193
+ # Clustering
194
+
195
+ This clustering example may be most prevalent when considering a killer with two main base locations (for example, one who lives in one city and works in another). It again may not be clear where the killer lives, but we should predict killings to occur within the two clusters.
196
+
197
+ ![](images/e7e3856e6be9e778de54d2f1c6b3f170d74ff8d70e20a7e3a6c689929d95dd77.jpg)
198
+
199
+ Again, we have a large SDC. These cases indicate that, when looking for the next kill location, it may be beneficial to look at SDC radii and SDEs.
200
+
201
+ # 4.3 Distance Decay Functions
202
+
203
+ When calculating probability distributions, we test different distance decay functions. Recall that the theory of distance decay assumes that the likelihood of our killer living at any given location drops off as distance from the crime scenes increases (see Figure 1). We consider Linear, Normal, Negative Exponential, Truncated Negative Exponential, and Plateaued Negative Exponential drop-offs. Of particular distinction between these are their behaviors directly surrounding the crime scenes and their rates of decay with increased distance.
204
+
205
+ # Linear
206
+
207
+ With this model, we assume a linear drop-off rate of the form:
208
+
209
+ $$
+ f (x) = \left\{ \begin{array}{l l} a - b x & \text {if } x \leq a / b \\ 0 & \text {if } x > a / b. \end{array} \right.
+ $$
212
+
213
+ ![](images/81e6b041158b70e45f5bfe60715f131b83611283610e573ed2dfaf9bf9ec2580.jpg)
214
+ Figure 1: A decay plot.
215
+
216
+ This simple model assumes that the expected probability of finding the criminal at a given location decreases linearly with distance from the crime site until the distance is greater than $a / b$ , after which it remains zero. Despite the simplicity of this model, its behavior is clearly somewhat unrealistic.
217
+
218
+ # Normal
219
+
220
+ This model was suggested by Brantingham and Brantingham (1981) to include their hypothesized buffer zone. The function increases and then decreases exponentially around the radius at which the buffer zone places the maximum:
221
+
222
+ $$
+ f (x) = \frac {a}{\sigma \sqrt {2 \pi}} e ^ {- \frac {(x - \mu) ^ {2}}{2 \sigma ^ {2}}}.
+ $$
225
+
226
+ Although this does not fit well with our idea of serial killers generally living close to their crime scenes, it may be useful when considering other types of serial criminals, such as robbers and burglars, who are expected on average to live further from their crimes.
227
+
228
+ # Negative Exponential
229
+
230
+ Here we assume a negative exponential drop-off rate of the form:
231
+
232
+ $$
233
+ f (x) = e ^ {- c x}
234
+ $$
235
+
236
+ where $f(0) = 1$ is the maximum value. Thus, this function assumes that the probability of the killer's location is highest at the specific crime scenes, from which it decays exponentially. Although this contradicts the idea of a "buffer zone" presented by Brantingham & Brantingham (1981), this model is shown to work well in its predictions. According to Rhodes, it showed a much better fit than the Normal curve when compared with the actual locations of serial burglars, robbers, and rapists relative to their crimes (Rhodes & Conly, 1981; Kent, 2003).
239
+
240
+ # Truncated Negative Exponential
241
+
242
+ Here we assume a linear growth from each crime scene to a certain point where the probability is maximal and then drops exponentially:
243
+
244
+ $$
+ f (x) = \left\{ \begin{array}{l l} 1 + b (x - x _ {0}) & \text {if } x \leq x _ {0} \\ e ^ {- c (x - x _ {0})} & \text {if } x > x _ {0}. \end{array} \right.
+ $$
247
+
248
+ This does support the idea of a buffer zone and has a quick drop-off after some maximal distance. In this model, we are able to specify the parameters $b$, $c$, and $x_0$, which may help when considering different types of criminals and their average observed distance from crime scenes, or when considering different social profiles of criminals and the effects of such profiles on distance.
249
+
250
+ # Plateaued Negative Exponential
251
+
252
+ A special case of the truncated negative exponential occurs when we let $b = 0$ . Instead of the usual rise-and-fall, the function stays level when $x \leq x_0$ and from there drops off like a regular negative exponential. The function takes the form
253
+
254
+ $$
+ f (x) = \left\{ \begin{array}{l l} 1 & \text {if } x \leq x _ {0} \\ e ^ {- c (x - x _ {0})} & \text {if } x > x _ {0}. \end{array} \right.
+ $$
257
+
258
+ This supports the idea that, although there exists a buffer zone around the crime scenes in which the killer is less likely to live, the area spanned when moving outward from a crime scene grows quadratically with distance. Even while maintaining a constant probability at each grid point, the total probability at a given distance is therefore innately higher as distance increases. In other words, the buffer zone is not derived from criminals preferring crimes past a certain distance, but rather from opportunities increasing with distance because the area does.
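The five decay profiles of this section can be sketched as simple Python functions. The parameter names ($a$, $b$, $c$, $x_0$, $\mu$, $\sigma$) follow the equations above, but the default values here are purely illustrative assumptions, not fitted constants:

```python
import math

def linear(x, a=1.0, b=0.1):
    # Linearly decreasing until a/b, zero afterwards
    return max(a - b * x, 0.0)

def normal(x, a=1.0, mu=5.0, sigma=2.0):
    # Buffer-zone model: peaks at distance mu from the crime scene
    return a / (sigma * math.sqrt(2 * math.pi)) * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def negative_exponential(x, c=0.5):
    # Maximal at the crime scene itself, f(0) = 1
    return math.exp(-c * x)

def truncated_negative_exponential(x, b=0.2, c=0.5, x0=3.0):
    # Linear rise to a peak of 1 at x0, then exponential decay
    return 1 + b * (x - x0) if x <= x0 else math.exp(-c * (x - x0))

def plateaued_negative_exponential(x, c=0.5, x0=3.0):
    # Flat plateau of 1 out to x0, then exponential decay
    return 1.0 if x <= x0 else math.exp(-c * (x - x0))
```

Any of these can be passed as the decay function $f$ in the algorithms of the next section.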
259
+
260
+ # 5 Our Approach
261
+
262
+ # 5.1 Predicting Criminal Location
263
+
264
+ To predict the criminal's location, we employ a strategy that, while similar in some superficial ways, greatly generalizes the strategy of locating the criminal by investigating the center of mass. Whereas the center of mass approach returns a single point (around which one can center a Gaussian distribution, for instance), our method attempts to utilize the locations of the criminal acts to the fullest by instead generating a distribution, assigning each point within the given region a probability. These probabilities can in turn be used to generate a map, showing which locations the algorithm considers to be of high interest and which can probably be ignored. By transforming the set of crime data points into a specially-constructed distribution rather than a handful of values, our algorithm allows for much more specialized, and in many cases much more accurate, results.
267
+
268
+ The main idea driving the algorithm is that any point $p$ can be assigned a relative "criminal location probability" by taking the sum of some function (mentioned in Metrics and Functions) of the distances between $p$ and prior incidents. It might help to imagine this as the sum of non-canceling "gravitational pulls" of the various locations: any point with a comparatively high total gravitational pull is going to be near many of the crime scenes, and therefore should rightly be assigned a high criminal location probability. Furthermore, the actual strength of the pull from each point can be weighted depending on its recency, allowing the algorithm to more quickly adapt to the changing habits of the criminal. We will currently treat all weights as equal to one, but we will discuss weighting in more detail in Time Coupling.
269
+
270
+ ![](images/a124566ec68adf8d8a8f952bd675020d057ed1d8f0cd287331d31707ba395d2e.jpg)
271
+ Figure 2: Example of a probability distribution. Red star signifies actual criminal's base.
272
+
273
+ # 5.1.1 Computation
274
+
275
+ As it is infeasible to calculate the value at every single point and extremely difficult to model analytically, our algorithm instead discretizes the problem by transforming the candidate search area into a grid and calculating the criminal location probability at each grid point. When increasing the fineness of the grid, which is allowed for in the input to our model, our algorithm better approximates the ideal, smooth, solution. Because the algorithm makes $l$ calculations of $f$ per grid point, it has $O(nml)$ runtime where $n$ is the number of lines on the grid parallel to the Prime Meridian, $m$ is the number of lines parallel to the equator, and $l$ is the number of unlawful acts perpetrated by the criminal. As $l$ is generally relatively small and, at least in our simulations, we often set $m$ and $n$ to be equal, this algorithm can essentially be considered to be $O(n^2)$ , which is going to be very fast for any reasonable mesh size — consider that a single 2GHz machine can use this algorithm to partition an area the size of England into $36 \times 36$ meter blocks and create a corresponding probability distribution using some interesting distance functions in only a matter of minutes!
276
+
277
+ # 5.1.2 Pseudocode
278
+
279
+ To clear up any ambiguities, we provide some pseudocode below for the criminal-finding function.
280
+
281
+ Program 1 Criminal Location Finding Algorithm
+ $G$ is the set of grid points after the search area is partitioned by the mesh
+ $C$ is the set of past crimes, each of which has a location and date
+ $f(d)$ is the distance decay function, which returns a higher number for more probable locations
+ Prob is the probability of the killer location based on distance decay effects over the grid
+
+ Function criminalLocationProb(crimes $C$, function $f$):
+     Let Prob be a mapping from $G$ into $\mathbb{R}$
+     for $g \in G$ do
+         /* Calculate cumulative distance effect from all $c \in C$ */
+         Prob[$g$] = 0
+         for $c \in C$ do
+             Prob[$g$] = Prob[$g$] + $f$(distance between $c$ and $g$)
+     Normalize the values of Prob so that the sum of the values is 1
+     return Prob
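A direct, if hypothetical, Python rendering of the criminal-finding function is given below. Grid points are generated from coordinate lists `xs` and `ys` (our naming), and `f` may be any of the decay functions from Metrics and Functions:

```python
import math

def criminal_location_prob(crimes, f, xs, ys):
    """Normalized relative probability of the killer's base at each grid point."""
    prob = {}
    for gx in xs:
        for gy in ys:
            # Cumulative distance effect from all crimes c
            prob[(gx, gy)] = sum(f(math.hypot(gx - cx, gy - cy)) for cx, cy in crimes)
    total = sum(prob.values())
    return {g: p / total for g, p in prob.items()}   # values sum to 1
```

For example, `criminal_location_prob(crimes, lambda d: math.exp(-0.5 * d), xs, ys)` computes the distribution under a negative exponential decay.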
288
+
289
+ # 5.1.3 Why it Works
290
+
291
+ Our algorithm has many properties that lead to the generation of useful results.
292
+
293
+ # Prioritized Search
294
+
295
+ The probability distribution that our algorithm returns lends itself to a very simple procedure for determining in what order to search various locations for the criminal: simply search the possible locations in descending order of their probabilities! This simple strategy often leads to good results; for most distance decay functions, it required us to go through only about $15\%$ of the search area to find Peter Sutcliffe, one of our case-study serial killers.
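The prioritized search itself reduces to one sort. This is a sketch over a hypothetical probability mapping `prob` from grid points to values, such as the one the location-finding algorithm produces:

```python
def search_order(prob):
    # Grid points in descending order of killer-location probability
    return sorted(prob, key=prob.get, reverse=True)
```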
296
+
297
+ # Point Clustering
298
+
299
+ One of the nicest features of our algorithm is its handling of point clusters without any outside assistance. Because all crime scenes exert a "gravitational pull," points near any crime scene receive a probability boost, creating localized "hot-spots" where the algorithm thinks the criminal has some reasonable chance of being caught. This effect can be seen in the adjacent figure, in which two different clumps form two
300
+
301
+ ![](images/8ae8d80529c9d6fd27cb7b1e033bd5c28a68133ab3ed2f40fa7096092345b4f9.jpg)
302
+
303
+ hotspots, with their relative sizes and magnitudes proportional to the number of points inside.
304
+
305
+ # Function/Parameter Independence
306
+
307
+ Another major strength of our algorithm is its generality. Because it works with any reasonable distance decay function and arbitrary parameters, the police can handpick functions based on statistics and their historical successes in catching the perpetrators of various types of crimes. For example, because burglars and serial killers generally exhibit different behaviors in selecting how far they are willing to travel to commit their crimes, a normal distance decay function might be superior for catching burglars whereas a negative exponential distance decay function might be superior for catching serial killers. As our algorithm is extremely flexible, the police can quickly and easily adapt their search techniques to best match the criminal in question.
308
+
309
+ # 5.2 Predicting Location of the Next Crime
310
+
311
+ # 5.2.1 Method
312
+
313
+ In order to find the location of the next crime, we look for the point on the grid which most closely matches the pattern of the other crimes. If we consider the criminal location probability distribution produced after adding a crime event to our grid, the most probable location for the next crime would be the spot which generates a distribution most similar to the one produced before adding it. This gives us the most likely location for a future crime. Furthermore, for potential crime scenes other than the most probable, we estimate how much worse they are by measuring how much the produced distribution deviates from the original. This gives us a distribution over our search area which can be used to allocate resources for crime prevention.
314
+
315
+ Our algorithm works as follows: First we compute the criminal location probability distribution for the search area. Next we iterate through all of the grid points in the search area, adding a new "virtual crime scene" at each point, calculating the new criminal location probability distribution with the additional crime scene. We then compare each of these distributions to the original one by summing the squares of the differences between corresponding grid points, then assigning this pseudo probability deviation number to the location of the "virtual crime scene." This gives us a distribution over the search area, where lower numbers mean the point is a more likely spot for the criminal's next crime because adding this point as a crime location deviates less from the original probability distribution.
316
+
317
+ # 5.2.2 Edge Behavior
318
+
319
+ If all of the points on the mesh are tested as possible crime scenes, then there is a tendency for points near the edge to be rated higher than appropriate. This is because a point near the edge has fewer other points around it, so a crime scene added there affects the criminal location probability distribution at fewer points, and thus has a smaller total effect on the produced distribution. To compensate, we add a buffer zone around our search area in which the criminal location probability distribution is calculated but no virtual crime scenes are added. This effectively removes the bad edge behavior.
320
+
321
+ # 5.2.3 Complexity
322
+
323
+ When adding each point as a virtual crime scene, we must calculate a different criminal location probability distribution for every point in our search area, thus requiring approximately $n^2$ operations, each of which is $O(n^2)$ (see Predicting Criminal Location - Computation). Therefore the algorithm for determining the location of the next crime has a complexity of $O(n^4)$ . This is a polynomial time algorithm, but it can still become very slow as $n$ increases. To keep our calculation time from getting out of control, we implement various optimizations.
324
+
325
+ Program 2 The killer-finding and future-predicting algorithm
+ $G$ is the set of grid points after the search area is partitioned by the mesh
+ $G^{\prime}$ is $G$ augmented with a $40\%$ boundary buffer zone
+ $C$ is the set of past crimes, each of which has a location and date
+ $f(d)$ is the distance decay function, which returns a higher number for more probable locations
+ Prob is the probability of the killer location based on distance decay effects over the grid
+ $\Delta$Prob measures the change in the distribution Prob when adding a new crime scene
+
+ Function criminalLocationProb(crimes $C$, function $f$):
+     Let Prob be a mapping from $G^{\prime}$ into $\mathbb{R}$
+     for $g \in G^{\prime}$ do
+         /* Calculate cumulative distance effect from all $c \in C$ */
+         Prob[$g$] = 0
+         for $c \in C$ do
+             Prob[$g$] = Prob[$g$] + $f$(distance between $c$ and $g$)
+     Normalize the values of Prob so that the sum of the values is 1
+     return Prob
+
+ Function nextCrimeProb(crimes $C$, function $f$):
+     killer_prob_distrib = criminalLocationProb($C$, $f$)
+     Let $\Delta$Prob be a mapping from $G^{\prime}$ into $\mathbb{R}$
+     for $g \in G^{\prime}$ do
+         Let $C^{\prime}$ be $C$ augmented with a "virtual crime" at $g$
+         $V$ = criminalLocationProb($C^{\prime}$, $f$)
+         /* Calculate sum of squared differences between killer_prob_distrib and $V$ */
+         $\Delta$Prob[$g$] = 0
+         for $h \in G^{\prime}$ do
+             $\Delta$Prob[$g$] = $\Delta$Prob[$g$] + ($V$[$h$] - killer_prob_distrib[$h$])$^{2}$
+     Normalize the values of $\Delta$Prob so that the sum of the values is 1
+     return $\Delta$Prob
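In Python, the same procedure might look like the following hypothetical sketch. The inner helper mirrors criminalLocationProb, and lower output values mark more likely next-crime sites, just as lower values of $\Delta$Prob do:

```python
import math

def criminal_location_prob(crimes, f, grid):
    # Normalized killer-location distribution over the grid
    raw = {g: sum(f(math.hypot(g[0] - cx, g[1] - cy)) for cx, cy in crimes) for g in grid}
    total = sum(raw.values())
    return {g: v / total for g, v in raw.items()}

def next_crime_prob(crimes, f, grid):
    base = criminal_location_prob(crimes, f, grid)
    delta = {}
    for g in grid:
        # Recompute the distribution with a "virtual crime" at g
        trial = criminal_location_prob(crimes + [g], f, grid)
        # Squared deviation of the trial distribution from the baseline
        delta[g] = sum((trial[h] - base[h]) ** 2 for h in grid)
    total = sum(delta.values())
    return {g: v / total for g, v in delta.items()}   # lower = more likely site
```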
340
+
341
+ ![](images/da3ab0604d1d294ba1f12a44eb6ec726de45653a0790dc76b86967b892849814.jpg)
342
+ (a) No buffer.
343
+ Figure 3: Varying buffer percentages to illustrate boundary conditions.
344
+
345
+ ![](images/36ff76867b70cf0253892ff1e4a838261b9560e154d5371b0b786ee8dcfc769f.jpg)
346
+ (b) $30\%$ buffer.
347
+
348
+ The single most important optimization implemented in our algorithm is caching. Suppose our search area has $N$ crime scenes. For each point on the mesh we add a virtual crime scene and then calculate the criminal location probability distribution over the search area using the $N + 1$ points. When calculating the new probability distribution with $N + 1$ points, instead of doing the full calculation, we use the original distribution for the first $N$ points and combine it with a distribution for only the new point in a way that maintains normalization. This gives the algorithm a factor of $N$ speedup.
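The caching idea can be seen in the following sketch (hypothetical names, not the paper's code): the unnormalized contribution of the $N$ real crimes is computed once and reused for every virtual crime, which is the source of the factor-$N$ speedup.

```python
import math

def virtual_distributions(crimes, f, grid):
    # Computed once: unnormalized sum of decay contributions from all real crimes
    base_raw = {g: sum(f(math.hypot(g[0] - cx, g[1] - cy)) for cx, cy in crimes)
                for g in grid}
    for v in grid:
        # Each virtual crime adds only one new term per grid point, then renormalizes
        trial = {g: base_raw[g] + f(math.hypot(g[0] - v[0], g[1] - v[1])) for g in grid}
        total = sum(trial.values())
        yield v, {g: t / total for g, t in trial.items()}
```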
349
+
350
+ Our algorithm is also nearly perfectly parallelizable. By multi-threading the code, it can be run over a number of cores with a speedup roughly equal to the number of cores used. This makes very high resolution calculations practical if a multicore machine or computing cluster is available.
351
+
352
+ # 5.3 Predicting Time of Future Crimes
353
+
354
+ # 5.3.1 Method
355
+
356
+ When considering a serial killer's next time or date of killing, we look at a basic linear regression model: we find the average time between killings to predict the next killing from the last. To test our model, we remove the last of the $N$ killing times and attempt to predict it using our regression:
357
+
358
359
+
360
+ $$
361
+ t _ {N} = t _ {N - 1} + \frac {t _ {N - 1} - t _ {1}}{N - 1}.
362
+ $$
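This mirrors the formula above. The sketch below is a hypothetical helper whose input is the list of the $N-1$ known killing times (so `len(times)` plays the role of $N-1$), in whatever time unit the data uses:

```python
def predict_next_time(times):
    # t_N = t_{N-1} + (t_{N-1} - t_1) / (N - 1), with N - 1 = len(times)
    times = sorted(times)
    return times[-1] + (times[-1] - times[0]) / len(times)
```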
363
+
364
+ # 5.3.2 Motivation
365
+
366
+ Although a linear regression may not be realistic for every killer, it applies more generally to all: some may increase their frequency of killings as habituation sets in, and some may decrease their frequency. Hypothesized by Hickey's Trauma Control Model and verified by Arndt's analysis of Newton's Hunting Humans, a collection of data on serial killers, the killing rate of serial killers generally increases with time; however, many sources state that the killing rate may also decrease in some cases (Arndt et al., 2004). Because other interpolation techniques, such as exponential or quadratic, may greatly over-predict future crimes because of strong end-behavior, we choose the linear regression for its simplicity and universal niceness.
369
+
370
+ ![](images/86993eb9aeb332da4022e47a2b42d0c80f60188cd0302cc5af56b34c1fe5c777.jpg)
371
+ Figure 4: Predicted Nth crime in red; actual Nth crime in black.
372
+
373
+ # 5.4 Time Coupling
374
+
375
+ Having developed methods for predicting both the criminal location and where the next crime will occur, we now extend these methods by adding in the concept of time. As described so far, the algorithm neglects all chronological properties of the killings, discarding one of the most important pieces of information available to us. Ideally, we want to be able to quickly adapt to the criminal's motion if they were to move to a different location or otherwise alter behavior over time.
376
+
377
+ To account for these possible changes, we weight events more heavily the more recently they occurred. Doing this allows for recent changes in behavior to be more strongly reflected in future predictions, which allows for better calibration with the criminal's latest motives. To keep the algorithm as general as before, we will modify the algorithm to take a weighting function as an input, which is then multiplied by $f(\mathrm{dist})$ when we calculate $\mathrm{Prob}[g]$ .
378
+
379
+ Although the composition of the weighting function $w$ can be anything, we notice that
380
+
381
+ $$
382
+ w (t) = \frac {(d - c) a}{\sqrt {t} + a} + c
383
+ $$
384
+
385
+ yields reasonable results. Due to time constraints, this $w$ was the only reasonable weighting function which we managed to study in depth, and therefore any future
386
+
387
+ Program 3 The killer-finding and future-predicting algorithm with time-based weighting
388
+ $G$ is the set of grid points after the search area is partitioned by the mesh
+ $G^{\prime}$ is $G$ augmented with a $30\%$ boundary buffer zone
389
+ $C$ is the set of past crimes, each of which has a location and date
390
+ $f(d)$ is the distance decay function which returns a higher number for more probable locations
391
+ $w(t)$ is the weight function which returns a higher number for smaller inputs
392
+ Prob is the probability of the killer location based on distance decay effects for the grid
393
+ $\Delta \mathrm{Prob}$ measures the change in distribution of Prob when adding a new crime scene
394
+
395
+ Function criminalLocationProb (crimes $C$ , decay_function $f$ , weight_function $w$ ):
+ Let Prob be a mapping from $G'$ into $\mathbb{R}$
396
+ for $g \in G'$ do
397
+ /* Calculate cumulative distance effect from all $c \in C$ */
398
+ $\operatorname{Prob}[g] = 0$
399
+ for $c \in C$ do
400
+ $\operatorname{Prob}[g] = \operatorname{Prob}[g] + w(\text{time since } c_{\text{time}}) \cdot f(\text{distance between } c \text{ and } g)$
401
+ Normalize the values of Prob so that the sum of the values is 1
402
+ return Prob
403
+
404
+ Function futureCrimeProb (crimes $C$ , decay_function $f$ , weight_function $w$ ):
+ killer_prob_distrib $=$ criminalLocationProb $(C,f,w)$
+ Let $\Delta$ Prob be a mapping from $G^{\prime}$ into $\mathbb{R}$
+ for $g\in G^{\prime}$ do
+ Let $C^\prime$ be $C$ augmented with a "virtual crime" at $g$
+ $V =$ criminalLocationProb $(C^{\prime},f,w)$
+ /* Calculate sum of squared differences between killer_prob_distrib and $V$ */
+ $\Delta \mathrm{Prob}[g] = 0$
+ for $h\in G^{\prime}$ do
+ $\Delta \mathrm{Prob}[g] = \Delta \mathrm{Prob}[g] + (\mathrm{V}[h] - \mathrm{killer\_prob\_distrib}[h])^{2}$
+ Normalize the values of $\Delta$ Prob so that the sum of the values is 1
+ return $\Delta$ Prob
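A direct Python transcription of this pseudocode might look as follows; the grid, crime list, decay function, and weight function are illustrative stand-ins, with Euclidean distance on a small toy grid:

```python
import numpy as np
from itertools import product

def criminal_location_prob(grid, crimes, f, w, now):
    """Weighted distance-decay surface over the grid points.
    grid: list of (x, y); crimes: list of (x, y, t);
    f: decay function of distance; w: weight function of elapsed time."""
    prob = np.zeros(len(grid))
    for i, g in enumerate(grid):
        for (cx, cy, ct) in crimes:
            dist = np.hypot(g[0] - cx, g[1] - cy)
            prob[i] += w(now - ct) * f(dist)
    return prob / prob.sum()                      # normalize to a distribution

def future_crime_prob(grid, crimes, f, w, now):
    """Sum-of-squared-differences perturbation caused by a virtual crime."""
    base = criminal_location_prob(grid, crimes, f, w, now)
    dprob = np.zeros(len(grid))
    for i, g in enumerate(grid):
        virtual = crimes + [(g[0], g[1], now)]    # virtual crime at g
        v = criminal_location_prob(grid, virtual, f, w, now)
        dprob[i] = np.sum((v - base) ** 2)
    return dprob / dprob.sum()

# Hypothetical toy run: 5x5 grid, three crimes, exponential decay, w(t) = 1
grid = [(x, y) for x, y in product(range(5), range(5))]
crimes = [(1, 1, 0.0), (2, 1, 1.0), (2, 2, 2.0)]
f = lambda d: np.exp(-d)
w = lambda t: 1.0
p = criminal_location_prob(grid, crimes, f, w, now=3.0)
q = future_crime_prob(grid, crimes, f, w, now=3.0)
```

Both outputs are probability distributions over the grid, mirroring the normalization steps in the pseudocode.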
407
+
408
+ ![](images/aa1e034d560588a42f6cf583e80925110e6e70685d845788e496f5603dd7ba16.jpg)
409
+ Figure 5: A decay plot.
410
+
411
+ time-based models use this $w$ as the time weighting function.
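The weighting function above is easy to evaluate directly; in the sketch below the parameter values for $a$, $c$, and $d$ are illustrative choices, not fitted ones:

```python
import math

def w(t, a=5.0, c=0.2, d=1.0):
    """Time weight w(t) = (d - c)*a / (sqrt(t) + a) + c.
    d is the weight given to a crime committed just now (t = 0); the weight
    decays toward the floor c as t grows, with a controlling the decay speed.
    These parameter values are illustrative, not the paper's."""
    return (d - c) * a / (math.sqrt(t) + a) + c

print(w(0))    # → 1.0  (the most recent crime gets full weight d)
print(w(100))  # older crimes count less, approaching the floor c
```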
412
+
413
+ # 5.4.1 Why It Helps
414
+
415
+ To help emphasize the benefits of time coupling, we consider a randomly generated scenario. The points in the following graphs were automatically generated to be close-to-linear, and some randomly selected set of times generated by a Poisson process were matched to these points in one of two ways:
416
+
417
+ - In the "correlated" case, the times were chosen to increase monotonically with $x$ . That is, the larger the value of $x$ , the larger the time that was assigned to it.
418
+ - In the "uncorrelated" case, the various times were assigned randomly to the points.
419
+
420
+ Notice how the addition of time-based weighting in the uncorrelated case had almost no effect when compared to the correlated case, in which the effect of the weighting was pronounced. As there was no correlation in the former case, any small perturbations will essentially cancel each other out, leaving each point with no expected
421
+
422
+ ![](images/93ad77298f5434bb9aec2c43f3cf16df26d01178c14df40637c6bc983e74a038.jpg)
423
+ (a) Next Crime Probability Plot with unweighted time, uncorrelated points.
424
+
425
+ ![](images/81f14f83a4e784229d99d0e67027a51dfdcbd58818f6bd60131be7b128600d68.jpg)
426
+ (b) Next Crime Probability Plot with weighted time, uncorrelated points.
427
+
428
+ ![](images/551407c086434fb408dde92ac67c49009acfb8c6712bd69c7e2856ee35acbcf4.jpg)
429
+ (c) Next Crime Probability Plot with unweighted time, correlated points.
430
+
431
+ ![](images/f79b8897cd99064209a3b89194e5a50933907b2ba83ed4faafc915a6a2125c04.jpg)
432
+ (d) Next Crime Probability Plot with weighted time, correlated points.
433
+
434
+ ![](images/64640e8e03055069d5b19f271734c6ea86b66851135c27ade38beeb181aece07.jpg)
435
+ (e) Criminal Location Probability Plot with unweighted time, correlated points.
436
+
437
+ ![](images/d5bd47fa81bfc7e6285f70ecd8d11c27ad25ac3caa764966ff5b9d7faf689861.jpg)
438
+ (f) Criminal Location Probability Plot with weighted time, correlated points.
439
+
440
+ change. Thus, the correctness of the plot is not drastically affected, if at all. In the latter case, however, the correlation was captured by the algorithm and the expected location of the next killing shifted rather drastically to accommodate this change. Intuitively, if a serial killer has been moving north one mile every day for a while, it is reasonable to assume that his or her future kills will be farther north than average.
441
+
442
+ Perhaps even more interesting, the location of the next crime adapted much more quickly than the estimated location of the criminal. Again, this is sensible: a gradual shift in the location of crimes is much more indicative of a behavior-based shift than one that is residence-based.
443
+
444
+ # 6 Result Discussion
445
+
446
+ To evaluate our models, including various choices for our distance decay function, we look at case study data (see Appendix A) for actual apprehended serial killers: Peter Sutcliffe and Chester Turner.
447
+
448
+ # 6.1 Metrics
449
+
450
+ We use two metrics in quantifying the quality of our results. The first is Error Distance and the second is Search Cost.
451
+
452
+ # Error Distance
453
+
454
+ This measures the distance between the calculated most probable spot and the actual spot "as the crow flies". When searching for the criminal, the error distance tells us how far our best guess was from the actual location. When predicting the next crime, it tells us how far the crime happened from the location where we allocated the most resources.
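One way to compute an "as the crow flies" distance from latitude/longitude pairs is the haversine great-circle formula; this sketch uses it with a mean Earth radius of 6371 km, and the coordinates in the example are arbitrary:

```python
import math

def error_distance(lat1, lon1, lat2, lon2):
    """Great-circle ("as the crow flies") distance in km between the
    predicted hot spot and the actual location, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

# Hypothetical example: two points in the Sutcliffe search area
print(round(error_distance(53.82, -1.57, 53.80, -1.65), 2))
```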
455
+
456
+ # Search Cost
457
+
458
+ This measures how many grid squares, out of the total, would need to be searched before the correct spot is found if we go through the squares in decreasing order of probability. When searching for the criminal, this tells us how much area would need to be searched to find the criminal. We assume no preference among locations of equal probability (the same color in our plots), so the cost may vary in actual police searches, where the initial search direction can differ. When predicting the location of the next crime, it tells us what percentage of our resources was wasted on locations that had higher priority than the actual location but saw no crime. This metric provides a much more realistic assessment of the quality of a prediction; however, in some instances the Error Distance still provides useful information.
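The Search Cost metric reduces to a ranking computation; a minimal sketch, assuming the grid probabilities are stored in a flat array and ties are broken arbitrarily:

```python
import numpy as np

def search_cost(prob, actual_index):
    """Percentage of grid squares searched, in decreasing order of predicted
    probability, before reaching the square that holds the true location.
    Ties are broken arbitrarily, mirroring the no-preference assumption."""
    order = np.argsort(-np.asarray(prob))          # most probable first
    rank = int(np.where(order == actual_index)[0][0]) + 1
    return 100.0 * rank / len(prob)                # percent of squares searched

# Hypothetical example: the true square is the 2nd most probable of 5
prob = [0.1, 0.4, 0.05, 0.3, 0.15]
print(search_cost(prob, actual_index=3))  # → 40.0
```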
459
+
460
+ # 6.2 Procedure
461
+
462
+ We use the model without the time-dependent weighting first. Thus for these calculations the times of the killings are irrelevant to the calculated probability distributions for the criminal's location and the location of his next crime; however, the times are used to calculate when the next crime will occur.
463
+
464
+ We begin by removing the most recent crime from our datasets. This emulates the position the police would be in while searching for the criminal. Next we calculate the criminal location probability distribution using each of our decay functions. The quality of these calculations is assessed using both of our metrics, comparing the predicted locations to the actual known locations of the killers' homes. Then we calculate the next crime location probability distribution for each decay function. These results are also assessed using the metrics. Finally, we repeat the calculations using the model with time-dependent weights. The results are presented below. Plots can be found in the Appendix.
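The leave-one-out procedure described above can be sketched as follows. Here `centroid_predict` is a toy stand-in for the model's actual next-crime predictor, the error is simple Euclidean distance in grid units, and the crime tuples and decay functions are invented for illustration:

```python
import numpy as np

def evaluate(crimes, decay_functions, predict):
    """Withhold the most recent crime, predict with each decay function,
    and report the distance from the prediction to the withheld crime."""
    *history, held_out = sorted(crimes, key=lambda c: c[2])  # drop newest
    results = {}
    for name, f in decay_functions.items():
        guess = predict(history, f)
        results[name] = np.hypot(guess[0] - held_out[0], guess[1] - held_out[1])
    return results

# Toy stand-in predictor: decay-weighted centroid of past crime sites
def centroid_predict(history, f):
    pts = np.array([(x, y) for x, y, _ in history], dtype=float)
    center = pts.mean(axis=0)
    wts = np.array([f(np.hypot(x - center[0], y - center[1])) for x, y in pts])
    return tuple((pts * wts[:, None]).sum(axis=0) / wts.sum())

crimes = [(0, 0, 1), (2, 0, 2), (1, 2, 3), (2, 2, 4)]   # (x, y, time)
decays = {"Linear": lambda d: max(0.0, 5 - d), "Neg. Exp.": lambda d: np.exp(-d)}
errors = evaluate(crimes, decays, centroid_predict)
```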
465
+
466
+ <table><tr><td>Method</td><td>Criminal Home Uncoupled</td><td>Criminal Home Coupled</td><td>Next Crime Uncoupled</td><td>Next Crime Coupled</td></tr><tr><td colspan="5">Sutcliffe</td></tr><tr><td>Linear</td><td>18.17/17.88</td><td>19.26/29.49</td><td>3.99/9.47</td><td>3.41/8.73</td></tr><tr><td>Neg. Exp.</td><td>15.97/30.24</td><td>17.05/30.24</td><td>0.27/1.46</td><td>0.26/1.46</td></tr><tr><td>Trunc. Neg. Exp.</td><td>14.44/24.32</td><td>15.53/24.32</td><td>1.98/8.00</td><td>1.65/7.28</td></tr><tr><td>Plateau Neg. Exp.</td><td>15.00/25.07</td><td>16.33/25.07</td><td>1.25/5.70</td><td>1.04/4.98</td></tr><tr><td>Normal</td><td>15.00/22.83</td><td>15.99/22.92</td><td>0.81/4.27</td><td>0.63/3.58</td></tr><tr><td colspan="5">Chester Turner</td></tr><tr><td>Linear</td><td>58.64/5.52</td><td>60.12/5.92</td><td>40.90/4.12</td><td>44.46/5.10</td></tr><tr><td>Neg. Exp.</td><td>59.80/5.07</td><td>60.79/5.07</td><td>41.06/3.12</td><td>45.93/3.26</td></tr><tr><td>Trunc. Neg. Exp.</td><td>59.80/4.38</td><td>60.79/4.38</td><td>40.68/3.46</td><td>45.26/4.12</td></tr><tr><td>Plateau Neg. Exp.</td><td>59.80/4.48</td><td>60.79/4.48</td><td>40.79/3.26</td><td>45.58/3.46</td></tr><tr><td>Normal</td><td>66.04/4.46</td><td>66.70/4.46</td><td>40.83/3.26</td><td>43.38/3.46</td></tr></table>
467
+
468
+ Table 1: Search Costs / Error Distance using various functions for both the Sutcliffe and Chester Turner data.
469
+
470
+ # 7 Conclusion
471
+
472
+ All examined distance decay functions tend to do a good job when examining "good" data, which is data in which the next kill or killer location tends to be in the general vicinity of the killings. In the case of Peter Sutcliffe, our algorithm predicted Sutcliffe's location rather accurately on any distance decay function and guessed the location of the next crime almost exactly in many cases.
473
+
474
+ Unfortunately, the data can be misleading. In the case of Chester Turner, who exclusively committed murders to the east of his home, the algorithm unsurprisingly failed to guess his home address accurately. This is unavoidable: there is no method (as far as we can tell) to accurately guess the killer's "hub" location if it is sufficiently removed from the locations of the crimes themselves.
475
+
476
+ In either case, it seems that the guessing accuracy for the next crime location is much better than that for the location of the criminal. While this result is somewhat surprising, as the guess for the next crime location is itself based on the estimated killer location, it is still possible to explain. One simple explanation of this phenomenon, which can be verified visually, is that killers tended to cluster their killings closely, making the next crime easier to guess. Another possible explanation is that the increased number of calculations required to generate the estimated next location resulted in significantly smaller random fluctuations, meaning that the data is "smoother" and therefore more likely to behave well.
477
+
478
+ Contrary to our expectations, the addition of time coupling generally, but not always, decreased our performance. While we don't currently have any explanation much better than "bad luck" for this behavior, this result is something that should
479
+
480
+ definitely be examined before serious use of this tool by the police. It seems that time coupling gives superior performance only when the spatial and temporal aspects of the previous crimes are correlated. Without time coupling, our model misses this pattern in the data, but the time-coupled version finds it.
481
+
482
+ Even with the model's flaws, we believe that it can be a useful tool for any police station to employ. If the model managed to give us such accurate data for predicting the whereabouts of Sutcliffe's next victim, it is not unreasonable to believe that the tool could repeat this behavior for larger datasets. With this predictive power in hand, the police can better prepare for upcoming attacks and apprehend the criminal.
483
+
484
+ # 8 Future Research
485
+
486
+ # 8.1 Incorporating Statistics and Landscapes
487
+
488
+ Although our model avoids human bias toward overconfidence and predisposed predictions, one might argue that "information that a person, animal or institution knows about a physical or social environment" (Gigerenzer & Selten, 2001, 187) gives humans an advantage in predicting locations. We would therefore like to incorporate that information into our model: the landscape, the personality profile, and the population density. Eventually, one may also be able to specify social personality regions - for example, rich areas versus poor, known drug-trafficking areas, etc. We believe this augmentation would not only lead to better predictions by our model but also give it a clearer advantage over human predictors. In the cases we consider, it is not immediately obvious where the serial criminal lives or will commit their next crime. Thus, we favor probabilities and statistics over human bias.
489
+
490
+ Different personality types of criminals (e.g., team killers, sexually motivated killers, burglars, etc.) also commit crimes at different distances from their base. Team killers are less likely to kill women and live in a more geographically localized area; sexually motivated killers are more likely to use a more personal killing method, such as strangulation, and to target strangers, but tend to be more geographically stable than non-sexually motivated killers (Arndt et al., 2004). These personality inferences may be made by surveying crime scenes. Our model could therefore include optional inputs that let police specify personality traits, which we would use to adjust our parameters automatically and predict the most probable distances more easily.
491
+
492
+ <table><tr><td>Serial Killer</td><td>Predicted Crime</td><td>Actual Crime</td></tr><tr><td>Sutcliffe</td><td>09-Dec-1980</td><td>17-Nov-1980</td></tr><tr><td>Zodiac Killer</td><td>30-Dec-1969</td><td>11-Oct-1969</td></tr><tr><td>Jack the Ripper</td><td>07-Oct-1888</td><td>09-Nov-1888</td></tr><tr><td>Grim Sleeper</td><td>24-Feb-2005</td><td>01-Jan-2007</td></tr><tr><td>Chester Turner</td><td>01-Jan-1999</td><td>06-Apr-1998</td></tr></table>
493
+
494
+ # Appendix A
495
+
496
+ Serial Killer Case Study Data
497
+
498
+ <table><tr><td>Serial Killer</td><td>Latitude</td><td>Longitude</td><td>Date of Killing</td></tr><tr><td rowspan="18">Sutcliffe</td><td>53°49&#x27;22.72&quot;N</td><td>1°34&#x27;38.03&quot;W</td><td>November 17 1980, 9:25 pm</td></tr><tr><td>53°48&#x27;30.95&quot;N</td><td>1°40&#x27;18.26&quot;W</td><td>August 20 1980, 11:00 pm</td></tr><tr><td>53°48&#x27;47.58&quot;N</td><td>1°34&#x27;3.23&quot;W</td><td>September 2 1979, 1:00 am</td></tr><tr><td>53°42&#x27;40.28&quot;N</td><td>1°52&#x27;18.99&quot;W</td><td>April 4 1979, 11:55 pm</td></tr><tr><td>53°27&#x27;46.12&quot;N</td><td>2°13&#x27;35.21&quot;W</td><td>May 16 1978, 11:00 pm</td></tr><tr><td>53°39&#x27;17.04&quot;N</td><td>1°46&#x27;46.20&quot;W</td><td>January 31 1978, 9:25 pm</td></tr><tr><td>53°47&#x27;57.94&quot;N</td><td>1°45&#x27;59.09&quot;W</td><td>January 21 1978, 9:30 pm</td></tr><tr><td>53°25&#x27;57.94&quot;N</td><td>2°15&#x27;2.47&quot;W</td><td>October 1 1977, 9:30 pm</td></tr><tr><td>53°49&#x27;4.56&quot;N</td><td>1°31&#x27;58.99&quot;W</td><td>June 26 1977, 2:15 am</td></tr><tr><td>53°48&#x27;38.70&quot;N</td><td>1°45&#x27;49.79&quot;W</td><td>April 23 1977, 11:15 pm</td></tr><tr><td>53°50&#x27;1.26&quot;N</td><td>1°30&#x27;9.15&quot;W</td><td>February 5 1977, 11:30 pm</td></tr><tr><td>53°48&#x27;28.75&quot;N</td><td>1°31&#x27;58.58&quot;W</td><td>January 20 1976, 7:30 pm</td></tr><tr><td>53°52&#x27;7.83&quot;N</td><td>1°54&#x27;28.10&quot;W</td><td>July 5 1975, 1:30 am</td></tr><tr><td>53°49&#x27;7.15&quot;N</td><td>1°32&#x27;31.71&quot;W</td><td>October 30 1975, 1:30 am</td></tr><tr><td>53°48&#x27;55.72&quot;N</td><td>1°32&#x27;28.77&quot;W</td><td>December 14 1977, 8:30 pm</td></tr><tr><td>53°47&#x27;12.39&quot;N</td><td>1°43&#x27;47.22&quot;W</td><td>July 10 1977, 3:20 am</td></tr><tr><td>53°43&#x27;48.65&quot;N</td><td>1°51&#x27;54.77&quot;W</td><td>August 15 1975, 11:45 pm</td></tr><tr><td>53°54&#x27;52.78&quot;N</td><td>1°56&#x27;16.61&quot;W</td><td>August 27 1975, 10:30 
pm</td></tr><tr><td rowspan="13">Chester Turner</td><td>33°56&#x27;36.66&quot;N</td><td>118°16&#x27;50.04&quot;W</td><td>March 9, 1987</td></tr><tr><td>33°56&#x27;24.32&quot;N</td><td>118°16&#x27;49.98&quot;W</td><td>October 29 1987</td></tr><tr><td>33°56&#x27;49.26&quot;N</td><td>118°16&#x27;57.66&quot;W</td><td>January 20 1989</td></tr><tr><td>33°57&#x27;23.37&quot;N</td><td>118°16&#x27;57.85&quot;W</td><td>September 23 1989</td></tr><tr><td>33°56&#x27;50.59&quot;N</td><td>118°16&#x27;50.55&quot;W</td><td>September 30 1992</td></tr><tr><td>33°56&#x27;50.59&quot;N</td><td>118°16&#x27;50.55&quot;W</td><td>November 16 1992</td></tr><tr><td>33°56&#x27;53.00&quot;N</td><td>118°16&#x27;57.77&quot;W</td><td>December 16 1992</td></tr><tr><td>33°58&#x27;7.02&quot;N</td><td>118°16&#x27;57.78&quot;W</td><td>April 2 1993</td></tr><tr><td>33°58&#x27;40.96&quot;N</td><td>118°17&#x27;5.57&quot;W</td><td>May 16 1993</td></tr><tr><td>33°58&#x27;0.20&quot;N</td><td>118°16&#x27;59.76&quot;W</td><td>February 12 1995</td></tr><tr><td>33°56&#x27;56.43&quot;N</td><td>118°16&#x27;42.38&quot;W</td><td>November 6 1996</td></tr><tr><td>34° 2&#x27;56.29&quot;N</td><td>118°15&#x27;20.32&quot;W</td><td>February 3 1998</td></tr><tr><td>34° 2&#x27;31.22&quot;N</td><td>118°14&#x27;26.07&quot;W</td><td>April 6 1998</td></tr><tr><td rowspan="4">Zodiac Killer</td><td>38° 5&#x27;49.94&quot;N</td><td>122° 8&#x27;59.18&quot;W</td><td>December 20 1968, 11:15 pm</td></tr><tr><td>38° 7&#x27;12.90&quot;N</td><td>122°11&#x27;30.51&quot;W</td><td>July 4 1969, 11:55 pm</td></tr><tr><td>38° 34&#x27;9.57&quot;N</td><td>122°14&#x27;7.71&quot;W</td><td>September 27 1969, 6:15 pm</td></tr><tr><td>37° 47&#x27;19.34&quot;N</td><td>122°27&#x27;25.98&quot;W</td><td>October 11, 1969, 9:55 pm</td></tr><tr><td rowspan="12">Grim Sleeper</td><td>34° 0&#x27;17.48&quot;N</td><td>118°19&#x27;17.57&quot;W</td><td>August 12 1986</td></tr><tr><td>34° 0&#x27;14.98&quot;N</td><td>118°18&#x27;35.63&quot;W</td><td>September 11 
1988</td></tr><tr><td>33° 59&#x27;37.53&quot;N</td><td>118°15&#x27;4.83&quot;W</td><td>January 10 1987</td></tr><tr><td>33° 58&#x27;41.68&quot;N</td><td>118°17&#x27;33.75&quot;W</td><td>August 10 1985</td></tr><tr><td>33° 58&#x27;32.96&quot;N</td><td>118°18&#x27;7.76&quot;W</td><td>August 14 1986</td></tr><tr><td>33° 57&#x27;54.10&quot;N</td><td>118°19&#x27;0.30&quot;W</td><td>March 9 2002</td></tr><tr><td>33° 57&#x27;27.36&quot;N</td><td>118°18&#x27;31.85&quot;W</td><td>November 1 1987</td></tr><tr><td>33° 57&#x27;7.36&quot;N</td><td>118°18&#x27;31.24&quot;W</td><td>April 16 1987</td></tr><tr><td>33° 56&#x27;50.31&quot;N</td><td>118°18&#x27;30.33&quot;W</td><td>January 1 2007</td></tr><tr><td>33° 57&#x27;17.11&quot;N</td><td>118°17&#x27;55.40&quot;W</td><td>November 20 1988</td></tr><tr><td>33° 56&#x27;41.05&quot;N</td><td>118°19&#x27;2.66&quot;W</td><td>January 30 1988</td></tr><tr><td>33° 56&#x27;19.68&quot;N</td><td>118°18&#x27;15.39&quot;W</td><td>July 11 2003</td></tr><tr><td rowspan="5">Jack the Ripper</td><td>51°31&#x27;12.17&quot;N</td><td>0° 3&#x27;37.34&quot;W</td><td>August 31 1888</td></tr><tr><td>51°31&#x27;13.51&quot;N</td><td>0° 4&#x27;20.70&quot;W</td><td>September 8 1888</td></tr><tr><td>51°30&#x27;49.90&quot;N</td><td>0° 3&#x27;57.16&quot;W</td><td>September 30 1888</td></tr><tr><td>51°30&#x27;49.13&quot;N</td><td>0° 4&#x27;39.64&quot;W</td><td>September 30 1888</td></tr><tr><td>51°31&#x27;7.29&quot;N</td><td>0° 4&#x27;30.32&quot;W</td><td>November 9 1888</td></tr></table>
499
+
500
+ Data collected via GoogleEarth™ and CommunityWalk.
501
+
502
+ # Model Plots for Sutcliffe
503
+
504
+ Here we show our presented functions for distance decay with their plots for Sutcliffe and Chester Turner. These are here to illustrate the differences between the different functions.
505
+
506
+ ![](images/4fcc12b640c0d7107f10934019b6c22204058e4f95fc092b32ef6167e983351c.jpg)
507
+ (g) Criminal Location Probability Plot of Sutcliffe using Linear decay function.
508
+
509
+ ![](images/e14583fd26fd11f3149f95a4ca14e6141235dee7d8d25137b1866c3c0cb446a6.jpg)
510
+ (h) Next Crime Probability Plot of Sutcliffe using Linear decay function.
511
+
512
+ ![](images/456a2206cae475b9036d3a1444b3a4bd4fd1553033d8e7811aa3ed3738a41d41.jpg)
513
+ (i) Criminal Location Probability Plot of Sutcliffe using Negative Exponential decay function. (j) Next Crime Probability Plot of Sutcliffe using Negative Exponential decay function.
514
+
515
+ ![](images/c00929fa158ca1f380a76c07117b95100b9ce21970009549ce593bc7173a7714.jpg)
516
+
517
+ ![](images/edb204df7f9cdc68b784077cd4ad02bd8020c811da47b80dd91472cd887285e1.jpg)
518
+ (k) Criminal Location Probability Plot of Sutcliffe using Truncated Negative Exponential decay function. (l) Criminal Location Probability Plot of Sutcliffe using Truncated Negative Exponential decay function.
519
+
520
+ ![](images/be52f12f3878f0373a8cc1bec634a0b4ce426cb5904c4306f906d970efae7448.jpg)
521
+
522
+ ![](images/229411b6551d4c63109947eecc53a00a2129e0bce80871862b76eb211045ce7c.jpg)
523
+ (m) Criminal Location Probability Plot of Sutcliffe using Plateaued Negative Exponential decay function. (n) Next Crime Probability Plot of Sutcliffe using Plateaued Negative Exponential decay function.
524
+
525
+ ![](images/0f23f3170e42e994780f3858eb4c1e07b6c6cc27fb7b6ab0fde0c90b715819b1.jpg)
526
+
527
+ ![](images/6da5e6b0aa3ee360f3705bb7c7f3b75e9496ded981af0f72f29cb2adc836d8b4.jpg)
528
+ (o) Criminal Location Probability Plot of Sutcliffe using Normal decay function.
529
+
530
+ ![](images/a72136901502f5054d6e0167c4db35be509ac83cfa7194d11b956248563c5b6d.jpg)
531
+ (p) Next Crime Probability Plot of Sutcliffe using Normal function.
532
+
533
+ ![](images/b234c31b7853fdc77671644442582e1a35f451aac5d4b1de7c839b58ccaa7728.jpg)
534
+ (q) Criminal Location Probability Plot of Sutcliffe using Negative Exponential decay function, time weighted.
535
+
536
+ ![](images/3b3c04574b66d3609bb9972f14b0fd9d362e533a46dce9a661da84f88bd6c558.jpg)
537
+ (r) Next Crime Probability Plot of Sutcliffe using Negative Exponential function, time weighted.
538
+
539
+ # Model Plots for Chester Turner
540
+
541
+ ![](images/a752eab91eae1b5be24334656242cd72f24ff201bba6be2a6b13332e13a8378f.jpg)
542
+ (s) Criminal Location Probability Plot of Sutcliffe using Negative Exponential decay function, time weighted. (t) Next Crime Probability Plot of Sutcliffe using Negative Exponential function, time weighted.
543
+
544
+ ![](images/a365643ffe8f37aadf2fbeadab1a79d17c9cf379629a0c46d73752d94c16d25a.jpg)
545
+
546
+ # References
547
+
548
+ Arndt, W. B., Hietpas, T., & Kim, J. (2004). Critical characteristics of male serial murderers. *American Journal of Criminal Justice*, 29(1), 117-131.
549
+ Brantingham, P., & Brantingham, P. (1981). *Environmental Criminology*. Beverly Hills, CA: Sage Publications.
550
+ Gigerenzer, G., & Selten, R. (2001). *Bounded Rationality*. London: MIT Press.
551
+ Hickey, E. W. (2002). Serial murderers and their victims. Belmont, CA: Wadsworth, 3 ed.
552
+ Kent, J. D. (2003). Using Functional Distance Measures When Calibrating Journey-to-Crime Distance Decay Algorithms. Ph.D. thesis, Louisiana State University.
553
+ Laukkanen, M., & Santtila, P. (2006). Predicting the residential location of a serial commercial robber. *Forensic Science International*, 157, 71-82.
554
+ Levine, N. (2006). Crime mapping and the crimestat program. Geographical Analysis, 38(1), 41-56.
555
+ Newton, M. (1990). *Hunting Humans*. Port Townsend, WA: Loompanics Unlimited.
556
+ Rhodes, W., & Conly, C. (1981). Crime and Mobility: An Empirical Study. Environmental Criminology. Prospect Heights, IL: Waveland Press, Inc.
557
+ Santtila, P., Laukkanen, M., & Zappala, A. (2007). Crime behaviors and distance travelled in homicides and rapes. *Journal of Investigative Psychology and Offender Profiling*, 4, 1-15.
558
+ Snook, B., Taylor, P. J., & Bennell, C. (2004). Geographic profiling: The fast, frugal, and accurate way. Applied Cognitive Psychology, 18, 105-121.
MCM/2010/B/8449/8449.md ADDED
@@ -0,0 +1,1597 @@
1
+ # The Hunt for Serial Criminals
2
+
3
+ Team 8449
4
+
5
+ # Contents
6
+
7
+ 0. Abstract
+ 1. Executive Summary
+ 2. Statement of the Problem and Approach - Hunting Serial Criminals
+
+ 2.1 Survey of Previous Research: Environmental Criminology
+ 2.2 Assumptions
+ 2.3 Propositions and Foundation
+
+ 3. Methods
+
+ 3.1 Construction of the Map - Geographical Method
+ 3.2 Static and Dynamic - Risk Intensity Method
+
+ 4. Simulation Results and Discussion
+
+ 4.1 Results of the Geographical Method
+ 4.2 Results of the Risk Intensity, "Static" and "Dynamic", Method
+ 4.3 Discussion
+
+ 4.3.1 Sensitivity and Robustness Testing
+ 4.3.2 Accuracy of the Prediction
+ 4.3.3 Combination of the Two Methods
+
+ 5. Strength and Weakness of the Model
+ 6. Conclusion and Recommendation
+
+ 7. Appendices
+
+ A Bibliography
+ B Data
+ C Code
+
39
+ # 0. Abstract
40
+
41
+ The advent of computers and technological progress has introduced a new stage in the development of criminology. Investigators can now use computational geographic-profiling techniques to determine the movement patterns of their suspects.
42
+
43
+ We propose a model that aims to predict areas with high probability of being the next on the criminal's target list. We have assumed that serial crimes are instrumental rather than expressive, thus ensuring that the criminal follows a predictable pattern of movement. We also assume that this pattern is characterized by a certain stability and continuity which facilitates a correlation with the actions of other criminals in that area.
44
+
45
+ Our model first uses an initial "geographical method" which reduces the areas under consideration based on parameters such as location coordinates, area, population and criminal rate, as well as the history and psychological value variables which are derived from a specific criminal pattern. This input is used to determine the shape of a Gaussian 2D function showing the distribution of the areas with the highest probability of becoming the location of a future crime. We improve these results by using the risk intensity method, a combination of two schemes, a "static" and a "dynamic" one. The static method consists of first generating the risk intensities of different locations based on variables such as crime rates and distances from the anchor point, by using tools such as the distance-decay function. We then assign crime coefficients, which indicate the extent to which the crime can be categorized as murder, rape, arson or robbery. In the dynamic model, we categorize the static parameters into homotypic, heterotypic and cumulative types by computing the mean and covariance matrix of these parameters. We apply different algorithms: logistic regression, linear regression and nearest neighbor algorithm respectively to these types and then weigh them differently to obtain a parameter probability. This is then combined with the results of the static process to generate the probability of a crime at a certain location.
46
+
47
+ We tested our model using examples from different categories of serial crime - robbery, murder, arson - which demonstrated distinct criminal patterns. The surfaces generated using the geographic method and the final predicted probabilities generally agreed with our expectations of areas where the criminal will attack again. The sensitivity test suggested that parameters such as crime rate or population density (area and location) are well accounted for by our model. Small changes in location, however, affected our results to a significant extent, probably because the differences in the coordinates of the locations were not large to begin with.
48
+
49
+ By analyzing the accuracy of our results, we conclude that this model is an efficient way of minimizing the range of possible crime locations. By taking into account all the variables that can be quantified, the geographical and the risk intensity methods achieve their goal of assigning probabilities to high-risk areas. However, evidence such as similarities of a criminal's victims was not taken into consideration. Furthermore, the model can only be applied to criminals who observe a predictable pattern of movement, in spite of the randomness parameters introduced in the model.
50
+
51
+ # 1. Executive Summary
52
+
53
+ The model we propose proves to be a generally efficient tool in criminal investigation. Once a sequence of crimes attributed to the same suspect is given, the model offers a good estimate of the possible areas where the next crime might be committed. This area is determined using an initial geographic method which indicates high-risk areas. The second part of the model (risk intensity method) considers both static variables, such as crime rate or population density of the location, and variables characteristic of the movement pattern of that particular serial criminal. The areas are eventually assigned probabilities based on all the relevant parameters. The area with the highest probability is the one where the future crime is most likely to occur.
54
+
55
+ The tests performed to check that the model works are a good indication of its efficiency. The model was used to predict future crime locations for different serial crime categories, such as murder, arson and robbery; therefore, it could be used in the future for any type of crime of a serial nature. Moreover, the tests covered spatial patterns ranging from offenses occurring in multiple states to crimes happening in different and not necessarily neighboring counties. A potential limitation of the model could be serial crimes committed in different countries, since this possibility was excluded in the assumptions.
56
+
57
+ The model was also tested for its response to small changes in parameters. The result is that these parameters are well taken into account in predicting the place where the next crime is committed. However, law enforcement officers might want to consider other pieces of evidence such as characteristics of the victims which could suggest the places where the criminal prefers to attack. Our model does not take into account such aspects, but it covers all the variables that can be quantified.
58
+
59
+ One situation in which this model cannot output accurate predictions is that of expressive serial crimes, which are defined as more spontaneous and emotional than instrumental crimes. It is not surprising that these crimes are more difficult to predict. Under such conditions, the prediction made by the model is either too broad or fails to fully cover the high-risk area. Therefore, when we deal with expressive serial criminals, we need to consider more parameters.
60
+
61
+ Some degree of randomness already exists in the model; however, we do not expect it to be able to predict the actions of a disorganized serial criminal. The model might not even be able to constrain the range of locations to be considered.
62
+
63
+ We conclude that our model could be a great aid to law enforcement officers in their criminal investigations. Provided the assumptions stated above are satisfied, the model's prediction could narrow the range of locations from, say, 5 locations down to 2 or 3 areas by using the geographical and risk intensity methods. Other pieces of evidence could further refine the estimate of the future crime location, thus simplifying the work of law enforcement officers to a considerable extent.
64
+
65
+ # 2. Statement of the problem and approach - Hunting Serial Criminals
66
+
67
+ Criminology faces a difficult task in today's world. Not only does this field have to help control crime rates, but it also has to deal with serial criminals, a category of offenders who generate tremendous fear in numerous communities and require significant resources and effort from police, courts and prisons. To this end, criminology developed a series of methods called geographic profiling, which uses the locations of a sequence of crimes to determine the most probable area of residence of the offender. The process of crime analysis relied on the traditional method of pin mapping for more than a century. However, since the early 1990s, the increase in the speed of computers has allowed more and more police departments and institutions to make use of the hardware and software that form the basis of crime mapping today [1].
68
+
69
+ While the residence of a serial criminal is important in criminal investigations, a potentially more crucial prediction is the location of the next crime. The problem lies in identifying the spatial patterns of serial criminals and has been the focus of many research studies over time.
70
+
71
+ The results of most of these studies agree on the fact that humans seem to follow predictable patterns of movements. Each individual possesses an awareness space, which includes their home, work, school, shopping areas or the commuting routes between these points. This space contains, for most people, an anchor point, which represents the most important place in a person's spatial life. Research studies have shown that this anchor point is, for the vast majority of people, their residence. It is thus not surprising that, in generating a geographical profile, we keep in mind the offender's anchor point as a point of interest with a potential influence on the prediction of the next crime's location [2].
72
+
73
+ Serial crime is usually defined as crime of a repetitive or serial nature and can be one of the following: serial murder, serial rape, serial arson or even serial robbery. While these offenses share characteristics such as the extent to which they affect communities, they sometimes differ in the behavioral profile of the criminal. It can then be inferred that
74
+
75
+ different types of serial crime may at times generate different geographical profiles in terms of the residence or activity space of a criminal. For example, locations with high population density prove to be more prone to property crimes such as serial robberies. On the other hand, rapists seem to prefer both low-density areas, which are characterized by less surveillance, and high-density areas, where they have a bigger chance of finding a suitable target.
76
+
77
+ # 2.1 Survey of Previous Research: Environmental Criminology
78
+
79
+ 1. Studies in this field focus on spatial patterns in offender and target movement on the basis of broader social routines. This theory was first introduced by professors Paul and Patricia Brantingham at Simon Fraser University in the 1980s and has since shifted the focus of the study of criminology toward the environment and the spatial patterns that influence criminal activity [1]. Their work has dealt with predicting both the most important place in a criminal's life (the anchor point) and the most vulnerable "hot spots" where the next crime could take place.
80
+ 2. The geographic profiling technology was developed by Dr. Kim Rossmo at Simon Fraser University based on the theory of environmental criminology proposed by the Brantinghams [3]. By analyzing journey-to-crime models, this technology uses sets of linked crime locations to indicate possible activity nodes of the criminal and thus make criminal investigations more efficient.
81
+
82
+ # 2.2 Assumptions
83
+
84
+ Several key assumptions were necessary in order to streamline our model:
85
+
86
+ 1. The serial criminal follows a predictable pattern of movement, which identifies their residence as an important spot on the geographical profile. There exists a buffer zone, centered at the criminal's home, in which crimes are less likely to occur because of the risk associated with the proximity of the residence, as well as a distance decay
87
+
88
+ function, meaning that the criminal will not venture too far beyond his awareness space [2]. While we include a randomness variable in our model, it cannot account for the actions of a disorganized criminal.
89
+
90
+ 2. The sequence of crimes under consideration has been attributed to a single suspect using pieces of evidence that the creators of the geographical profile need not necessarily be aware of. This is necessary because otherwise we cannot be sure we are tracking the same criminal and thus the same pattern of movement.
91
+ 3. Criminals whose activity takes place in their residence or in a single familiar place in their awareness space are not taken into consideration in our model. Examples of this include rapists who lure their victims to their homes or nurses who kill their patients at their workplace. The serial crimes with a single crime location are difficult to identify by law enforcement officers and are beyond the scope of the problem.
92
+ 4. Suspects are unlikely to use air travel due to the high chance of being apprehended. This assumption eliminates the possibility of a suspect traveling from one state to another in a matter of hours and influences the method of estimating distances between crimes.
93
+ 5. Country borders cannot be crossed, and state borders are not likely to be crossed if the past crimes were committed within the same state. Although this assumption excludes situations that have been observed in the past, it nonetheless reflects the patterns of the majority of serial crimes and helps predict these cases more accurately.
94
+ 6. The official statistics (population, crime rates, etc.) are accurate and correct enough for the purpose of this study.
95
+ 7. The suspect is not aware of being investigated (or wanted) by the police, nor does he feel the urgency of being tracked, or anything else that might lead him to digress from his usual criminal behavioral patterns. This is an essential assumption in trying to predict his future actions.
96
+
97
+ 8. Serial crimes are instrumental rather than expressive crimes.
98
+
99
+ Definition 1.1 Expressive crimes (or affective crimes) are “more spontaneous, emotional, and impulsive crimes that are done in anger. These include domestic violence, some forms of rape, and assaults” [2].
100
+
101
+ Definition 1.2 Instrumental crimes are crimes committed in order to achieve a goal, such as money, status, or other personal gain [4]. Since expressive crimes are by definition more emotional (and hence less rational) in nature, it is almost impossible that a serial criminal's act is expressive, unless he or she suffers from some mental disorder (we exclude the discussion of such crimes by Assumption 8).
102
+
103
+ # 2.3 Propositions and Foundation
104
+
105
+ # Stability of Criminal Behavior
106
+
107
+ "Insanity is doing the same thing over and over again and expecting different results." Albert Einstein (1879-1955)
108
+
109
+ The intriguing words of wisdom from Einstein suggest a sarcastic yet common truth: whatever a person considers extraordinary about himself can often be routine work in the eyes of others. The same truth applies in the psychological analysis of serial criminals: a criminal might consider what he does creative or different from everyone else, but in reality his crimes are likely to be similar to those committed by other offenders. This form of similarity facilitates our approach. By studying the similarities between a certain criminal's behaviors and those of others, we are able to construct mathematical models that help us predict this behavior.
110
+
111
+ In the sense of criminology, the stability of criminal behavior is defined as “the persistence in a behavior or style of interacting over time. There are two components important to the stability of criminal behavior: time and ‘persistence in a behavior or a style of interacting’.” [5]
112
+
113
+ Indeed, in study after study, "the variable that emerges as the strongest predictor of future criminal behavior is past criminal and delinquent behavior." [6]. There are mainly two types of stability defined within the realm of criminology:
114
+
115
+ 1. Normative Stability: The preservation of a set of individual ranks on a quality within a constant population over a specified amount of time.
116
+ 2. Molar Stability: The persistence of a behavior or behavioral orientation as expressed in the rate of change in that quality for an age-homogenous cohort over a specified period of time.
117
+
118
+ The first type is rather easy to interpret and states the nature of stability in certain, if not all, characteristics of the serial crimes one commits. The second type, on the other hand, considers the often-overlooked situation where the criminal deliberately alters his behavior over a series of crimes. However, the property states that stability still holds in the way the behavior changes. The stability property enables our model to make assumptions about the future behavior of the criminal based on an aggregation of all the previous crimes he or she has committed.
119
+
120
+ # Continuity of Criminal Behavior
121
+
122
+ "You can travel along 10,000 miles and still stay where you are." Harry Chaplin, Sequel
+ 
+ For many people, continuity seems similar to stability. In criminology terms, stability refers to the "relative consistency over time in rankings of delinquency and crime."[6] On the other hand, continuity reflects the "strength of previously achieved states, and therefore the probability of their repetition; it implies sameness, familiarity, and predictability."[7]
123
+
124
+ For example, stability suggests that it is not very likely for a serial burglar with no record of murder to suddenly kill someone, although it is possible for him to develop that taste gradually (as captured by molar stability). Stability also suggests the general consistency of the crimes committed by the same person, while continuity places more emphasis on the sequence in which the crimes are committed. For instance, given a different ordering of the same three serial crimes, we might obtain different predictions of where the next crime is most likely to happen.
125
+
126
+ One example of continuity is the case of Theodore Robert Bundy ("Ted Bundy"), who committed over 30 murders between 1973 and 1978. He began his murders in Washington State, where he went to college, and then moved on to Idaho, Utah and Colorado. In Utah he was noticed by the police, who started to investigate him. In the end he was caught in Florida. From Figure 0 we can see that he followed a clear path from northwest to southeast, which makes his track much easier to predict.
127
+
128
+ ![](images/1a2e056dc9910c58c58e51da5c01d9e3cbd0ac5f47d165e292bd56f1cb03bbb2.jpg)
129
+ Figure 0. Ted Bundy's Continuous Criminal Pattern
130
+
131
+ There are, generally speaking, three types of continuity that can be represented in the behavior of the criminal:
132
+
133
+ 1. Homotypic Continuity: Continuity “over time in the same types of behaviors, such as hitting, kicking, and punching, or traits, such as intelligence or impulsivity.”[6]
134
+ 2. Heterotypic Continuity: Continuity in which "behaviors or traits take different forms over time, but are caused by the same underlying characteristics"[6].
135
+ 3. Cumulative Continuity: Continuity in which someone's earlier social behavior might affect his later behavior, with the influence accumulating over time.
136
+
137
+ In the first continuity type, there are strong correlations between the serial crimes in the criminal's style, object (victims), psychological states, etc. In the second type of continuity, the criminal's style and object (victims) might differ from case to case, but some underlying factors about the crimes he chooses to commit (for example, the kind of crime he usually commits) remain the same. In the third case, the criminal actually goes through a "learning" process in which his experience accumulates.
138
+
139
+ # 3. Methods
140
+
141
+ # 3.1 Construction of the map - Geographical Method
142
+
143
+ The first scheme we use to find the possible crime locations and the possible home location is to construct a map based on the existing crime locations and other characteristics of this particular criminal. The output of this method includes the potential target locations and the possible home location of the criminal.
144
+
145
+ In this construction, our input includes the existing crime locations, described by latitude and longitude, as well as other characteristics of each location, such as population, crime rate, history value, psychological value based on the criminal, and randomness. Definitions of these characteristics and how to calculate them are given in a later section. The locations help us place the centers of the "criminal circles", the areas where the next crime is more likely to happen. Using the other characteristics, we determine the radii of these "criminal circles". With these areas, we can then obtain a set of possible locations where the next crime may happen and the possible home locations of the criminal.
146
+
147
+ We call this method the geographic method since we use geographical ranges to determine the possible target locations. The detailed steps are as follows. First, we locate the existing crime locations on the map; they are going to be the centers of the "criminal circles". Second, we find the longest distance between two crime spots. Third, we determine the radii of the "criminal circles" based on the behavior of the criminal. Fourth, we select the locations
148
+
149
+ within these "criminal circles" and identify them as the potential target locations. Last, we locate the possible home location in the intersection of these criminal circles (the intersection area, being based on the radii, will naturally depend on the parameters that determine those radii).
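The steps above can be sketched in code. In the fragment below, the function names, the planar Euclidean distance in latitude-longitude space, and the explicit list of candidate locations are all illustrative assumptions of ours; the paper itself works with continuous surfaces rather than a discrete candidate set.

```python
import math

def longest_distance(crime_locations):
    """Step 2: longest distance between any two crime spots.

    crime_locations: list of (latitude, longitude) pairs.
    Planar Euclidean distance in degree space is a simplification.
    """
    return max(
        math.dist(a, b)
        for i, a in enumerate(crime_locations)
        for b in crime_locations[i + 1:]
    )

def candidate_locations(crime_locations, radius, candidates):
    """Steps 4-5: keep candidates inside at least one criminal circle
    (potential targets) and flag those inside every circle
    (possible home locations in the intersection)."""
    targets, homes = [], []
    for c in candidates:
        hits = sum(1 for center in crime_locations
                   if math.dist(c, center) <= radius)
        if hits >= 1:
            targets.append(c)
        if hits == len(crime_locations):
            homes.append(c)
    return targets, homes
```

In practice the radius would come from the behavioral parameters discussed below; here it is simply passed in.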
150
+
151
+ The figure below shows the process of this geographical method.
152
+
153
+ ![](images/59d67359b9bfc11cd3c58513eb53350c538cdb74db98dca6e4924663488d8ad1.jpg)
154
+ Figure 1. The Process of the Geographical Method
155
+
156
+ Previous studies have found that time is a commodity, so crimes often occur in nearby areas. The history of serial crimes reveals an inverse relationship between the distance from a previous crime spot and the probability that the next crime happens at that distance. In other words, the probability that a later crime happens in some area becomes lower as the
157
+
158
+ distance from the previous crime location becomes larger [1]. Therefore, we can use a 2D Gaussian function (though the name says 2D, its graph is actually a surface in 3D space) to determine the probability of the next crime happening in an area.
159
+
160
+ # Definition 3.1.1
161
+
162
+ We define the "criminal circles" as 2D Gaussian functions. Therefore, "criminal circles" are not actual 2D circles in some plane, but rather mountain-shaped surfaces in 3D space. The 2D Gaussian function used is
163
+
164
+ $$
165
+ f(x, y) = \alpha e^{-\left(\frac{(x - x_0)^2}{2\sigma_x^2} + \frac{(y - y_0)^2}{2\sigma_y^2}\right)},
166
+ $$
167
+
168
+ where $\alpha$ is the peak height of the Gaussian function (the height at the center location), $(x_0, y_0)$ is the center point, and $\sigma_x$ and $\sigma_y$ are the standard deviations in the x and y directions respectively, which determine how much the Gaussian function spreads out. The height of these Gaussian functions represents the relative probability level of points in the x-y plane (so it is not necessarily between 0 and 1, but rather a number whose value relative to the others determines whether the probability is higher or lower).
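As a minimal numeric sketch, Definition 3.1.1 can be evaluated pointwise; the function name and the sample parameter values are our own illustration, not part of the paper.

```python
import math

def criminal_circle(x, y, alpha, x0, y0, sigma_x, sigma_y):
    """2D Gaussian of Definition 3.1.1: the returned surface height is a
    relative (unnormalized) probability level for the point (x, y)."""
    return alpha * math.exp(-((x - x0) ** 2 / (2 * sigma_x ** 2)
                              + (y - y0) ** 2 / (2 * sigma_y ** 2)))
```

The value at the center $(x_0, y_0)$ equals the peak height $\alpha$ and decays with distance from the center.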
169
+
170
+ # Definition 3.1.2
171
+
172
+ We define $\mathrm{H}$ to be the history value of one location based on the criminal. $\mathrm{H}$ equals the number of times the criminal has committed crimes in that location.
173
+
174
+ # Definition 3.1.3
175
+
176
+ We define Psy to be the psychological value of the criminal. Psy is a number between 0 and 1.
177
+
178
+ $$
179
+ \operatorname{Psy} = 1 - \frac{(\text{number of cities in which the criminal has committed crimes}) - 1}{(\text{number of crimes he has committed}) - 1}.
180
+ $$
181
+
182
+ For example, if some serial killer has killed 10 people and committed all 10 crimes in the same city, his $\mathrm{Psy} = 1 - \frac{1 - 1}{10 - 1} = 1$ . However, if another serial killer has murdered 10 people but committed each of these crimes in a different city, his $\mathrm{Psy} = 1 - \frac{10 - 1}{10 - 1} = 0$ . Therefore we can see that the higher the value is, the more the criminal prefers to commit crimes in the
183
+
184
+ same location.
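Definition 3.1.3 and the worked examples above translate directly into code; the function name `psy` is our own, and we assume at least two crimes so the denominator is nonzero.

```python
def psy(num_cities, num_crimes):
    """Psychological value of Definition 3.1.3, a number between 0 and 1.

    num_cities: number of cities in which the criminal has committed crimes.
    num_crimes: total number of crimes committed (must be >= 2, otherwise
    the ratio is undefined)."""
    return 1 - (num_cities - 1) / (num_crimes - 1)
```

`psy(1, 10)` reproduces the first example (always the same city) and `psy(10, 10)` the second (a different city every time).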
185
+
186
+ # Definition 3.1.4
187
+
188
+ We define $\mathsf{c} = (\mathsf{La}, \mathsf{Lo}, \mathsf{A}, \mathsf{P}, \mathsf{Cr}, \mathsf{H}, \mathsf{Psy})$ , where $\mathsf{c}$ is the set of parameters associated with a previous crime location, $\mathsf{La}$ is the latitude of the location, $\mathsf{Lo}$ is the longitude of the location, $\mathsf{A}$ is the total area of the location, $\mathsf{P}$ is the population of the location, $\mathsf{Cr}$ is the crime rate of the location, $\mathsf{H}$ is the history value of the location based on the criminal, and $\mathsf{Psy}$ is the psychological value of the criminal.
189
+
190
+ The first two parameters in c, La and Lo, determine the centers of these "criminal circles". The centers of the criminal circles are simply the previous crime locations, since these are the places where the crimes were committed. Therefore,
191
+
192
+ $$
193
+ \text{Center} = (x_0, y_0) = (\mathrm{La}, \mathrm{Lo})
194
+ $$
195
+
196
+ One thing we should notice is that the longitude of a location in the western hemisphere increases from right to left instead of left to right. We should take this into consideration when we graph the Gaussian function, in order to make it agree with the real positions of these spots on the geographical map.
197
+
198
+ The Gaussian function then becomes
199
+
200
+ $$
201
+ f(x, y) = \alpha e^{-\left(\frac{(x - \mathrm{La})^2}{2\sigma_x^2} + \frac{(y - \mathrm{Lo})^2}{2\sigma_y^2}\right)}
202
+ $$
203
+
204
+ Next, we need to figure out how to determine the height of the Gaussian function.
205
+
206
+ In high population density areas, due to congestion, people care less about others, and most residents are usually people with low incomes and low social status. As a result, empirical data show that the rates of every category of violent crime (serial crimes, as mentioned in the Introduction, include serial murders, serial rapes, serial arson, and some serial robberies, and are all violent crimes) are higher than in other areas [1].
207
+
208
+ # Statement 3.1.5
209
+
210
+ The probability that one particular violent crime happened in one location depends positively
211
+
212
+ on the population density of that location.
213
+
214
+ Therefore, in our model, we use the population density $= \frac{P}{A}$ and make sure it is positively related to the probability of the next crime.
215
+
216
+ Another parameter that affects the height of the Gaussian function is the general crime rate. The population density should carry some weight relative to the general crime rate, since population density usually serves as a parameter in determining the rate of violent crimes.
217
+
218
+ Thus, we set the height of the Gaussian function, $\alpha = \frac{\mathrm{P}}{\mathrm{A}}\times \mathrm{Cr}$ . Then the Gaussian function becomes,
219
+
220
+ $$
221
+ f(x, y) = \frac{P}{A} \times \mathrm{Cr} \times e^{-\left(\frac{(x - \mathrm{La})^2}{2\sigma_x^2} + \frac{(y - \mathrm{Lo})^2}{2\sigma_y^2}\right)}
222
+ $$
223
+
224
+ Our next step is therefore to determine how spread-out the Gaussian function should be.
225
+
226
+ # Assumption 3.1.6
227
+
228
+ The behavior of one particular criminal is either to commit crimes in one or several locations familiar to him or to commit crimes in random locations. In other words, the criminal either has "preferred" locations (locations where he always commits crimes) or prefers to change locations every time (he has no "preferred" locations).
229
+
230
+ With this assumption, we can then use the history value of the location and the psychological value of the criminal to construct the standard deviations of the Gaussian function.
231
+
232
+ Since Psy measures whether or not the criminal has a "preferred" location (its value ranges from 0 to 1), and H measures how many times the criminal has committed crimes in this city, these parameters can be used to determine the extent to which the Gaussian function spreads out. If the criminal prefers to visit the same place and has visited it several times, there should be a higher probability that he will come back again. If the criminal prefers to visit the same place but rarely visited this
233
+
234
+ location in the past, there should be a lower probability that he will commit his next crime in this location. If the criminal acts randomly (he prefers not to revisit the same place) and has visited some location many times, he may not want to come back to it again. If the criminal acts randomly and has visited some place only once or twice, he probably will come back.
235
+
236
+ As a result, we can then express the standard deviation as the reciprocal of the product of Psy and H. Also, without loss of generality, $\sigma_{\mathrm{x}} = \sigma_{\mathrm{y}}$ . Therefore, we have
237
+
238
+ $$
239
+ \sigma_ {\mathrm {x}} = \sigma_ {\mathrm {y}} = \frac {1}{\mathrm {P s y} \times \mathrm {H}}
240
+ $$
241
+
242
+ and the Gaussian function turns out to be
243
+
244
+ $$
245
+ \begin{aligned} f(x, y) &= \frac{P}{A} \times \mathrm{Cr} \times e^{-\left(\frac{(x - \mathrm{La})^2}{2/(\mathrm{Psy} \times \mathrm{H})^2} + \frac{(y - \mathrm{Lo})^2}{2/(\mathrm{Psy} \times \mathrm{H})^2}\right)} \\ &= \frac{P}{A} \times \mathrm{Cr} \times e^{-\left(\frac{(x - \mathrm{La})^2 + (y - \mathrm{Lo})^2}{2/(\mathrm{Psy} \times \mathrm{H})^2}\right)} \end{aligned}
246
+ $$
247
+
248
+ From previous studies, we know that the spatial mean and standard distance of the crime sites can be used to predict the location of the next crime, as the next crime is likely to fall within this distance. Thus, in the last step, we add this weight to the Gaussian function to make it more accurate.
249
+
250
+ # Statement 3.1.7
251
+
252
+ The spatial mean and standard distance of the crime sites in a connected series are used to establish the most probable region for the next offense occurrence. [2]
253
+
254
+ Then we need to add more weight to the areas within the average criminal distance. Let $\bar{\mathbf{r}}$ denote the average criminal distance; our model then becomes:
255
+
256
+ $$
257
+ f(x, y) = \begin{cases} \frac{P}{A} \times \mathrm{Cr} \times e^{-\left(\frac{(x - \mathrm{La})^2 + (y - \mathrm{Lo})^2}{2/(\mathrm{Psy} \times \mathrm{H})^2}\right)} \times 1.1 & \text{when } |x - \mathrm{La}| < \bar{r} \text{ and } |y - \mathrm{Lo}| < \bar{r} \\ \frac{P}{A} \times \mathrm{Cr} \times e^{-\left(\frac{(x - \mathrm{La})^2 + (y - \mathrm{Lo})^2}{2/(\mathrm{Psy} \times \mathrm{H})^2}\right)} & \text{otherwise} \end{cases}
258
+ $$
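Putting Definitions 3.1.2-3.1.4 and the weighting rule together, the surface for one previous crime location can be evaluated as below. The function name and the parameter values used in testing are purely illustrative; note that Psy must be nonzero for $\sigma = 1/(\mathrm{Psy} \times \mathrm{H})$ to be defined.

```python
import math

def crime_surface(x, y, P, A, Cr, psy_value, H, La, Lo, r_bar):
    """Weighted Gaussian surface for one previous crime location.

    P/A * Cr gives the peak height; sigma_x = sigma_y = 1/(Psy * H)
    controls the spread; the 1.1 factor boosts points whose coordinates
    both lie within the average criminal distance r_bar of (La, Lo)."""
    sigma = 1.0 / (psy_value * H)
    base = (P / A) * Cr * math.exp(
        -((x - La) ** 2 + (y - Lo) ** 2) / (2 * sigma ** 2))
    if abs(x - La) < r_bar and abs(y - Lo) < r_bar:
        return base * 1.1
    return base
```

Summing such surfaces over all previous crime locations yields the overall relative-probability map of the geographic method.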
259
+
260
+ The serial murderer usually premeditates his crimes, often fantasizing about and planning the crime in every aspect, with the possible exception of the specific victim. This type of killer needs a lot of time to plan the next crime and is more selective the next time he commits a
261
+
262
+ crime. Serial murders usually have an emotional cooling-off period between homicides, which can last days, weeks, or months. Multiple murders that happen in just one night or in one week are not considered serial murders, since the criminals lack the emotional cooling-off period. The choices of victims are random, and the crime locations are usually the same place or nearby locations [8].
263
+
264
+ Serial robberies are usually armed bank or shop robberies which, like serial murders, require careful planning. For serial rapes, the planning may not be as careful as for serial murders, but the criminals require an emotional cooling-off period as well. Serial arson usually involves an emotional cooling-off period too.
265
+
266
+ Therefore, we can see that cooling-off is a substantial characteristic of serial criminals. Due to the psychological needs associated with it, time intervals between serial crimes are not negligible. We can then assume that it is likely that the serial criminals will go back to their anchor point after one crime and depart from this point for the next crime as well.
267
+
268
+ Thus, we have the corollary below, deduced from Statement 3.1.7.
269
+
270
+ # Corollary 3.1.8
271
+
272
+ We can use the spatial mean and standard distance of the crimes from the home location to determine the possible target locations for the next crime.
273
+
274
+ As a result of Corollary 3.1.8, we know that the next crime is most likely to happen within the average crime distance of this anchor point (usually the home). So the area from the home center out to the average criminal distance should be weighted more than the others. However, we also need to consider a buffer zone, defined as a small area around the criminal's home location in which the criminal prefers not to commit any crime. Since the buffer zone is really small, we simply ignore it in this method. Therefore, if we denote the home location by $(\mathbf{x}^*, \mathbf{y}^*)$ , the area $(\mathbf{x} - \mathbf{x}^*)^2 + (\mathbf{y} - \mathbf{y}^*)^2 \leq R^2$ should be weighted more in the next-target probability, where $R$ is the range we should use, based on $\bar{\mathbf{r}}$ (some function of $\bar{\mathbf{r}}$ ). We can use this as a "learning parameter": we do not use it when building the model to find the next possible locations. Instead, we find $R$ 's value each time after the criminal is caught and his home location is found. After many times
275
+
276
+ of application, we can then obtain a function of $\mathbf{R}$ in terms of $\bar{\mathbf{r}}$ , and $\mathbf{R}$ will then help us in future predictions.
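One simple way to realize this learning parameter is a least-squares fit of a proportional relation $R = k \bar{r}$ through the origin, using $(\bar{r}, R)$ pairs collected from solved cases. The proportional form and the function names are our own assumptions; the paper only states that $R$ is some function of $\bar{r}$.

```python
def fit_ratio(r_bars, Rs):
    """Least-squares fit of R = k * r_bar through the origin.

    Minimizing sum_i (R_i - k * r_i)^2 over k gives the closed form
    k = sum(r_i * R_i) / sum(r_i^2)."""
    return sum(r * R for r, R in zip(r_bars, Rs)) / sum(r * r for r in r_bars)

def predict_R(k, r_bar):
    """Predicted weighting range R for a new case with average distance r_bar."""
    return k * r_bar
```

As more cases are solved, the fitted ratio `k` is simply recomputed over the enlarged data set.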
277
+
278
+ # 3.2 Static and Dynamic - The Risk Intensity Method
279
+
280
+ In the first part, we greatly simplified the problem by narrowing down the target areas from a general set of locations and by reducing the size of each unit (atomic) area we focus on. In the second part, we synthesize the prediction results from two perspectives (two sources of information):
281
+
282
+ 1. The properties of each atomic area (whether it is a city, a county, or a region in a larger metropolitan area), which were constructed by the geographical method. These properties include the area's statistical data for the parameters relevant to our analysis, as well as the geographical information of and between these locations.
283
+ 2. All the information from the previous records of the serial crimes, including the types of crimes, the locations, and information about the victims.
284
+
285
+ Predictions about the behavior of human beings can be very hard to make due to human nature. However, as mentioned previously in the paper, there is a substantial body of criminological research on the stability and continuity of serial crimes, which facilitates our modeling. After gathering the information from the two sources mentioned above, we take two approaches to the final result:
286
+
287
+ 1. The statistics (parameters) of the locations visited by the criminal provide us with plenty of useful information about the criminal's patterns. We can use this information to characterize the criminal's pattern and predict the probability of the next crime happening in a particular location by examining how well that location's parameters agree with the pattern we observed in the criminal's previous crimes.
288
+ 2. Each criminal, no matter how different from other criminals, has some correlation with other criminals who commit the same kind of crimes in the same location. Therefore it is sensible to calculate the risk intensity of all the possible locations we
289
+
290
+ are interested in and correlate them with the criminal's tendency to commit a certain type of crime in the area to generate the probability for the next crime to occur in that location.
291
+
292
+ At last, combine the results of these two analyses to synthesize the final probability for the next crime to happen in any possible location determined in geographical method.
293
+
294
# How to find the pattern of a serial criminal?

According to our assumptions of stability and continuity, the suspect's crimes exhibit certain perceptible similarities. Since these similarities usually fall into one or more categories of crimes, it is sensible to interpret each crime as a mixture of different types of "ideal" crimes, where each "ideal" crime exclusively represents one specific type (murder, rape, etc.). In the real world, however, crimes rarely consist of only one of these "ideal" categories; for example, murder often comes along with rape or burglary. Thus we need to reconstruct the serial crimes committed by a criminal by matching them against the typical "ideal" crimes.

Categorizing is important not only in deciding which types of crimes the criminal tends to commit, but also in predicting the sequence of the crimes (specifically the next crime). From the categories of the continuity property we can see that there are three basic types of continuous sequences of serial crimes: homotypic, heterotypic, and cumulative. Therefore we also need an "ideal" model for each of these types. Here we are not simply classifying the entire series of crimes into one of these models; instead, we fit the different characteristics related to the crimes into those model types.

Another aspect of the analysis of the previous crimes in the series is more important than any static variable of the locations: the type of crimes committed by the suspect. Oftentimes we observe rapes that come together with murders, burglaries that come together with murders, and so on. Figure 2 illustrates the design flow of this part of the algorithm.

![](images/78969e729713c5a0544d68d8fe77df8dd694178d0a671ce166444f8f248f5a2b.jpg)
Figure 2. The Process of the Risk Intensity Method
# Setting Up

In order to set up the model, we need to introduce certain tools that will help us.

1. $\mathbf{L} = \{\mathbf{L}_1, \mathbf{L}_2, \mathbf{L}_3, \dots, \mathbf{L}_k\}$ is the given set of spots (areas, cities, etc.) of the unit (atomic) regions of previous and next possible crime locations in the graph constructed in the geographical method.
2. $\mathbf{S} = \{\mathbf{S}_1, \mathbf{S}_2, \mathbf{S}_3, \dots, \mathbf{S}_n\}$ is the given set of spots of the unit regions where the previous crimes were committed. The locations are stored in the order of the sequence of the crimes, and repetitions are allowed.
3. For each element $S_i$ in $S$, there exists a vector $P_i = \{P_{i1}, P_{i2}, P_{i3}, \ldots, P_{im}\}$ of parameters of normalized variables $P = \{P_1, P_2, P_3, \ldots, P_m\}$ related to each element of $S$ (each atomic location), where the variables are the statistical data of that location that are of interest to us. All the variables are normalized so that

$$
0 \leq \mathrm{P}_{\mathrm{ij}} \leq 1 \quad (\mathrm{i} \in [1, \mathrm{n}], \mathrm{j} \in [1, \mathrm{m}])
$$

4. For each element $S_i$ in $S$, there exists a vector $V_i = \{V_{i1}, V_{i2}, V_{i3}, V_{i4}\}$ in which $V_{ij}$ records the number of victims of the $j^{th}$ type of crime in the $i^{th}$ location in the serial crimes.
5. $\mathrm{D} = \{\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3, \dots, \mathrm{D}_{\mathrm{n}-1}\}$ is the set of distances between adjacent crime locations. We also calculate $\mathrm{E_d} = \operatorname{mean}\{\mathrm{D}\}$ and $\mathrm{SD_d}$, the standard deviation of the distances in $\mathrm{D}$.
6. $\mathrm{DL} = \{\mathrm{DL}_1, \mathrm{DL}_2, \mathrm{DL}_3, \ldots, \mathrm{DL}_k\}$, where $\mathrm{DL}_i$ is the distance from location $L_i$ in $L$ to the last visited crime location.
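The distance quantities in items 5 and 6 are straightforward to compute. The sketch below (the paper's implementation is in Matlab; this is an illustrative Python version, and the planar coordinates stand in for real geographic data) builds $\mathrm{D}$, $\mathrm{E_d}$ and $\mathrm{SD_d}$ from an ordered list of crime coordinates:

```python
import math
import statistics

def setup_distances(crime_coords):
    """Compute the adjacent-crime distance set D, its mean E_d and its
    standard deviation SD_d (item 5 of the setup). crime_coords is the
    ordered sequence S of crime locations as (x, y) pairs."""
    D = [math.dist(a, b) for a, b in zip(crime_coords, crime_coords[1:])]
    E_d = statistics.mean(D)
    SD_d = statistics.pstdev(D)
    return D, E_d, SD_d

# Hypothetical three-crime sequence
D, E_d, SD_d = setup_distances([(0, 0), (3, 4), (3, 10)])
```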
# Stage One: Calculating the Parameter Probability

# 1. Categorizing the variables

From the foundation part, we can categorize the static parameters into three categories: those that stay stable over the course of the serial crime; those that change at a constant rate over the course of the serial crime; and those that do not follow any rules suggested by criminology. The first type is static, which we can treat as constants; the second type is cumulative, which mostly changes in a linear fashion. The third type is rather random, yet we can still roughly determine the trend of these variables by training on the data.

To categorize the variables, we need to calculate the mean vector and the covariance matrix of the variables:

$$
\mathrm{E} = \{\mathrm{E}_1, \mathrm{E}_2, \mathrm{E}_3, \dots, \mathrm{E}_{\mathrm{m}}\},
$$

where $\mathrm{E}_{\mathrm{i}}$ is the average of each variable over the entire set S for $1 \leq \mathrm{i} \leq \mathrm{m}$:

$$
\mathrm{E}_{\mathrm{i}} = \frac{\mathrm{P}_{1\mathrm{i}} + \mathrm{P}_{2\mathrm{i}} + \mathrm{P}_{3\mathrm{i}} + \cdots + \mathrm{P}_{\mathrm{ni}}}{\mathrm{n}}
$$

and a covariance matrix $\mathrm{C} = \{\mathrm{C}_1, \mathrm{C}_2, \mathrm{C}_3, \dots, \mathrm{C}_{\mathrm{m}}\}$, where

$$
\mathrm{C}_{\mathrm{i}} = \{\mathrm{C}_{\mathrm{i}1}, \mathrm{C}_{\mathrm{i}2}, \mathrm{C}_{\mathrm{i}3}, \dots, \mathrm{C}_{\mathrm{im}}\} \quad (\mathrm{i} \in [1, \mathrm{m}])
$$

and $\mathrm{C}_{\mathrm{ij}} = \mathrm{Cov}(\mathrm{P_i}, \mathrm{P_j})$ for $\mathrm{i}, \mathrm{j} \in [1, \mathrm{m}]$.

We denote this matrix $\mathbf{C}$ as the main covariance matrix.

The most useful information in the main covariance matrix is its diagonal, whose entries are the variances of the variables we are interested in; their square roots give the standard deviations.

1. Denote the vector $\mathrm{SD} = \{\sqrt{\mathrm{C}_{11}}, \sqrt{\mathrm{C}_{22}}, \sqrt{\mathrm{C}_{33}}, \dots, \sqrt{\mathrm{C}_{\mathrm{mm}}}\}$, the standard deviations of the variables according to the normalized data. We take all the elements in SD and extract those with values $\leq \mathrm{r}_1$, where $\mathrm{r}_1$ is defined as the upper limit of the 10% smallest elements in SD, to form a new array $\mathrm{Ps} = \{\mathrm{Ps}_1, \mathrm{Ps}_2, \mathrm{Ps}_3, \dots, \mathrm{Ps}_{\mathrm{ls}}\}$, where $\mathrm{ls}$ is the length of the array Ps. This is the collection of all the variables that stay stable over the course of the serial crime. For each element in Ps, we store the average of that variable: $\overline{\mathrm{Ps_i}}$.
2. After excluding the stable elements in SD from set P, perform linear regression on the rest of the variables in S. Those with residual norm $\leq \mathrm{r}_2$ (where $\mathrm{r}_2$ is defined as the upper limit of the $40\%$ smallest elements in P) are recorded in the vector $\mathrm{Pc} = \{\mathrm{Pc}_1, \mathrm{Pc}_2, \mathrm{Pc}_3, \dots, \mathrm{Pc}_{\mathrm{lc}}\}$, where $\mathrm{lc}$ is the length of the array Pc. This is the collection of all the variables whose values accumulate in a linear fashion over the course of the crimes. Besides the vector Pc, for each element $\mathrm{Pc_i}$ in Pc we maintain the linear fit function of that variable:

$$
\mathrm{f(n)} = \alpha \times \mathrm{n} + \epsilon
$$

where $\alpha$ is the coefficient, $n$ is the rank of the crime in the sequence, and $\epsilon$ is the correction term. We also store the standard deviations of the cumulative variables:

$$
\mathrm{SD_c} = \{\mathrm{SD}_1, \mathrm{SD}_2, \mathrm{SD}_3, \dots, \mathrm{SD}_{\mathrm{lc}}\}.
$$

3. After the first two types of variables are determined, store the rest of the variables in $\mathrm{Pr} = \{\mathrm{Pr}_1, \mathrm{Pr}_2, \mathrm{Pr}_3, \dots, \mathrm{Pr}_{\mathrm{lr}}\}$, where $\mathrm{lr}$ is the length of the array $\mathrm{Pr}$.
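The first categorization step (the 10% standard-deviation rule) can be sketched as follows. This is an illustrative Python version, not the authors' Matlab code; the linear-regression split of the remaining variables (step 2) is omitted for brevity, and `stable_frac` is a parameter standing in for the paper's 10% rule:

```python
import statistics

def categorize_variables(P, stable_frac=0.10):
    """Split the columns of the n-by-m normalized parameter matrix P into
    'stable' variables (smallest standard deviations) and the rest.
    Returns a dict mapping each stable column index to its mean (the
    stored averages Ps_i) and the list of remaining column indices."""
    m = len(P[0])
    sd = [statistics.pstdev(col) for col in zip(*P)]
    order = sorted(range(m), key=lambda j: sd[j])          # most stable first
    n_stable = max(1, int(round(stable_frac * m)))          # at least one
    stable = set(order[:n_stable])
    Ps = {j: statistics.mean(col)
          for j, col in enumerate(zip(*P)) if j in stable}
    rest = [j for j in range(m) if j not in stable]
    return Ps, rest

# Hypothetical data: column 0 is perfectly stable across the three crimes
Ps, rest = categorize_variables([[0.5, 0.1, 0.9],
                                 [0.5, 0.3, 0.2],
                                 [0.5, 0.9, 0.4]])
```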
# 2. Calculate the Parameter Probability based on the patterns of variables

Now that we have categorized the three types of variables, we can make predictions of their future values based on their types. In predicting the next possible location of the crime, each candidate location can either have the next crime happen there or not.

1. For each location, the determination can therefore be viewed as a Bernoulli trial for the stable type of variables, since these variables usually have a fixed relationship with the probability of the occurrence of the crime. Therefore we apply a Logistic Regression process to the static variables stored in Ps.

Denote $z = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k$, where $x_1, x_2, \ldots, x_k$ are the stable variables in Ps, and

$$
\beta_{\mathrm{i}} = \frac{1}{\mathrm{SD}(\mathbf{x}_{\mathrm{i}})} \quad (\mathrm{i} \in [1, \mathrm{ls}])
$$

so that the greater the stability of $\mathbf{x_i}$, the bigger $\beta_{\mathrm{i}}$ will be. $\beta_0$ is defined as:

$$
\beta_0 = -(\beta_1 \overline{\mathbf{x}}_1 + \beta_2 \overline{\mathbf{x}}_2 + \dots + \beta_{\mathrm{k}} \overline{\mathbf{x}}_{\mathrm{k}})
$$

Then, following Logistic Regression, define:

$$
\mathrm{f(z)} = \frac{1}{1 + e^{-z}}
$$

We calculate $f(z)$ for all the atomic regions determined in the geographical part, using their corresponding variables in $P_i$, and store the values in the new set $Lz = \{Lz_1, Lz_2, Lz_3, \ldots, Lz_k\}$. We then normalize the coefficients in $Lz$ such that:

$$
\sum_{\mathrm{i}=1}^{\mathrm{k}} \mathrm{Lz_i} = 1
$$
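The logistic scoring step above can be sketched as follows (an illustrative Python version under the paper's choices $\beta_i = 1/\mathrm{SD}(x_i)$ and $\beta_0$ centred at the observed means; function and argument names are ours):

```python
import math

def parameter_scores(locations, means, sds):
    """Logistic score f(z) for each candidate location, normalized so
    the scores sum to 1 (the set Lz). `locations` holds each location's
    stable-variable vector; `means`/`sds` are the per-variable averages
    and standard deviations from the criminal's past crimes."""
    betas = [1.0 / s for s in sds]                       # beta_i = 1 / SD(x_i)
    beta0 = -sum(b * m for b, m in zip(betas, means))    # centres z at the means
    raw = []
    for x in locations:
        z = beta0 + sum(b * xi for b, xi in zip(betas, x))
        raw.append(1.0 / (1.0 + math.exp(-z)))           # f(z) = 1/(1+e^-z)
    total = sum(raw)
    return [v / total for v in raw]                      # normalize: sum = 1

# Hypothetical one-variable example: the second location matches less well
scores = parameter_scores([[0.5], [0.7]], means=[0.5], sds=[0.1])
```

A location whose stable variables sit exactly at the criminal's observed means gets $z = 0$ and raw score $0.5$; deviations push the score away from that value before normalization.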
2. For the cumulative variables, the best prediction of a certain variable is determined by its corresponding linear function. Hence for a variable $\mathrm{Pc_i}$ in $\mathrm{Pc}$, supposing the number of known crimes is $\mathrm{N}$, the predicted mean is

$$
\mathrm{Ec_i} = \mathrm{f(N)} = \alpha \times \mathrm{N} + \epsilon
$$

Then, for the same reason as explained above, the determination can be viewed as a Bernoulli trial. Therefore we model the probability of the crime happening in location $\mathrm{Lc_i}$, as determined from variable $\mathrm{Pc_i}$, with a normal distribution:

$$
\mathrm{Lc_i} \sim \mathrm{No}(\mathrm{Ec_i}, \mathrm{SDc_i}), \quad (\mathrm{i} \in [1, \mathrm{lc}])
$$

As in the previous case, we normalize the coefficients in Lc such that:

$$
\sum_{\mathrm{i}=1}^{\mathrm{k}} \mathrm{Lc_i} = 1
$$

3. For the random variables, we know from the continuity property that there are still inner connections between them and the crime probability. Here we use the nearest neighbor algorithm to determine the best possible distribution of the same type of variables over all possible locations in $L$. Since $\mathrm{P_i} \in [0,1]$ in $P$, we can divide the interval $[0,1]$ evenly into ten segments, take the 10 most recent cases of the variable in $\mathrm{Pr}$, and place the prediction in the most populated segment of $[0,1]$. If a tie occurs, we take the average of the tied segments as the prediction to retrieve the mean $\mathrm{Er_i}$. Again, for the same reason explained earlier, the determination is still a Bernoulli trial, which we model with a normal distribution (storing the values in $\mathrm{Lr}$):

$$
\mathrm{Lr_i} \sim \mathrm{No}(\mathrm{Er_i}, 0.5), \quad (\mathrm{i} \in [1, \mathrm{lr}])
$$

Similarly, we normalize the coefficients in $\mathrm{Lr}$ such that:

$$
\sum_{\mathrm{i}=1}^{\mathrm{k}} \mathrm{Lr_i} = 1
$$

Since the stable and cumulative variables are much more precise in their predictions, they are weighted more heavily in the final calculation. Therefore, for every possible location $\mathrm{L_i}$ in L, we calculate the Parameter Probability $\mathrm{PP_i}$:

$$
\mathrm{PP_i} = \mathrm{Lz_i} \times 45\% + \mathrm{Lc_i} \times 45\% + \mathrm{Lr_i} \times 10\%
$$
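The final weighting of the three normalized per-location vectors (stable, cumulative, random) is a one-liner; a small sketch, with the 45/45/10 weights taken from the formula above:

```python
def parameter_probability(Lz, Lc, Lr):
    """PP_i = 0.45*Lz_i + 0.45*Lc_i + 0.10*Lr_i for each location.
    Because each input vector sums to 1 and the weights sum to 1,
    the resulting PP vector also sums to 1."""
    return [0.45 * z + 0.45 * c + 0.10 * r for z, c, r in zip(Lz, Lc, Lr)]

# Hypothetical two-location example
pp = parameter_probability([0.5, 0.5], [0.2, 0.8], [0.9, 0.1])
```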
# Stage Two: Calculating the Risk-measured Probability

Besides calculating the deterministic statistical data of the locations, it is equally important to find patterns within the crimes of the series themselves. One important measurement is the type of crimes committed in the series and the number of victims in each crime. From the previous analysis we found that criminals, no matter how different they try to be from others, share some basic similarities with those who commit the same type of crimes. Therefore we can decompose the criminal into a composite of several types of "ideal" criminals, each purely dedicated to one type of crime.

Also, since the criminal may gradually develop a "taste" for a certain kind of crime, the crimes that happened more recently carry more significance than those that happened earlier. According to our previous definition, for location $S_i$ in set S there exists a vector $V_i$ that records the number of victims of each type of crime in that event. We therefore create another vector $\mathrm{C} = \{\mathrm{C}_1, \mathrm{C}_2, \mathrm{C}_3, \mathrm{C}_4\}$, where $\mathrm{C}_1$ is the coefficient of murder, $\mathrm{C}_2$ of rape, $\mathrm{C}_3$ of arson and $\mathrm{C}_4$ of burglary. Then, for a decay rate $r$,

$$
\mathrm{C}_1 = \mathrm{r}^{(\mathrm{n}-1)} \mathrm{V}_{11} + \mathrm{r}^{(\mathrm{n}-2)} \mathrm{V}_{21} + \mathrm{r}^{(\mathrm{n}-3)} \mathrm{V}_{31} + \dots + \mathrm{r} \mathrm{V}_{(\mathrm{n}-1)1} + \mathrm{V}_{\mathrm{n}1}
$$

Similarly,

$$
\mathrm{C}_2 = \mathrm{r}^{(\mathrm{n}-1)} \mathrm{V}_{12} + \mathrm{r}^{(\mathrm{n}-2)} \mathrm{V}_{22} + \mathrm{r}^{(\mathrm{n}-3)} \mathrm{V}_{32} + \dots + \mathrm{r} \mathrm{V}_{(\mathrm{n}-1)2} + \mathrm{V}_{\mathrm{n}2}
$$

$$
\mathrm{C}_3 = \mathrm{r}^{(\mathrm{n}-1)} \mathrm{V}_{13} + \mathrm{r}^{(\mathrm{n}-2)} \mathrm{V}_{23} + \mathrm{r}^{(\mathrm{n}-3)} \mathrm{V}_{33} + \dots + \mathrm{r} \mathrm{V}_{(\mathrm{n}-1)3} + \mathrm{V}_{\mathrm{n}3}
$$

$$
\mathrm{C}_4 = \mathrm{r}^{(\mathrm{n}-1)} \mathrm{V}_{14} + \mathrm{r}^{(\mathrm{n}-2)} \mathrm{V}_{24} + \mathrm{r}^{(\mathrm{n}-3)} \mathrm{V}_{34} + \dots + \mathrm{r} \mathrm{V}_{(\mathrm{n}-1)4} + \mathrm{V}_{\mathrm{n}4}
$$

After calculating these coefficients, we can calculate the risk of the next crime according to the different weights of $\mathrm{C}_1$, $\mathrm{C}_2$, $\mathrm{C}_3$ and $\mathrm{C}_4$.
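The recency-weighted coefficients above can be sketched compactly (an illustrative Python version; the decay rate `r` is a hypothetical choice, and `V` is the n-by-4 victim-count matrix from the setup):

```python
def crime_type_coefficients(V, r=0.8):
    """Crime-type coefficients C_j (murder, rape, arson, burglary):
    C_j = sum over crimes i of r^(n-i) * V_ij, so the most recent crime
    (i = n) carries full weight and earlier crimes are discounted."""
    n = len(V)
    return [sum(r ** (n - 1 - i) * V[i][j] for i in range(n))
            for j in range(4)]

# Hypothetical two-crime series: one murder victim, then two burglary victims
C = crime_type_coefficients([[1, 0, 0, 0],
                             [0, 0, 0, 2]], r=0.5)
```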
# 1. Generating the Risk Intensities

The second part of the model tries to restrict the areas of interest identified by the geographical method by assigning a risk intensity to each of them. These risks are static in that they depend only on the crime type and the characteristics of the locations under consideration, not on the criminal's pattern (see Figure 3).

![](images/0115ca6a2b2cd08ca91ba868c1bec4d018d9a8d4324b5f081eba72bbe1df37e1.jpg)
Figure 3. The process of the static method

The risk intensities will further translate into the probability of these areas becoming the target of the next crime. In order to identify the risk associated with a particular region, we need to find the variables that significantly influence this risk. Different variables characteristic of the location under consideration are used according to the nature of the crime. These variables, along with our assumptions, are listed below.

# 1. Murder

- Murder rate
- Economic status: communities with the highest levels of crime also have the highest rates of poverty. There is no suggestion that poverty in itself causes crime, but inequality does. Due to the limited data available on this topic, we consider this aspect to already be reflected in the murder rate.
- Ethnic/racial heterogeneity: due to the limited statistics available on this subject, we consider this aspect to be associated with other factors, such as income or unemployment rate, which are already reflected in the murder rate.
- Distance from anchor point: for murder, trends in serial criminal activity show the need for a buffer zone, because we are not dealing with murders committed exclusively at home or at the criminal's anchor point.

The risk intensity is thus defined as:

$$
\sigma_{murder} = e^{mr} + dec,
$$

where $\sigma_{\text{murder}}$ is the risk intensity associated with murder,

$\mathrm{mr} =$ murder rate of the location, and

$\mathrm{dec} =$ distance-decay function defined as:

$$
dec = \left\{ \begin{array}{ll} 0, & \text{if } d \text{ is within a small range (buffer zone)} \\ \dfrac{\bar{r}}{\sqrt{d}}, & \text{otherwise} \end{array} \right.
$$

with $\bar{r}$ the mean of the distances between the given cities and $d$ the distance from the anchor point to the location being considered.

The formula uses the exponential of the murder rate of the location because we consider this to be the most important factor (at this stage) in determining the risk intensity of that location. The distance-decay function reflects a buffer zone whose radius needs to be determined from the criminal's psychological profile and patterns.
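The murder risk intensity with its buffer-zone distance decay can be sketched as follows (an illustrative Python version; the `buffer_radius` is the profile-dependent quantity mentioned above and is a hypothetical parameter here):

```python
import math

def murder_risk(mr, d, r_bar, buffer_radius):
    """sigma_murder = e^mr + dec, where dec = 0 inside the buffer zone
    and r_bar / sqrt(d) outside it. `mr` is the location's murder rate,
    `d` its distance from the anchor point, `r_bar` the mean inter-city
    distance."""
    dec = 0.0 if d <= buffer_radius else r_bar / math.sqrt(d)
    return math.exp(mr) + dec

# Inside the buffer zone only the rate term survives; outside, decay adds on
inside = murder_risk(mr=0.0, d=1.0, r_bar=10.0, buffer_radius=2.0)
outside = murder_risk(mr=0.0, d=4.0, r_bar=10.0, buffer_radius=2.0)
```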
# 2. Rape

- Rape rate
- Ethnic diversity
- Distance from anchor point: as with murder, trends in serial criminal activity show the need for a buffer zone, because we are not dealing with crimes committed exclusively at home or at the criminal's anchor point.

The risk intensity is thus defined as:

$$
\sigma_{rape} = e^{rr} + dec,
$$

where $\sigma_{\text{rape}}$ is the risk intensity associated with rape,

$\mathrm{rr} =$ rape rate of the location, and

$\mathrm{dec} =$ distance-decay function defined as:

$$
dec = \left\{ \begin{array}{ll} 0, & \text{if } d \text{ is within a small range (buffer zone)} \\ \dfrac{\bar{r}}{\sqrt{d}}, & \text{otherwise} \end{array} \right.
$$

with $\bar{r}$ the mean of the distances between the given cities and $d$ the distance from the anchor point to the location being considered.
# 3. Arson

- Arson rate
- Distance from anchor point: serial arsonists are usually not particularly mobile; most of their crimes occur within 1-2 miles of their residence.

The risk intensity is thus defined as:

$$
\sigma_{arson} = e^{ar} + dec,
$$

where $\sigma_{\text{arson}}$ is the risk intensity associated with arson,

$\mathrm{ar} =$ arson rate of the location, and

$\mathrm{dec} =$ distance-decay function defined as:

$$
dec = \frac{\bar{r}}{\sqrt{d} + \varepsilon}
$$

with $\bar{r}$ the mean of the distances between the given cities and $d$ the distance from the anchor point to the location being considered.

The formula uses the exponential of the arson rate because we consider this to be the most important factor in determining the risk intensity of the region. We also use a distance-decay function in which the positive constant $\varepsilon$ avoids a zero denominator.
# 4. Robbery

- Robbery rate
- Distance from anchor point: in the serial robbery case, studies do not show any consistent trend in how distance from the criminal's familiar place might influence the risk of a particular area.

The risk intensity is thus defined as:

$$
\sigma_{robbery} = e^{br},
$$

where $\sigma_{robbery}$ is the risk intensity associated with robbery and $\mathrm{br} =$ robbery rate of the location.

The expression of the risk intensity for robbery includes only the robbery rate as a variable, since other parameters are either not easy to find or do not show a clear influence on the risk.
# Remark

Not mentioned among the above variables is the urban design or traffic pattern of a location, even though the physical layout of streets in an area can have an impact on criminal activity. For example, predictable street grid networks are more likely to attract crime, while more "organic" street layouts are generally considered safer. However, this data is considerably difficult to come by, both at the state and the county level.

Once we identify the nature of the crime, the model assigns risk intensities to the locations identified by the geographical method. These intensities are then normalized to provide values in the interval $(0, 1)$ for the risk of each location:

$$
risk_i = \frac{\sigma_i}{\sum_{i=1}^{n} \sigma_i}
$$

where $\sigma_i =$ risk intensity of location $i$ and $\mathrm{n} =$ number of locations under consideration.
# 2. Calculating the Risk-measured Probability

Given the risk intensities $(\mathbf{R}_{\mathrm{i}1}, \mathbf{R}_{\mathrm{i}2}, \mathbf{R}_{\mathrm{i}3}, \mathbf{R}_{\mathrm{i}4})$ of the four types of crime for an atomic location $L_i$ in set L, together with $\mathrm{C}_1$, $\mathrm{C}_2$, $\mathrm{C}_3$, $\mathrm{C}_4$, we can calculate the Crime Coefficient $\mathrm{CC_i}$ for each atomic location:

$$
\mathrm{CC_i} = \sum_{\mathrm{j}=1}^{4} \mathrm{R_{ij}} \times \mathrm{C_j}
$$

After calculating $\mathrm{CC_i}$ for all $\mathrm{i} \in [1, \mathrm{k}]$, we normalize the crime coefficients of the locations to $\mathrm{RC_i}$ so that

$$
\sum_{\mathrm{i}=1}^{\mathrm{k}} \mathrm{RC_i} = 1
$$
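The crime coefficients and their normalization can be sketched in a few lines (an illustrative Python version of the formulas above):

```python
def crime_coefficients(R, C):
    """CC_i = sum_j R_ij * C_j for each atomic location, then normalized
    to RC_i so the coefficients sum to 1. R is the k-by-4 matrix of risk
    intensities; C holds the four crime-type coefficients."""
    CC = [sum(Rij * Cj for Rij, Cj in zip(Ri, C)) for Ri in R]
    total = sum(CC)
    return [cc / total for cc in CC]

# Hypothetical two-location, murder-only example
RC = crime_coefficients([[1, 0, 0, 0],
                         [3, 0, 0, 0]], C=[2, 0, 0, 0])
```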
# Stage Three: Integration

Since the predictions based on crime types and the predictions based on the patterns of the statistical parameters of the locations are of equal importance, the final prediction should depend equally on the results of these two methods. So the combined prediction result $\mathrm{Cp_i}$ for $\mathrm{L_i}$ in set $\mathrm{L}$ is:

$$
\mathrm{Cp_i} = \frac{\mathrm{PP_i} + \mathrm{RC_i}}{2}
$$

There is, however, another important factor to take into consideration: the distance-decay principle, by which the probability decays with the distance from the current crime location to the possible target location. Given that we have calculated the mean $\mathrm{E_d}$ and the standard deviation $\mathrm{SD_d}$ of the distances travelled between cities over the course of the serial crime, we can add a distance weight to the probabilities previously calculated for each atomic location $L_i$. Also, according to the "buffer zone" effect, the criminal is less willing to commit crimes in the same locations. From the set DL defined above, we create a new set DP whose element $\mathrm{DP_i}$ is 1 when $\mathrm{DL_i}$ is below the mean travelled distance, and follows the $\mathrm{No}(\mathrm{E_d}, \mathrm{SD_d})$ density evaluated at $\mathrm{DL_i}$ otherwise:

$$
\mathrm{DP_i} = \left\{ \begin{array}{ll} 1 & \mathrm{DL_i} < \mathrm{E_d} \\ \sim \mathrm{No}(\mathrm{E_d}, \mathrm{SD_d}) & \mathrm{DL_i} \geq \mathrm{E_d} \end{array} \right.
$$

Therefore the final probability for each location $\mathrm{L_i}$ is $p_i$:

$$
p_i = \left\{ \begin{array}{ll} \mathrm{Cp_i} \times \mathrm{DP_i}, & \mathrm{DL_i} \neq 0; \\ \mathrm{Cp_i} \times 0.5, & \mathrm{DL_i} = 0. \end{array} \right.
$$

We then normalize the probabilities $p_i$ such that:

$$
\sum_{\mathrm{i}=1}^{\mathrm{k}} p_i = 1
$$
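The integration stage can be sketched end-to-end as follows. This is an illustrative Python version in which we interpret the distance weight for $\mathrm{DL_i} \geq \mathrm{E_d}$ as the normal density evaluated at $\mathrm{DL_i}$ (one reading of the formula above), and previously visited spots ($\mathrm{DL_i} = 0$) are halved per the buffer-zone effect:

```python
import math

def final_probabilities(PP, RC, DL, E_d, SD_d):
    """Cp_i = (PP_i + RC_i)/2, weighted by the distance factor DP_i and
    renormalized so the final p_i sum to 1."""
    def dp(d):
        if d == 0:
            return 0.5                       # buffer zone: same spot halved
        if d < E_d:
            return 1.0                       # within typical travel distance
        z = (d - E_d) / SD_d                 # No(E_d, SD_d) density at d
        return math.exp(-0.5 * z * z) / (SD_d * math.sqrt(2 * math.pi))
    p = [0.5 * (pp + rc) * dp(d) for pp, rc, d in zip(PP, RC, DL)]
    total = sum(p)
    return [v / total for v in p]

# Hypothetical two-location example: location 0 was already visited
p = final_probabilities([0.5, 0.5], [0.5, 0.5], DL=[0, 1], E_d=2.0, SD_d=1.0)
```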
# 4. Simulation Results and Discussion

We implemented the methods and algorithms described above in Matlab and generated the graphs with the same program. We stored the output of each step described above in order to record the results of each method.

# 4.1 Results of the Geographical Method

The geographical method produced reasonably smooth Gaussian surfaces indicating areas with high crime probability based on past crime locations and some characteristics of these areas.

We first considered the case of Gary Latray, a serial bank robber who was recently apprehended and whose activity across the states of Pennsylvania, Maryland, Ohio and Virginia is particularly interesting. The graph generated by the geographical method for this case shows a sharp peak at one of the locations visited by the criminal. The other peaks are smooth and distributed along the other areas where he committed the crimes (Figure 4).

![](images/9e4b2abdc50882aacfa18eb558a23c03fff03a009a258a4e981fef863c44c834.jpg)
Figure 4. Distribution of high-risk areas in the Gary Latray robbery case using the geographic method

![](images/255e94fe7c751fb66e8c545b32d1edd473c1756062cb83b2782f8b62b1767cf3.jpg)
Figures 5 and 6 show the distribution of the potential target locations, with yellow areas suggesting a high crime risk. The red point corresponds to the peak mentioned above.

![](images/1de70fe0be220510495a7b97b553516a33c179d02367c85ea0bff54c6bdb495e.jpg)
Figure 5. Distribution of potential target areas in the Gary Latray robbery case
Figure 6. Distribution of potential target areas in the Gary Latray robbery case

We also tested our model on a serial rape and murder case that occurred in Florida in 2003-2004. The murderer, a female, committed the crimes in several counties in the state; however, she visited Marion County three times out of the seven crimes she was convicted for. The geographical method generated the following graph for this case:

![](images/b9c17fceb5e1a7661869dbd3cf48f837892a79e1ed3c87ff7474bf838d60b9cb.jpg)
Figure 7. Distribution of high-risk areas in the Florida murder case using the geographic method

Note that the red peak on this graph is centered on the coordinates of Marion County, which agrees with our expectation that this area has a higher crime probability for this particular offender. The criminal circles essential for generating the shape can also be seen in the graph.

The distribution of potential target locations can also be observed in the picture on the next page:

![](images/3fbde09e6ecd92be5325f505ba2e207b6d84453882aa508bbe6d3aa168576e28.jpg)
Figure 8. Distribution of potential target areas in the Florida murder case

The geographical model was further tested on a serial arson case. Thomas A. Sweatt terrorized the Washington DC area, Maryland and Virginia with the numerous fires he set there before being caught in West Virginia, which can be considered his "anchor point" as discussed in this paper. The results of this test show that the high-risk areas lie closer to DC and Maryland (see Figure 9). This matches our expectations, since these locations are where he set most of the fires. We also note the higher population density of these areas as opposed to Virginia.

![](images/fb7aeeb4d2b1c18d0dd05a770c3c27123feb6a8f091341a83012c242bd3d8d99.jpg)
Figure 9. Distribution of high-risk areas in the Thomas Sweatt arson case using the geographic method

Note that the criminal circles are again conspicuous in the generated Gaussian shape. A 2D view of the graph is provided in the picture below:

![](images/cc75d219bf0cc329ae8d3e665ba153dd0379ad8d67f5a55b0a8971259b586294.jpg)
Figure 10. Distribution of potential target areas in the Sweatt arson case
# 4.2 Results of the Risk Intensity ("static" and "dynamic") Methods

The output of the "dynamic" method is the parameter probability ($\mathrm{PP_i}$), generated by categorizing the parameters as stable, cumulative or random.

For the Gary Latray robbery case, the model generates the probabilities below:

<table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Parameter probabilities</td><td>0.0921</td><td>0.2087</td><td>0.1812</td><td>0.3086</td><td>0.2095</td></tr></table>

For the same robbery case, the "static" method generates the risk intensities associated with the locations under consideration:

<table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Risk intensities</td><td>1.8035</td><td>2.2638</td><td>21.2636</td><td>22.2755</td><td>8.0614</td></tr></table>

Note the considerably greater risk intensities for Oakland, MD and Harrisonburg, VA, which are most likely due to their high crime rates and population densities, influenced also by their distance to the "anchor point".

However, these intensities are not our final result. By combining them with the robbery crime coefficient and the results of the dynamic method, we obtain the probabilities of the locations being the next target of this criminal (see next page):

<table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Final Probability Prediction</td><td>0.0838</td><td>0.2837</td><td>0.2850</td><td>0.0122</td><td>0.3352</td></tr></table>

Therefore, considering all parameters and the criminal's sequence of actions, we predict the Belpre area to be the most probable future target, followed closely by Oakland and St. Clairsville.
+ # 4.3 Discussion
690
+
691
+ # 4.3.1 Sensitivity and Robustness Testing
692
+
693
+ A discussion of the quality of the model cannot be complete without mentioning robustness and sensitivity analysis. While robustness is used to determine whether the model will break down in extreme cases, sensitivity measures the effect of small changes in parameters. For a good model, this will induce small changes in the output data.
694
+
695
+ We tested our model for sensitivity by using the Gary Latray serial robbery example and varying the value of one parameter at a time.
696
+
697
+ We first tested the model's response for a minor change in the population on location 5 (Belpre, OH). This change (from 44015 to 40000 inhabitants) further resulted in a change in the population density. The results were as follows (see next page):
698
+
699
+ <table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Initial Probability Prediction</td><td>0.0838</td><td>0.2837</td><td>0.2850</td><td>0.0122</td><td>0.3352</td></tr><tr><td>Minor change population prediction</td><td>0.0852</td><td>0.2924</td><td>0.2739</td><td>0.0143</td><td>0.3343</td></tr></table>
700
+
701
+ Note that there are small changes in probabilities and the highest-probability location does not change. This suggests small changes in population do not affect the output to a great extent.
702
+
703
+ The test for a minor change in the coordinates of Location 5 (by 1 degree), however, translated in a significant change in probability:
704
+
705
+ <table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Initial Probability Prediction</td><td>0.0838</td><td>0.2837</td><td>0.2850</td><td>0.0122</td><td>0.3352</td></tr><tr><td>Minor change location prediction</td><td>0.0300</td><td>0.1289</td><td>0.5266</td><td>0.0091</td><td>0.3054</td></tr></table>
706
+
707
+ The location with the highest probability is no longer Belpre, OH but the third location (Oakland, MD), which suggests that geographic changes affect our output greatly.
708
+
709
We performed a similar test for a minor change in area (the area of location 5 was changed from $3.5 \, \text{sq mi}$ to $4 \, \text{sq mi}$) and noticed only a small change in the final probabilities:
712
+
713
+ <table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Initial Probability Prediction</td><td>0.0838</td><td>0.2837</td><td>0.2850</td><td>0.0122</td><td>0.3352</td></tr><tr><td>Minor change area prediction</td><td>0.0838</td><td>0.2843</td><td>0.2853</td><td>0.0122</td><td>0.3343</td></tr></table>
714
+
715
For a change in the crime rate we obtained similar results, which concluded the sensitivity analysis of our model:
716
+
717
+ <table><tr><td></td><td>East Franklin</td><td>St. Clairsville</td><td>Oakland</td><td>Harrisonburg</td><td>Belpre</td></tr><tr><td>Initial Probability Prediction</td><td>0.0838</td><td>0.2837</td><td>0.2850</td><td>0.0122</td><td>0.3352</td></tr><tr><td>Minor change crime rate prediction</td><td>0.0810</td><td>0.2668</td><td>0.3067</td><td>0.0083</td><td>0.3373</td></tr></table>
718
+
719
An analysis of the robustness of the model was carried out on the Kansas City murder and rape example, in which all crimes were committed inside the same city (data on the exact locations of the crimes is very difficult to find). We tested this example by using as input only the variables associated with Kansas City. Our results were consistent with the expectation that the model would assign the full probability to Kansas City as the next crime location.
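Because the final step normalizes the scores over the candidate set, a single-candidate input necessarily receives probability 1; a minimal sketch of that check (the raw score value is arbitrary):

```python
def normalize(xs):
    """Scale scores so they sum to 1, as in the model's final step."""
    total = sum(xs)
    return [x / total for x in xs]

# With Kansas City as the only candidate, it must receive the full probability.
print(normalize([0.37]))  # -> [1.0]
```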
720
+
721
+ # 4.3.2 Accuracy of the Prediction
722
+
723
+ In this section, we are going to discuss the accuracy of our model. The two main interests are whether the model is able to highlight the next murder area and whether the high-risk area has been minimized.
724
+
725
In the Florida serial murders case, we used the geographical method to predict the next crime. Since the murderer, Aileen Wuornos, was caught after the seventh murder, there is actually no "next crime". However, we can input the first six murders' information into our model and check whether the seventh murder's location is covered by our prediction. As shown in 3.1, the seventh murder's location, Dixie County, is well covered by our prediction, being highlighted as a high-risk area.
726
+
727
As the graphs above show, the high-risk area has been constrained to about $10,000\mathrm{km}^2$, equivalent to about 5 counties in Florida. However, with the idea of a "home location" and an increase in the amount of data used, the range can be narrowed to about 2 to 3 counties.
728
+
729
With the application of the second method, the high-risk area can be narrowed further. In the serial robberies case discussed above, the geographic method highlights five locations as high-risk. We then apply the risk intensity method, and the final probabilities show that three cities carry clearly higher risk than the others: St. Clairsville, Oakland, and Belpre (the actual last robbery occurred in Belpre, which has the highest risk probability). Thus the second method constrains the set of possible next locations even further, to 3 locations.
730
+
731
+ # 4.3.3 Combination of the Two Methods
732
+
733
We integrate the two methods, the geographical one and the risk intensity one, by using the first method as a preliminary step for the second. In effect, the first method generates a map for the second: when we calculate the risk intensity, we are only concerned with the locations that appear on the first map.
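The hand-off between the two stages can be summarized in a short sketch; `geographic_filter` and `risk_intensity` are placeholders for the two methods described above, not functions from our appendix code.

```python
def combine(locations, geographic_filter, risk_intensity):
    """Stage 1: keep only locations flagged by the geographic method.
    Stage 2: score the survivors by risk intensity and renormalize."""
    candidates = [loc for loc in locations if geographic_filter(loc)]
    scores = [risk_intensity(loc) for loc in candidates]
    total = sum(scores)
    return {loc: s / total for loc, s in zip(candidates, scores)}

# Toy example: only locations kept by the first map are scored at all
probs = combine(["A", "B", "C"], lambda loc: loc != "C", lambda loc: 1.0)
```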
734
+
735
# 5. Strengths and Weaknesses of the Model
736
+
737
The model is considerably accurate. By combining the two methods, the geographic method and the risk intensity method, we can narrow the range of possible locations to about 2 or 3 areas. In tests on existing data, our model successfully highlighted the location of the next crime. Most of the variables that could influence the location of the next crime on the offender's target list are taken into consideration. The model therefore achieves the goal of predicting the location of the next crime in a serial crime sequence while minimizing the number of possible locations.
738
+
739
On the other hand, our model does not take into account the fact that the boundaries of administrative regions can sometimes impede an investigation ("edge effects"). Moreover, pieces of evidence such as similarities among the victims are not represented in the determination process, although they could affect a criminal's spatial patterns. Another weakness of our model is that small changes in the geographic coordinates influence our output to a certain extent; a likely reason is that the input geographic coordinates did not differ greatly from one another.
740
+
741
+ We should also keep in mind the assumptions we made in order to generate this model. For instance, it is essential to assume that the criminal follows a predictable pattern of movement or that there is some kind of correlation between his actions and those of other criminals that commit the same kind of crime in that location (see risk intensity method assumptions).
742
+
743
+ # 6. Conclusion and Recommendation
744
+
745
In this paper, we have developed and discussed our model for locating the next possible locations of serial crimes. Our model takes into account statistical information about previous crimes, such as geographical coordinates, population densities, crime rates, etc., as well as the criminal's previous crime behavior. We consider the criminal's crime pattern in order to calculate the risk intensities for different areas. Our model involves the Gaussian distribution, a distance-decay function, logistic regression, linear regression, and the nearest-neighbor algorithm, based on statistical investigation of the areas and the criminal's previous records.
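As an illustration of the Gaussian component, the per-site risk kernel corresponding to functionOne/functionTwo in our appendix can be written as follows. This is only a Python sketch: the choice sigma = Psy*H and the 1.1 near-site amplification follow our MATLAB code.

```python
import math

def risk_contribution(P, A, Cr, x, y, La, Lo, Psy, H, near=False):
    """Gaussian risk kernel centred on a past crime site (La, Lo).

    P/A is the population density, Cr the crime rate, and sigma = Psy*H;
    grid points near a site receive a 1.1 amplification (functionOne),
    distant points do not (functionTwo).
    """
    sigma_sq = (Psy * H) ** 2
    dist_sq = (x - La) ** 2 + (y - Lo) ** 2
    base = (P / A) * Cr * math.exp(-dist_sq / (2.0 * sigma_sq))
    return base * 1.1 if near else base
```

Summing this kernel over all past crime sites at every grid point produces the risk surfaces plotted in Section 3.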
748
+
749
+ Our main assumption in building this model is that serial crimes are instrumental rather than expressive, thus ensuring that the criminal follows a predictable pattern of movement. We also assume that this pattern is characterized by certain stability and continuity properties which facilitate a correlation between the behavior of the suspect and that of the other criminals in concerned areas.
750
+
751
During the simulation process, we used data from previous serial crimes to test and verify our model's prediction of the next crime location. Our model's predictions match the actual next crime location in the sequences we tested. The combination of the first and second methods increases the overall accuracy of the prediction while decreasing the amount of work needed to implement the two methods separately. The first method simplifies the task for the second by narrowing the range of possible next crime locations; the second method then identifies the risk intensity as well as the criminal's pattern, which are combined to synthesize the most probable location(s).
752
+
753
Admittedly, our algorithm for the next crime location is only guaranteed to work for criminals with a predictable pattern of movement. For other criminals, our model is not guaranteed to determine an accurate and exact pattern, nor is it possible under such circumstances to narrow the range of locations to a small set of areas. Also, the introduction of the home location into our model requires a considerable amount of data for training its variables, so an aggregation of criminal data is needed for the home-location "learning" process to work.
754
+
755
In general, our model achieves the goal of predicting the criminal's next possible target locations and narrowing the range to the two or three most probable areas. It can assist law enforcement officers in anticipating the next crime and bringing the serial criminal under control.
756
+
757
+ # 8. Appendices
758
+
759
+ # A. Bibliography
760
+
761
[1] D. Kim Rossmo. Geographic Profiling. CRC Press, 1999.
[2] Kim Michelle Lersch. Space, Time, and Crime. Carolina Academic Press, 2004.
[3] Environmental Criminology Research Inc. Geographic Profiling: An Effective Tool for Serial Crime Investigation.
[4] Terance D. Miethe, Richard C. McCorkle. Crime Profiles: The Anatomy of Dangerous Persons. Oxford University Press, 2001.
[5] Loeber, R. Criminal Behaviour and Mental Health, 3, 492-523, 1982.
[6] John Paul Wright, Stephen G. Tibbetts, Leah E. Daigle. Criminals in the Making. Sage Publications, Inc., 2008.
[7] Larry J. Siegel, Joseph J. Senna. Introduction to Criminal Justice. Wadsworth Publishing, 2007.
[8] FBI. Serial Murder: Multi-Disciplinary Perspectives for Investigators.
769
+
770
+ # B. Data
771
+
772
(Source: http://www.fbi.gov/ucr/cius2008/offenses/index.html)
773
+
774
+ Florida Murder Case:
775
+
776
+ <table><tr><td>Places visited</td><td>Latitude</td><td>Longitude</td><td>Population</td><td>Area (square miles)</td><td>Density (/square miles)</td><td>Crime rate (%)</td><td>Number of times visited</td></tr><tr><td>Volusia</td><td>29°10&#x27;</td><td>81°31&#x27;</td><td>498,036</td><td>1,432</td><td>401</td><td>0.57</td><td>1</td></tr><tr><td>Marion</td><td>29°16&#x27;</td><td>82°07&#x27;</td><td>329,628</td><td>1,663</td><td>163</td><td>1.35</td><td>3</td></tr><tr><td>Citrus</td><td>28°53&#x27;</td><td>82°31&#x27;</td><td>141,416</td><td>773</td><td>202</td><td>0.26</td><td>1</td></tr><tr><td>Pasco</td><td>28°19&#x27;</td><td>82°20&#x27;</td><td>471,028</td><td>868</td><td>464</td><td>0.43</td><td>1</td></tr><tr><td>Dixie</td><td>28°25&#x27;</td><td>82°18&#x27;</td><td>14,957</td><td>864</td><td>21</td><td>0.095</td><td>1</td></tr></table>
777
+
778
+ Kansas City Murder and Rapist Case:
779
+
780
+ <table><tr><td>Places visited</td><td>Latitude</td><td>Longitude</td><td>Population</td><td>Area (square miles)</td><td>Density (/square miles)</td><td>Arson rate (%)</td><td>Number of times visited</td></tr><tr><td>Washington DC</td><td>38°51&#x27;</td><td>77°02&#x27;</td><td>591,833</td><td>68.3</td><td>9,776.40</td><td>0.015</td><td>21</td></tr></table>
781
+
782
+ Thomas A. Sweatt Arson Case:
783
+
784
+ <table><tr><td>Places visited</td><td>Latitude</td><td>Longitude</td><td>Population</td><td>Area (square miles)</td><td>Density (/square miles)</td><td>Arson rate (%)</td><td>Number of times visited</td></tr><tr><td>Washington DC</td><td>38°51&#x27;</td><td>77°02&#x27;</td><td>591,833</td><td>68.3</td><td>9,776.40</td><td>0.015</td><td>21</td></tr><tr><td>Maryland</td><td>39°01&#x27;</td><td>77°01&#x27;</td><td>5,633,597</td><td>12,407</td><td>541.9</td><td>0.0041</td><td>19</td></tr><tr><td>Virginia</td><td>38°52&#x27;</td><td>77°06&#x27;</td><td>7,769,089</td><td>42,774.20</td><td>193</td><td>0.0028</td><td>5</td></tr></table>
785
+
786
+ Gary Latray Robbery Case:
787
+
788
+ <table><tr><td>Places visited</td><td>Latitude</td><td>Longitude</td><td>Population</td><td>Area (square miles)</td><td>Density (/square miles)</td><td>Crime rate (%)</td><td>Number of times visited</td></tr><tr><td>East Franklin, PA</td><td>40°88&#x27;</td><td>79°49&#x27;</td><td>3,900</td><td>31.5</td><td>126.5</td><td>0.59</td><td>1</td></tr><tr><td>St. Clairsville, OH</td><td>40°04&#x27;</td><td>80°54&#x27;</td><td>5,057</td><td>2.2</td><td>2,354</td><td>1.46</td><td>1</td></tr><tr><td>Oakland, MD</td><td>39°24&#x27;</td><td>79°24&#x27;</td><td>1,930</td><td>2.1</td><td>915.7</td><td>3.05</td><td>1</td></tr><tr><td>Harrisonburg, VA</td><td>38°26&#x27;</td><td>78°52&#x27;</td><td>44,015</td><td>17.6</td><td>2,559</td><td>3.1</td><td>1</td></tr><tr><td>Belpre, OH</td><td>39°16&#x27;</td><td>81°35&#x27;</td><td>6,660</td><td>3.6</td><td>1,889.60</td><td>2.08</td><td>1</td></tr></table>
789
+
790
+ # C. Code
791
+
792
(The code files are arranged alphabetically by name.)
793
+
794
+ # 1. FunctionOne
795
+
796
```matlab
function f = functionOne(P, A, Cr, x, y, La, Lo, Psy, H)
% Gaussian risk contribution with sigma = Psy*H and a 1.1 near-site amplification
f = (P/A).*Cr.*exp(-((x-La).^2 + (y-Lo).^2)./(2.*(Psy.*H).^2))*1.1;
```
799
+
800
+ # 2. FunctionTwo
801
+
802
```matlab
function f = functionTwo(P, A, Cr, x, y, La, Lo, Psy, H)
% Gaussian risk contribution without the amplification factor
f = (P/A).*Cr.*exp(-((x-La).^2 + (y-Lo).^2)./(2.*(Psy.*H).^2));
```
805
+
806
+ # 3. Location
807
+
808
```matlab
function location = location(degree, minute)
% Convert a degree-minute coordinate to decimal degrees
location = degree + minute./60;
```
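For example, Belpre's longitude 81°35' converts to 81 + 35/60 ≈ 81.583 decimal degrees; an equivalent Python one-liner, shown only for illustration:

```python
def to_decimal_degrees(degree, minute):
    """Degree-minute coordinate to decimal degrees, mirroring location.m."""
    return degree + minute / 60.0

print(round(to_decimal_degrees(81, 35), 3))  # -> 81.583
```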
811
+
812
+ # 4. Logistic Regression
813
+
814
```matlab
function f = LogisticRegression(z)
f = 1./(1 + exp(-z));
```
817
+
818
+ # 5. Main
819
+
820
```matlab
StateName = {'DC', 'Maryland', 'VA'};
LongitudeDegree = [77 77 77];
LongitudeMinute = [02 01 06];
LatitudeDegree = [38 39 38];
LatitudeMinute = [51 01 52];
LongitudeLocation = location(LongitudeDegree, LongitudeMinute);
LatitudeLocation = location(LatitudeDegree, LatitudeMinute);
Population = [591233 5633597 7769089];
Area = [68.3 12407 42774.2];
Crimes = [21 19 5];
CrimeStat = [89 231 219];
CrimeRate = CrimeStat./Population;
PsycValue = 1 - (Crimes-1)./sum(Crimes);
H = Crimes;

LocationSize = length(LongitudeLocation);
Distances = zeros(LocationSize);
for i = 1:1:LocationSize
    for j = 1:1:LocationSize
        Distances(i,j) = sqrt((LongitudeLocation(i)-LongitudeLocation(j)).^2 + (LatitudeLocation(i)-LatitudeLocation(j)).^2);
    end;
end;

DistanceAverage = mean(mean(Distances));
Xradius = max(LatitudeLocation) - min(LatitudeLocation);
Yradius = max(LongitudeLocation) - min(LongitudeLocation);
Xmin = min(LatitudeLocation) - Xradius; Xmax = max(LatitudeLocation) + Xradius;
Ymin = min(LongitudeLocation) - Yradius; Ymax = max(LongitudeLocation) + Yradius;

Xrange = linspace(Xmin, Xmax, 1e2); Yrange = linspace(Ymin, Ymax, 1e2);
map = meshgrid(Xrange, Yrange);
Value = zeros(length(Xrange), length(Yrange));
Value1 = Value;

for i = 1:1:1e2
    for j = 1:1:1e2
        for k = 1:1:LocationSize
            if (abs(Xrange(i)-LatitudeLocation(k)) < DistanceAverage && abs(Yrange(j)-LongitudeLocation(k)) < DistanceAverage)
                Value(i,j) = Value(i,j) + functionOne(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
            else
                Value(i,j) = Value(i,j) + functionTwo(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
            end;
        end;
    end;
end;

figure(1)
meshc(Xrange, Yrange, Value);
xlabel('Latitude'); ylabel('Longitude');
figure(2)
surf(Xrange, Yrange, Value);
xlabel('Latitude'); ylabel('Longitude');
```
881
+
882
# 6. Murder

```matlab
County = {'Volusia', 'Marion', 'Citrus', 'Pasco', 'Marion', 'Marion', 'Dixie'};  % crime sequence; the arrays below cover the five distinct counties
LatitudeDegree = [29 29 28 28 28];
LatitudeMinute = [10 16 53 19 25];
LongitudeDegree = [81 82 82 82 82];
LongitudeMinute = [31 07 31 20 18];
LatitudeLocation = location(LatitudeDegree, LatitudeMinute);
LongitudeLocation = location(LongitudeDegree, LongitudeMinute);
Population = [498036 329628 141416 471028 14957];
CrimeRate = [.57 1.35 .26 .43 .095]/100;
Area = [1432 1663 773 868 864];
Crimes = [1 3 1 1 1];
PsycValue = 1 - Crimes/sum(Crimes);
H = Crimes;

LocationSize = length(LongitudeLocation);
Distances = zeros(LocationSize);
for i = 1:1:LocationSize
    for j = 1:1:LocationSize
        Distances(i,j) = sqrt((LongitudeLocation(i)-LongitudeLocation(j)).^2 + (LatitudeLocation(i)-LatitudeLocation(j)).^2);
    end;
end;

DistanceAverage = mean(mean(Distances));
Xradius = max(LatitudeLocation) - min(LatitudeLocation);
Yradius = max(LongitudeLocation) - min(LongitudeLocation);
Xmin = min(LatitudeLocation) - Xradius; Xmax = max(LatitudeLocation) + Xradius;
Ymin = min(LongitudeLocation) - Yradius; Ymax = max(LongitudeLocation) + Yradius;

Xrange = linspace(Xmin, Xmax, 1e2); Yrange = linspace(Ymin, Ymax, 1e2);
map = meshgrid(Xrange, Yrange);
Value = zeros(length(Xrange), length(Yrange));
Value1 = Value;

for i = 1:1:1e2
    for j = 1:1:1e2
        for k = 1:1:LocationSize
            if (abs(Xrange(i)-LatitudeLocation(k)) < DistanceAverage && abs(Yrange(j)-LongitudeLocation(k)) < DistanceAverage)
                Value(i,j) = Value(i,j) + functionOne(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
            else
                Value(i,j) = Value(i,j) + functionTwo(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
            end;
        end;
    end;
end;

figure(1)
meshc(Xrange, Yrange, Value);
xlabel('Latitude'); ylabel('Longitude');
figure(2)
surf(Xrange, Yrange, Value);
xlabel('Latitude'); ylabel('Longitude');
```
989
+
990
+ # 7. Murder Calc
991
+
992
```matlab
County = {'Volusia', 'Marion', 'Citrus', 'Pasco', 'Marion', 'Marion', 'Dixie'};
LatitudeDegree = [29 29 28 28 29 29];
LatitudeMinute = [10 16 53 19 16 16];
LongitudeDegree = [81 82 82 82 82 82];
LongitudeMinute = [31 07 31 20 07 07];
LatitudeLocation = location(LatitudeDegree, LatitudeMinute);
LongitudeLocation = location(LongitudeDegree, LongitudeMinute);
Population = [498036 329628 141416 471028 329628 329628];
CrimeRate = [.57 1.35 .26 .43 1.35 1.35]/100;
Area = [1432 1663 773 868 1663 1663];
Density = Population./Area;
% Crimes = [1 3 1 1 1];

TotalSize = length(LatitudeLocation);
SampleSize = TotalSize - 1;
PopulationSample = Population(1:SampleSize);
CrimeRateSample = CrimeRate(1:SampleSize);
AreaSample = Area(1:SampleSize);
DensitySample = Density(1:SampleSize);
NormPopulationSample = normalize(PopulationSample);
NormCrimeRateSample = normalize(CrimeRateSample);
NormAreaSample = normalize(AreaSample);
NormDensitySample = normalize(DensitySample);
MeanPopulationSample = mean(PopulationSample);
MeanCrimeRateSample = mean(CrimeRateSample);
MeanAreaSample = mean(AreaSample);
MeanDensitySample = mean(DensitySample);
StdPopulationSample = std(PopulationSample);
StdCrimeRateSample = std(CrimeRateSample);
StdAreaSample = std(AreaSample);
StdDensitySample = std(DensitySample);
x = 1:1:SampleSize;

[PopulationP, PopulationS] = polyfit(x, PopulationSample, 1);
[CrimeRateP, CrimeRateS] = polyfit(x, CrimeRateSample, 1);
[AreaP, AreaS] = polyfit(x, AreaSample, 1);
[DensityP, DensityS] = polyfit(x, DensitySample, 1);

M = [NormPopulationSample' NormCrimeRateSample' NormAreaSample' NormDensitySample']';
N = cov(M);

Distances = zeros(TotalSize);
for i = 1:1:TotalSize
    for j = 1:1:TotalSize
        Distances(i,j) = sqrt((LatitudeLocation(i)-LatitudeLocation(j))^2 + (LongitudeLocation(i)-LongitudeLocation(j))^2);
    end;
end;
MeanDistance = mean(mean(Distances));
StdDistance = mean(std(Distances));
for i = 1:1:TotalSize
    Next = sqrt((LatitudeLocation(i)-LatitudeLocation(SampleSize)).^2 + (LongitudeLocation(i)-LongitudeLocation(SampleSize)).^2);
    AdjacentDistances(i) = Next;
end;

RiskIntensity = MurderRisk(CrimeRate, StdDistance, AdjacentDistances);
PR = normalize(RiskIntensity);

AreaMeanPrediction = MeanAreaSample;
AreaStdPrediction = StdAreaSample;
AreaPrediction = normpdf(Area, AreaMeanPrediction, AreaStdPrediction);
NormAreaPrediction = normalize(AreaPrediction);

PopulationMeanPrediction = polyval(PopulationP, TotalSize);
PopulationStdPrediction = StdPopulationSample;
PopulationPrediction = normpdf(Population, PopulationMeanPrediction, PopulationStdPrediction);
NormPopulationPrediction = normalize(PopulationPrediction);

DensityMeanPrediction = polyval(DensityP, TotalSize);
DensityStdPrediction = StdDensitySample;
DensityPrediction = normpdf(Density, DensityMeanPrediction, DensityStdPrediction);
NormDensityPrediction = normalize(DensityPrediction);

CrimeRateMeanPrediction = polyval(CrimeRateP, TotalSize);
CrimeRateStdPrediction = StdCrimeRateSample;
CrimeRatePrediction = normpdf(CrimeRate, CrimeRateMeanPrediction, CrimeRateStdPrediction);
NormCrimeRatePrediction = normalize(CrimeRatePrediction);

PP = normalize(NormAreaPrediction*0.1 + (NormPopulationPrediction + NormDensityPrediction + NormCrimeRatePrediction)*0.9);
P = (PP + PR)/2;

% The block below (commented out in the original) rebuilds the risk-surface
% map of Section 6 for this data set:
% LocationSize = length(LongitudeLocation);
% Distances = zeros(LocationSize);
% for i = 1:1:LocationSize
%     for j = 1:1:LocationSize
%         Distances(i,j) = sqrt((LongitudeLocation(i)-LongitudeLocation(j)).^2 + (LatitudeLocation(i)-LatitudeLocation(j)).^2);
%     end;
% end;
% DistanceAverage = mean(mean(Distances));
% Xradius = max(LatitudeLocation) - min(LatitudeLocation);
% Yradius = max(LongitudeLocation) - min(LongitudeLocation);
% Xmin = min(LatitudeLocation) - Xradius; Xmax = max(LatitudeLocation) + Xradius;
% Ymin = min(LongitudeLocation) - Yradius; Ymax = max(LongitudeLocation) + Yradius;
% Xrange = linspace(Xmin, Xmax, 1e2); Yrange = linspace(Ymin, Ymax, 1e2);
% map = meshgrid(Xrange, Yrange);
% Value = zeros(length(Xrange), length(Yrange));
% Value1 = Value;
% for i = 1:1:1e2
%     for j = 1:1:1e2
%         for k = 1:1:LocationSize
%             if (abs(Xrange(i)-LatitudeLocation(k))<DistanceAverage && abs(Yrange(j)-LongitudeLocation(k))<DistanceAverage)
%                 Value(i,j) = Value(i,j) + functionOne(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
%             else
%                 Value(i,j) = Value(i,j) + functionTwo(Population(k), Area(k), CrimeRate(k), Xrange(i), Yrange(j), LatitudeLocation(k), LongitudeLocation(k), PsycValue(k), Crimes(k));
%             end;
%         end;
%     end;
% end;
% figure(1)
% meshc(Xrange, Yrange, Value);
% xlabel('Latitude'); ylabel('Longitude');
% figure(2)
% surf(Xrange, Yrange, Value);
% xlabel('Latitude'); ylabel('Longitude');
```
1195
+
1196
# 8. Murder Risk

```matlab
function output = MurderRisk(mr, StdD, d)
% Distance-decay risk: no decay contribution within 0.7 standard distances
dec = zeros(1, length(d));
for i = 1:1:length(d)
    if (d(i) < 0.7*StdD)
        dec(i) = 0;
    else
        dec(i) = StdD./sqrt(d(i));
    end;
end;
output = exp(mr) + dec;
```
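The same distance-decay rule, in a Python sketch for readers without MATLAB: risk grows with the crime rate and decays as 1/sqrt(distance), with the decay term suppressed inside 0.7 standard distances.

```python
import math

def murder_risk(crime_rates, std_distance, distances):
    """exp(rate) plus a sqrt distance-decay term, zeroed for nearby sites."""
    out = []
    for rate, d in zip(crime_rates, distances):
        decay = 0.0 if d < 0.7 * std_distance else std_distance / math.sqrt(d)
        out.append(math.exp(rate) + decay)
    return out

print(murder_risk([0.0, 0.0], 1.0, [0.5, 4.0]))  # -> [1.0, 1.5]
```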
1221
+
1222
# 9. Normalize

```matlab
function output = normalize(x)
% Scale a vector so that its entries sum to 1
output = x./sum(x);
```
1231
+
1232
# 10. Prob Calc

```matlab
LongitudeDegree = [79 80 78 79 81];
LongitudeMinute = [49 54 52 24 35];
LatitudeDegree = [40 40 38 39 39];
LatitudeMinute = [88 04 26 24 16];
LongitudeLocation = location(LongitudeDegree, LongitudeMinute);
LatitudeLocation = location(LatitudeDegree, LatitudeMinute);
Population = [3900 9057 44015 1930 6660];
Area = [31.5 2.2 17.6 2.1 3.5];
Density = [126.4 2354.2 2559.0 915.7 1889.6];
Crimes = [23 74 1366 59 139];
CrimeRate = Crimes*100./Population;

SizeOfSample = length(Population) - 1;
TotalSize = length(Population);
PopulationSample = Population(1:SizeOfSample);
PopMean = mean(PopulationSample);
AreaSample = Area(1:SizeOfSample);
AreaMean = mean(AreaSample);
DensitySample = Density(1:SizeOfSample);
DensityMean = mean(DensitySample);
CrimeRateSample = CrimeRate(1:SizeOfSample);
CrimeRateMean = mean(CrimeRateSample);
NormPopulationSample = normalize(PopulationSample);
NormAreaSample = normalize(AreaSample);
NormDensitySample = normalize(DensitySample);
NormCrimeRateSample = normalize(CrimeRateSample);

M = [NormPopulationSample' NormAreaSample' NormDensitySample' NormCrimeRateSample'];
N = 1e4*cov(M);

CovDetermin = zeros(1, SizeOfSample);
x = 1:1:SizeOfSample;
X = 1:1:TotalSize;
for i = 1:1:SizeOfSample
    CovDetermin(i) = N(i,i);
end;
minStd = min(CovDetermin);
minPos = ((CovDetermin - min(CovDetermin)) == 0);

[PopulationP, PopulationS, PopulationMu] = polyfit(x, PopulationSample, 1);
[AreaP, AreaS, AreaMu] = polyfit(x, AreaSample, 1);
[DensityP, DensityS, DensityMu] = polyfit(x, DensitySample, 1);
% [CrimeRateP, CrimeRateS, CrimeRateMu] = polyfit(x, CrimeRateSample, 1);
[populationP, PopulationS, PopulationMu] = polyfit(x, NormPopulationSample, 1);
[areaP, AreaS, AreaMu] = polyfit(x, NormAreaSample, 1);
[densityP, DensityS, DensityMu] = polyfit(x, NormDensitySample, 1);

beta1 = 1/minStd;
beta0 = -(beta1*CrimeRateMean);
for i = 1:1:TotalSize
    z(i) = beta0 + beta1*CrimeRate(i);
end;
Ps = normalize(LogisticRegression(z));

StdVector = [std(PopulationSample), std(AreaSample), std(DensitySample)];
PopPredictMean = polyval(PopulationP, TotalSize);
AreaPredictMean = polyval(AreaP, TotalSize);
DensityPredictMean = polyval(DensityP, TotalSize);
for i = 1:1:TotalSize
    p1(i) = normpdf(Population(i), PopPredictMean, StdVector(1));
    p2(i) = normpdf(Area(i), AreaPredictMean, StdVector(2));
end;
Pc = normalize(normalize(p1) + normalize(p2));

RVMean = mean(DensitySample(SizeOfSample/2:end));
for i = 1:1:TotalSize
    p3(i) = normpdf(Density(i), RVMean, StdVector(3));
end;
Pr = normalize(p3);

PP = Pr*0.4 + Pc*0.3 + Ps*0.3;
RiskIntensity = RobberyRisk(CrimeRate);
PR = normalize(RiskIntensity);
P = (PR + PP)/2;

Distances = zeros(TotalSize);
for i = 1:1:TotalSize
    for j = 1:1:TotalSize
        Distances(i,j) = sqrt((LatitudeLocation(i)-LatitudeLocation(j))^2 + (LongitudeLocation(i)-LongitudeLocation(j))^2);
    end;
end;
MeanDistance = mean(mean(Distances));
StdDistance = mean(std(Distances));
for i = 1:1:TotalSize
    Next = sqrt((LatitudeLocation(i)-LatitudeLocation(SizeOfSample)).^2 + (LongitudeLocation(i)-LongitudeLocation(SizeOfSample)).^2);
    if (Next > MeanDistance)
        NextDistance(i) = Next;
    elseif (Next == 0)
        NextDistance(i) = -MeanDistance;
    elseif (Next < 0.7*MeanDistance)
        NextDistance(i) = 0;
    else
        NextDistance(i) = MeanDistance;
    end;
end;

DP = normpdf(NextDistance, MeanDistance, StdDistance);
P = P.*DP;
P = normalize(P);
```
1413
+
1414
+ ```txt
1415
+ 11.Robbery
1416
+ LongitudeDegree = [79 80 79 78 81];
1417
+ LongitudeMinute = [49 54 24 52 35];
1418
+ LatitudeDegree = [40 40 39 38 39];
1419
+ LatitudeMinute = [88 04 24 26 16];
1420
+ ```
1421
+
1422
+ LongitudeLocation $=$ location(LongitudeDegree,LongitudeMinute); LatitudeLocation $=$ location(LatitudeDegree,LatitudeMinute);
1423
+
1424
+ ```txt
1425
+ Population = [3900 9057 1930 44015 6660];
1426
+ Area = [31.5 2.2 2.1 17.6 3.6];
1427
+ Density = [126.4 2354.2 915.7 2559.0 1889.6];
1428
+ Crimes = [23 74 59 1366 139];
1429
+ CrimeRate = Crimes./Population;
1430
+ PsycValue = [1 1 1 1 1];
1431
+ H = [1 1 1 1 1];
1432
+ ```
1433
+
1434
+ LocationSize $=$ length(LongitudeLocation);
1435
+ Distances=zeros(LocationSize);
1436
+ for $\mathrm{i} = 1:1$ :LocationSize for $\mathrm{j} = 1:1$ LocationSize Distances(i,j)
1437
+ sqrt((LongitudeLocation(i)-LongitudeLocation(j)).^2+(LatitudeLocation(i)-LatitudeLocation(j)).^2); end;
1438
+ end;
1439
+
1440
+ ```matlab
1441
+ DistanceAverage = mean(mean(Distances));
1442
+ Xradius = max(LatitudeLocation) - min(LatitudeLocation);
1443
+ Yradius = max(LongitudeLocation) - min(LongitudeLocation);
1444
+ Xmin = min(LatitudeLocation) - Xradius; Xmax = max(LatitudeLocation) + Xradius;
1445
+ Ymin = min(LongitudeLocation) - Yradius; Ymax = max(LongitudeLocation) + Yradius;
1446
+ ```
+
+ ```matlab
+ Xrange = linspace(Xmin, Xmax, 1e2); Yrange = linspace(Ymin, Ymax, 1e2);
+ map = meshgrid(Xrange,Yrange);
+ Value = zeros(length(Xrange),length(Yrange));
+ Value1 = Value;
+ for i = 1:1:1e2
+     for j = 1:1:1e2
+         for k = 1:1:LocationSize
+             if (abs(Xrange(i)-LatitudeLocation(k))<DistanceAverage && abs(Yrange(j)-LongitudeLocation(k))<DistanceAverage)
+                 Value(i,j) = Value(i,j) + functionOne(Population(k),Area(k),CrimeRate(k),Xrange(i),Yrange(j),LatitudeLocation(k),LongitudeLocation(k),PsycValue(k),Crimes(k));
+             else
+                 Value(i,j) = Value(i,j) + functionTwo(Population(k),Area(k),CrimeRate(k),Xrange(i),Yrange(j),LatitudeLocation(k),LongitudeLocation(k),PsycValue(k),Crimes(k));
+             end;
+         end;
+     end;
+ end;
+ figure(1)
+ meshc(Xrange,Yrange,Value);
+ xlabel('Latitude'); ylabel('Longitude');
+ figure(2)
+ surfc(Xrange,Yrange,Value);
+ xlabel('Latitude'); ylabel('Longitude');
+ ```
1486
+
+ 12. Robbery Risk
+
+ ```matlab
+ function output = RobberyRisk(br)
+ output = exp(br);
+ ```
+
+ 13. Simulation
+
+ ```matlab
+ Home = 'Volusia';
+ County = str2mat('Volusia', 'Marion', 'Citrus', 'Pascal', 'Marion', 'Marion');
+ t = linspace(0,2*pi,1e5);
+ LocationCalc = @(Degree, Minute) Degree + Minute./60;
+ CircleCalcX = @(x,r) x + r*cos(t);
+ CircleCalcY = @(y,r) y + r*sin(t);
+ ```
1498
+
+ ```matlab
+ CountyLocationYDegree = [29 29 28 28 29 29];
+ CountyLocationYMinute = [10 16 53 19 16 16];
+ CountyLocationXDegree = [81 82 82 82 82 82];
+ CountyLocationXMinute = [31 07 31 20 07 07];
+ Population = [498036 329628 141416 471028 329628 329628];
+ CriminalRate = [0.57 1.35 0.26 0.43 1.35 1.35];
+ XLocation = LocationCalc(CountyLocationXDegree, CountyLocationXMinute);
+ YLocation = LocationCalc(CountyLocationYDegree, CountyLocationYMinute);
+ LocationSize = length(XLocation);
+ Distances = zeros(LocationSize);
+ for i = 1:1:LocationSize,
+     for j = 1:1:LocationSize,
+         Distances(i,j) = sqrt((XLocation(i)-XLocation(j))^2+(YLocation(i)-YLocation(j))^2);
+     end;
+ end;
+ MaxDistance = max(max(Distances));
+ Xmin = min(XLocation) - MaxDistance; Xmax = max(XLocation) + MaxDistance;
+ Ymin = min(YLocation) - MaxDistance; Ymax = max(YLocation) + MaxDistance;
+ Xrange = linspace(Xmax,Xmin,1e2);
+ Yrange = linspace(Ymin,Ymax,1e2);
+ Xaverage = (Xmin+Xmax)/2;
+ Yaverage = (Ymin+Ymax)/2;
+ MaxRange = max(Ymax-Ymin,Xmax-Xmin);
+ map = meshgrid(Xrange,Yrange);
+ mapValue = meshgrid(Xrange,Yrange);
+ SpotValue = meshgrid(Xrange,Yrange);
+ Xinterval = Xrange(2)-Xrange(1);
+ Yinterval = Yrange(2)-Yrange(1);
+ for i = 1:1:1e2
+     for j = 1:1:1e2
+         Coverage = ((Xrange(i)-XLocation).^2 + (Yrange(j)-YLocation).^2) - MaxDistance^2 > 0;
+         mapValue(i,j) = sum(Coverage);
+         % for k = 1:1:LocationSize
+         %     if (sqrt((Xrange(i)-XLocation(k)).^2+(Yrange(j)-YLocation(k)).^2) <= 3*sqrt(Xinterval^2+Yinterval^2))
+         %         SpotValue(i,j) = 10;
+         %     else
+         %         SpotValue(i,j) = mapValue(i,j);
+         %     end;
+         % end;
+     end;
+ end;
+ % x = zeros(length(t),LocationSize);
+ % y = zeros(length(t),LocationSize);
+ figure(1)
+ axis([(Xaverage-MaxRange/2) (Xaverage+MaxRange/2) (Yaverage-MaxRange/2) (Yaverage+MaxRange/2)]);
+ for i = 1:1:LocationSize
+     x = CircleCalcX(XLocation(i),MaxDistance);
+     y = CircleCalcY(YLocation(i),MaxDistance);
+     % contourf(x,y)
+     plot(x,y);
+ end;
+ text(XLocation,YLocation,County);
+ plot(XLocation,YLocation,'');
+ surf(Xrange,Yrange,mapValue);
+ figure(2)
+ hold on
+ ```
MCM/2010/B/8479/8479.md ADDED
@@ -0,0 +1,476 @@
1
+ A killer is at large, and the police need our help! Lovers of mystery fiction might jump at the chance to engage in a hot pursuit of forensic clues and shady characters, but our reality is not so glamorous. Our task: given a set of crime site locations and times, narrow down a killer's location and try to predict where the next murder might be.
2
+
3
+ We present two mathematical models to assist in the investigation. The models take the locations of previous crime sites and the times that they occurred as variables. These data are few compared to the seemingly infinite possibilities of the criminal's actions, but we know at least one thing: the criminal will do his best to avoid being caught. We infer then, in accordance with past research, that he will kill where he knows the surroundings. This motivates the assumption that the killer is more likely to kill closer to his home or to places where he has killed before. We further assume that the future locations of attacks depend only on the location of the killer's home and the location and relative time of past crimes.
4
+
5
+ The first of the two models, the Centrographic model, builds upon earlier work by LeBeau. It uses an analogy to kinematics to derive the location and movement of the "centroid" of a series of crime sites. We consider the change in the location of the centroid over time to predict future crime site locations. In other words, we model the series to have a "velocity" and "momentum" and use this information to predict the future location of the centroid.
6
+
7
+ Our second model, the Rational Serial Criminal model, uses methods from microeconomic theory to model the criminal as a rational agent. This model formalizes the process of looking at a series of crime locations and assessing where the criminal prefers to find victims or dump bodies. Using data on a series of crimes, the model outputs contour surfaces representing areas where the killer is more or less likely to live. From here, we extend the model to predict where a killer is likely to strike next.
8
+
9
+ We run simulations on crime site data for different serial killers and compare our location predictions to the actual next kill in the series. The centrographic model performs well as measured by the error distance. Unfortunately, the model is sensitive to a parameter that must be determined for each individual series. The rational choice model also performs well predicting home location as measured by the Hit Score $\%$, a metric which compares the efficiency of a search informed by prediction to a random search.
10
+
11
+ Our model could be another useful tool for police chasing a serial killer. While we do not incorporate data such as population density or geographic features, we believe that police officers will have no trouble filling in these gaps. The model provides a good starting point when data are too few to utilize more complex methods.
12
+
13
+ # From Kills to Kilometers Using Centrographic Techniques and Rational Choice Theory for Geographical Profiling of Serial Killers
14
+
15
+ February 22, 2010
16
+
17
+ # Abstract
18
+
19
+ We present two mathematical models to assist in the investigation of a serial killer. The models take the locations of previous crime sites and the order that they occurred as variables. These data are few compared to the seemingly infinite possibilities of the criminal's actions, but at least one thing is known: the criminal will do his best to avoid being caught. We infer then, in accordance with past research, that he will kill where he knows the surroundings. This motivates the assumption that the killer is more likely to kill closer to his home or places he has killed before. We further assume that the future locations of attacks depend only on the location of the killer's home and the location and relative time of past crimes.
20
+
21
+ The first of the two models, the "centrographic" model, builds upon earlier work by LeBeau. It uses an analogy to kinematics to derive the location and movement of the "centroid" of a series of crime sites. We consider the change in the location of the centroid over time to predict future crime site locations. In other words, we model the series to have a "velocity" and "momentum" and use this information to predict the future location of the centroid.
22
+
23
+ Our second model, the Rational Serial Criminal model, uses methods from microeconomic theory to model the criminal as a rational agent. This model formalizes the process of looking at a series of crime locations and assessing where the criminal prefers to find victims or dump bodies. Using data on a series of crimes, the model outputs contoured surfaces representing areas where the killer is more or less likely to live. From here, we extend the model to predict where a killer is likely to strike next.
24
+
25
+ We run simulations on crime site data for different serial killers and compare our location predictions to the actual next kill in the series. The centrographic model performs well as measured by the error distance. Unfortunately, the model is sensitive to a parameter that must be determined for each individual series. The rational choice model also performs well predicting home location as measured by the Hit Score $\%$ , a metric which compares the efficiency of a search informed by prediction to a random search.
26
+
27
+ # 1 Introduction
28
+
29
+ When a serial killer is on the prowl<sup>1</sup>, the police deploy every available resource to stop the killings and bring the murderer to justice. Serial killers provide an interesting challenge to geographic profilers, due to the relative paucity of data on these criminals and their unique geographic behavior. Statistical prediction methods in widespread use for other types of serial crimes such as burglaries [8] or auto thefts [39] are less useful for finding serial killers, as these methods rely on a large initial data set to generate predictions.
30
+
31
+ We attempt to predict a serial killer's future behavior and location using data on their crime sites and initial assumptions about their behavior. In Section 3, we adapt two complementary models to predict future crime sites and point to the location of a serial killer's anchor point [32].
32
+
33
+ We tested our models by implementing each one in a computer simulation, and running them on partial data sets from two historical serial killers. We describe our data collection process and provide examples in Section 5. We compared our models' predictions for the locations of later crimes to the actual locations where later crimes in each series occurred. We describe the results of these simulations in Section 6. Finally, in Section 7, we note some shortcomings of our approach and propose directions for further research.
34
+
35
+ # 2 Background Information
36
+
37
+ # 2.1 Serial Killing
38
+
39
+ The FBI defines serial murder as three or more killings with "cooling-off" periods in between [32, 27]. Experience shows that the behavior of such serial criminals follows some predictable patterns. Specifically, serial killers tend to operate near their homes, in areas with which they are familiar [32]. For example, one study of 126 U.S. serial killers found that $89\%$ of them lived within a circle drawn around the locations where they disposed of bodies [23]. The inclination of a serial killer to work close to a home base or anchor point [32] often leads to a spatial clustering of their crime sites. Therefore, by looking at the geographic locations of crimes, we can say something about where the criminal who committed them is likely to live or might strike next.
40
+
41
+ In the past two decades, "geographical profiling" has exploded, and many law-enforcement agencies now use spatial modeling techniques in the pursuit of serial criminals [33]. The most popular approach includes the assumption that the criminal lives near his crime scenes, and has not moved place of residence since the crime spree began [32, 33].
42
+
43
+ # 2.2 Previous Work
44
+
45
+ At first blush, criminals seem difficult, if not impossible, to predict. The criminal, however, must follow some sort of thought process in order to commit his acts and, as such, "crime pattern theory combines rational choice, routine activity theory, and environmental principles to explain the distribution of crimes" [32]. From these ideas, geographical profiling was developed. When analyzing serial crimes, few data are available other than those from the crime scene and the body's disposal site. This motivates Lundrigan and Canter's research into a serial killer's disposal site location choice, which statistically analyzed data on apprehended serial killers to determine how a killer's home location related to his body disposal sites [23]. Many factors can influence the serial killer's disposal site selection. In a study conducted in Germany, the median distance between the disposal site and the killer's home decreased as the killer's age increased, and a higher IQ corresponded to disposal sites farther from the home base, though the relationship between the number of murders and the distance from a killer's home to his disposal sites was not statistically significant [38].
46
+
47
+ Another method of determining the location of a serial killer is to consider the daily routines of the victims. Geographic profilers can use mapping software to determine where victims' routines overlap, and these locations can be interpreted as places where the killer might have seen the victims. These data, in turn, can be used to hypothesize the killer's daily whereabouts [37]. In an age where most people are connected to the rest of the world through cellular telephones and internet, it is possible now more than ever to gather objective data regarding where a person was at a certain time.
48
+
49
+ In modeling serial crime, a primary concern of geographical profiling is the prediction of areas at high risk of future crime, known colloquially as crime "hotspots". Past models for geographical profiling of serial offenders have shown varying results. Due to the nature of serial crimes as discrete events, the majority of methods for displaying and analyzing data on serial criminals utilize a technique known as point mapping [10]. A problem with simple point mapping, however, as described by Chainey et al., is that "it is difficult to clearly identify the location, relative scale, size and shape of hotspots when crime data are presented as points". As such, it becomes necessary to find alternate methods of display and analysis.
50
+
51
+ # 3 Models
52
+
53
+ # 3.1 A Centrographic Model
54
+
55
+ # 3.1.1 Background
56
+
57
+ A standard analytical measure in the geographical profiling toolbox is the spatial mean of point data, obtained by summing the geographic coordinates over all observations independently in each spatial dimension and dividing by the total number of observations [32]. This effectively generates a 'center' to the data set that can be used in conjunction with other methods to describe the concentration of a distribution. A long-utilized and relatively simple tool, the spatial mean remains in widespread use today in geographical profiling due to its ease of use. Despite the simplicity of calculation, previous studies have shown the spatial mean to perform comparably to alternative, more complex algorithms and even human judgment [39].
60
+
61
+ One drawback to the spatial mean is that it doesn't intrinsically use time to describe and forecast an evolving distribution such as is seen with serial killings. To utilize the temporal aspect of such a data set, we must extend the model. One extension that arises naturally is to consider the change in the spatial mean over time [21]. Previous work has used analogies to simple physical phenomena to come up with parallels to velocity (change in position over time), acceleration (change in velocity over time), and momentum (the product of velocity and mass) [32, 21]. In this model we will use several of these concepts to come up with a method of predicting the locations of future kills or crimes in a spree.
62
+
63
+ # 3.1.2 Assumptions
64
+
65
+ To justify a centrographic approach to geographical profiling, a few assumptions must be made regarding the behavior of serial killers:
66
+
67
+ - A serial killer is more likely to kill in an area he is familiar with (i.e., a place he lives near or has killed near before).
68
+ - The distribution of probable locations of next kill changes with time, even if the distribution of known kills does not.
69
+ - The order in which and time at which kill events in a distribution occur are a factor in determining probable locations of next kill, not just the spatial location.
70
+
71
+ # 3.1.3 Model
72
+
73
+ Define $\mathbf{X}_{K_j}$ to be the position vector corresponding to the site of the $j$ -th observed crime, $K_{j}$ . If we consider the reach or area of activity of a serial killer to be represented by the distribution of the observed events, we can define the 'center of mass' or centroid of the area of activity at the time of the $j$ -th event, $\mathbf{C}_j$ , to be the geographic mean of $\mathbf{X}_{K_1}\ldots \mathbf{X}_{K_j}$ ,
74
+
75
+ $$
76
+ \mathbf {C} _ {j} = \sum_ {i = 1} ^ {j} \frac {\mathbf {X} _ {K _ {i}}}{j}
77
+ $$
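The running centroid above is cheap to maintain incrementally rather than recomputing the full sum after each event. A minimal Python sketch (the paper's own code appendix is MATLAB; the sample coordinates below are made up):

```python
# Running centroid of crime sites: C_j = (1/j) * sum_{i<=j} X_{K_i}.
# Illustrative sketch; coordinates are arbitrary planar (x, y) pairs.

def running_centroids(sites):
    """Return the centroid C_j after each of the j = 1..n observed events."""
    centroids = []
    sx = sy = 0.0
    for j, (x, y) in enumerate(sites, start=1):
        sx += x
        sy += y
        centroids.append((sx / j, sy / j))
    return centroids

sites = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(running_centroids(sites))  # final centroid is (1.0, 1.0)
```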
78
+
79
+ Utilizing this, a reasonable estimate for a probable area of future encounter with the criminal, $A$ , can be obtained by finding the mean distance from $\mathbf{C}_j$ to each event $\mathbf{K}_i$ in each spatial direction and defining $A$ to be the area within the polygon formed by extending from the mean this distance in each corresponding direction.
80
+
81
+ Since $\mathbf{C}_j$ is a spatial location that changes with time, it is natural to calculate the change in position per unit time—the geographic analogue of the velocity of $\mathbf{C}$ at the time of the $j$ -th event, $t_j$ [32],
82
+
83
+ $$
84
+ \mathbf {V} _ {C _ {j}} = \frac {d \mathbf {C} _ {j}}{d t},
85
+ $$
86
+
87
+ as well as an analogue of the 'linear momentum' of $\mathbf{C}$ at time $t_j$ [21],
88
+
89
+ $$
90
+ \mathbf {P} _ {C _ {j}} = j \mathbf {V} _ {C _ {j}},
91
+ $$
92
+
93
+ where the 'mass' of the distribution is equal to the current number of serial events, $j$ .
94
+
95
+ The concept of linear momentum $\mathbf{P}_{C_j}$ is important to our model as it allows for a mathematical analysis of the spatial direction in which the kill spree seems to be moving, relating $\mathbf{P}_{C_j}$ to $\mathbf{P}_{C_{j - 1}}$ by considering the law of conservation of momentum. We define the velocity of serial event $K_{j}$ as the distance between event $K_{j}$ and the previous centroid divided by the time between event $K_{j}$ and $K_{j - 1}$
96
+
97
+ $$
98
+ \mathbf{V}_{K_j} = \frac{\mathbf{X}_{K_j} - \mathbf{C}_{j-1}}{t_j - t_{j-1}}.
99
+ $$
100
+
101
+ It follows from the definition of mass above that the mass of a single event is 1, and thus the momentum of a single event $K_{j}$ is equal to its velocity,
102
+
103
+ $$
104
+ \mathbf {P} _ {K _ {j}} = \mathbf {V} _ {K _ {j}}.
105
+ $$
106
+
107
+ Modeling the state of $C_j$ at time $t_j$ as the result of a completely inelastic collision between $C_{j-1}$ and $K_j$ (i.e., $K_j$ is absorbed and joins $C_{j-1}$ to form $C_j$ ), conservation of momentum tells us that
108
+
109
+ $$
110
+ \mathbf {P} _ {C _ {j}} = j \mathbf {V} _ {C _ {j}} = \mathbf {P} _ {C _ {j - 1}} + \mathbf {P} _ {K _ {j}} = (j - 1) \mathbf {V} _ {C _ {j - 1}} + \mathbf {V} _ {K _ {j}},
111
+ $$
112
+
113
+ or,
114
+
115
+ $$
116
+ \mathbf {V} _ {C _ {j}} = \frac {(j - 1) \mathbf {V} _ {C _ {j - 1}} + \mathbf {V} _ {K _ {j}}}{j},
117
+ $$
118
+
119
+ where it follows trivially that
120
+
121
+ $$
122
+ \mathbf {V} _ {C _ {1}} = 0,
123
+ $$
124
+
125
+ which follows expectations for a spree consisting of only one kill (non-serial).
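The recursion above can be implemented directly: each new event contributes its own "momentum," and the centroid velocity is the mass-weighted average. A Python sketch of this inelastic-collision update, with made-up event coordinates and times (the paper's own implementation is in MATLAB):

```python
# Centroid "velocity" via the inelastic-collision update:
#   V_Kj = (X_Kj - C_{j-1}) / (t_j - t_{j-1})
#   V_Cj = ((j-1) * V_C(j-1) + V_Kj) / j,  with V_C1 = 0.
# Illustrative sketch; events are ((x, y), t) pairs.

def centroid_velocity(events):
    (x, y), t_prev = events[0]
    cx, cy = x, y          # C_1 is the first event itself
    vx = vy = 0.0          # V_C1 = 0 for a single (non-serial) kill
    for j, ((x, y), t) in enumerate(events[1:], start=2):
        vkx = (x - cx) / (t - t_prev)   # event velocity V_Kj
        vky = (y - cy) / (t - t_prev)
        vx = ((j - 1) * vx + vkx) / j   # conservation-of-momentum update
        vy = ((j - 1) * vy + vky) / j
        cx += (x - cx) / j              # incremental centroid update
        cy += (y - cy) / j
        t_prev = t
    return (vx, vy)
```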
126
+
127
+ To make a prediction for the next kill site in a series $K_{j+1}$ , we can incorporate both the current centroid $\mathbf{C}_j$ and the current velocity of the centroid $\mathbf{V}_{C_j}$ by defining $A(t)$ as the polygon $A$ defined above with a center dependent on time according to $\mathbf{V}_{C_j}$ ,
128
+
129
+ $$
130
+ \mathit{Center}_{A(t)} = \mathbf{C}_j + f(t)\,\mathbf{V}_{C_j} + g(t)\,\hat{\mathbf{a}}_j,
131
+ $$
132
+
133
+ where $\hat{\mathbf{a}}_{\mathbf{j}}$ is an estimate of the acceleration of $C_j$ based on a linear regression of velocity data obtained from $K_{1}\ldots K_{j}$ and $f(t)$ and $g(t)$ are parameter coefficients chosen to match the constraints:
134
+
135
+ $$
136
+ \lim_{t \rightarrow \infty} \frac{d(\mathit{Center}_{A(t)})}{dt} = 0 \tag{1}
137
+ $$
138
+
139
+ $$
140
+ f(t) \approx t \quad \text{for } t < \bar{t} \tag{2}
141
+ $$
142
+
143
+ $$
144
+ g(t) \approx t^2 / 2 \quad \text{for } t < \bar{t} \tag{3}
145
+ $$
146
+
147
+ where $\bar{t}$ is the mean time between $K_{i}$ 's.
148
+
149
+ Constraint 1 is necessary to keep our prediction reasonably centered on the observed events over long spans of time, and conditions 2 and 3 are necessary for the kinematic analogue. One possible choice of $f(t)$ and $g(t)$ explored in this paper is
150
+
151
+ $$
152
+ f (t) = \frac {2 t}{1 + e ^ {\beta t}}, \qquad g (t) = \frac {2 t ^ {2}}{(1 + e ^ {\beta t}) ^ {2}}
153
+ $$
154
+
155
+ for some real constant $\beta > 0$ .
156
+
157
+ Note that the final model yields a prediction $A(t)$ that is a function of time, consistent with our assumption of an evolving killer. To get a general prediction, it is possible to evaluate $A(\bar{t})$ , yielding the prediction for where the next kill will occur should it happen after the mean time $\bar{t}$ .
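As a numerical sanity check on this choice of coefficients, the Python sketch below (the value of β and all vectors are illustrative, not from the paper) confirms that f and g behave like t and t²/2 near zero and vanish for large t, so the predicted center first drifts with the series' velocity and then settles back on the centroid:

```python
import math

# Time-dependent prediction center:
#   Center_A(t) = C_j + f(t) * V_Cj + g(t) * a_j,
# with the damped kinematic coefficients from the paper:
#   f(t) = 2t / (1 + e^{beta*t}),  g(t) = 2t^2 / (1 + e^{beta*t})^2.

def f(t, beta):
    return 2.0 * t / (1.0 + math.exp(beta * t))

def g(t, beta):
    return 2.0 * t ** 2 / (1.0 + math.exp(beta * t)) ** 2

def center_A(t, C, V, a, beta=1.0):
    """Predicted center at time t; C, V, a are planar tuples (made-up inputs)."""
    return tuple(c + f(t, beta) * v + g(t, beta) * ai for c, v, ai in zip(C, V, a))

# Conditions 2 and 3: near t = 0, f(t) ~ t and g(t) ~ t^2/2.
print(f(0.01, 1.0), g(0.01, 1.0))
# Condition 1: for large t both coefficients vanish, so Center_A(t) -> C_j.
print(f(100.0, 1.0))
```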
158
+
159
+ # 3.2 A Rational Choice Model
160
+
161
+ # 3.2.1 Background
162
+
163
+ A number of criminologists have suggested that much criminal behavior can be modeled as rational choices made by agents who derive some utility from their crimes [6, 12, 29]. On one hand, criminal activity has historically been seen as defying rationality [12]. If we grant, however, that a given rapist, burglar, or other criminal will rape, burglarize, or otherwise break the law, then their choices of whom to rape or which houses to burglarize may appear rational [16], and may in fact be the result of much planning and deliberation. This makes rational choice theory as implemented in economics particularly well-suited for our purpose here of looking at a collection of crimes to draw inferences about the criminal's future behavior.
164
+
165
+ Previous studies have applied economic theory to criminology by using Expected Utility Theory [40] and Prospect Theory [20] to model a criminal who weighs the benefits to himself of committing a crime against the odds of getting caught and its associated harm. While our model here is based on the idea from microeconomics of using observed revealed preference data to make inferences about unobserved preferences [34], it does not resemble a traditional economic model so nearly as do these earlier works.
166
+
167
+ The microeconomic model seeks to make predictions about the responses of dependent variables (such as quantities consumed of various goods) to possible
168
+
169
+ future values of independent variables (such as prices) based on past observations of both [34]. In this model, the agent selects a bundle of goods to purchase by assigning quantities to each available good; these quantities form a consumption vector. In our model, the agent assigns probabilities to each possible location for his next crime, forming a probability vector. The economic agent faces a scalar budget and a vector of prices. The dot-product of her chosen consumption vector and the environmental price vector must not exceed her budget. She is assumed to maintain a consistent ranking of all possible consumption bundles. For our criminal agent, the analogue of price is perceived risk of getting caught if he commits his next crime in a given location. While we cannot observe this the same way that we observe prices, it is analogous in that it differs across observations, since we expect the risks of getting caught in various locations to change each time the criminal acts (due to increased public awareness or police presence, e.g.). His (unrelated) constraint is that the entries in his probability vector must sum to 1. Rather than ranking consumption bundles, he ranks probability distributions.
170
+
171
+ Choosing a probability distribution as a means of agency has precedent in the notion of a mixed strategy in game theory [15]. We believe this reflects the nature of the actual choices the criminal makes, particularly about location. Geographical profiling expert D. Kim Rossmo notes that "for a direct-contact predatory crime to occur, the paths of the offender and victim must intersect in time and space, within an environment appropriate for criminal activity." [32]. Since the criminal cannot significantly control the flow of potential victims or the suitability of different locations, he does not have complete control over where and when he breaks the law. Rather, when he walks down a particular street, he is choosing to increase the probability that he will commit his next crime there.
172
+
173
+ Since a rational agent must have consistent preferences, if our serial criminal is to evolve, then each crime must change his incentive structure. We model this by including a perceived extra risk of capture at any given potential target as a function of the locations of previous acts: you can never go back to the scene of a perfect crime [11]. Since the overall risk of getting caught should already be a significant factor in the agent's preference for one crime scene over another, it is only this additional risk which we call analogous to price.<sup>2</sup>
174
+
175
+ When we talk about a "rational serial criminal", we do not mean to suggest that there are good reasons for committing series of crimes.<sup>3</sup> We merely hypothesize that by assuming this series of crimes to be determined by a consistent preference field [30], we may make useful predictions about the criminal. Additionally, when we talk about "utility" as a function of the probability vector described above, we do not mean to imply that committing crimes fulfills some desire of the criminal's to a variable extent depending on where he does it. The utility function is just a convenient mathematical tool, which is equivalent to making direct inferences from revealed preference [17].
+
+ <sup>2</sup> One could argue that a fully rational criminal adding up the costs of committing a crime at a particular place would consider the loss in welfare he would experience when it became riskier for him to commit other crimes there in the future. While technically true, we do not consider this point relevant to our analysis, or at all.
+
+ <sup>3</sup> But nor do we deny it.
182
+
183
+ To summarize, we construct a model of a rational serial criminal who controls his behavior by determining a probability distribution defined over a bounded geographical range of activity. This is the distribution of a random variable representing the location of his next crime. The model does not include the decision to commit a crime, as we are interested in modeling the behavior of a serial offender in the course of a spree. We assume the distribution to be static over the period between any two crimes, and to change instantaneously at the instant that a crime is committed.
184
+
185
+ # 3.2.2 Rational Choice: Theory
186
+
187
+ Divide the criminal's assumed range into $n$ cells, numbered 1 through $n$ . Included in our data, then, is a vector $\mathbf{K}$ , where each $K_{j} \in \{1, \dots, n\}$ denotes the cell in which the criminal committed his $j^{th}$ crime. Assume the criminal is a rational agent with a utility function
188
+
189
+ $$
190
+ U (\mathbf {p}, \mathbf {c}),
191
+ $$
192
+
193
+ where $\mathbf{p}$ is a vector defined over all cells of the grid such that $p_i =$ the probability that next crime will happen in cell $i$ , and $\mathbf{c}$ is a vector of the same dimension such that $c_{i} =$ the criminal's perception of the increased risk that he will be caught if he commits his next crime in cell $i$ .
194
+
195
+ He chooses $\mathbf{p}$ to maximize $U$ subject to the constraint
196
+
197
+ $$
198
+ \sum_ {i = 1} ^ {n} p _ {i} = 1.
199
+ $$
200
+
201
+ This yields the first-order conditions
202
+
203
+ $$
204
+ \begin{array}{l} \dfrac{\partial U}{\partial p_1}(p_1, \dots, p_n, c_1, \dots, c_n) = 0 \\ \qquad\qquad\vdots \\ \dfrac{\partial U}{\partial p_n}(p_1, \dots, p_n, c_1, \dots, c_n) = 0 \end{array}
205
+ $$
206
+
207
+ Assume that $U$ is well-behaved, so that we may invoke the Implicit Function Theorem to define $\mathbf{k}(\mathbf{c})$ such that
208
+
209
+ $$
210
+ \begin{array}{l} p_1 = k_1(c_1, \dots, c_n) \\ \qquad\vdots \\ p_n = k_n(c_1, \dots, c_n) \end{array}
211
+ $$
212
+
213
+ Assume that each time the criminal commits a crime, $\mathbf{c}$ changes, and therefore $\mathbf{k}$ changes. As the function $\mathbf{k}(\mathbf{c})$ is arbitrary, it may contain any number of parameters $\alpha$ : these may reflect any way in which we think the criminal prefers one location for his crime over another. It is these parameters which we hope to find.

We begin by assuming a form for $\mathbf{c}(X)$. It may be as simple as

$$
c_{i} \left(X_{j}\right) = \left\{ \begin{array}{ll} 1 & \text{if } i \in X_{j} \\ 0 & \text{otherwise,} \end{array} \right. \tag{4}
$$

where $X_{j} = \{i :$ the criminal committed one of his first $j$ crimes in cell $i\}$.

This form captures the belief, "If I commit a crime twice in the same spot, I will be arrested for sure, but otherwise my past crimes have no impact on my chances of being caught." Next, assume some form for the $k_{i}$ functions. For a criminal assumed to prefer working near a home base (allowing for a buffer zone), a possible one-parameter function would be

$$
k_{i} (\mathbf{c}, \alpha) = \left\{ \begin{array}{ll} 0 & \text{if } \alpha = i \\ \frac{1 - c_{i}}{d(\alpha, i)\, \bar{k}} & \text{otherwise,} \end{array} \right. \tag{5}
$$

where $\alpha$ represents the location of a home base, $d$ is a metric, and $\bar{k}$ is a normalization constant. Then for any value(s) of $\alpha$, we can calculate the probability $P(K_{j}|X_{j-1},\alpha)$ that the rational criminal characterized by the functions $\mathbf{k}$ and $\mathbf{c}$, having committed $j-1$ crimes in the set of locations $X_{j-1}$ where the first $j-1$ crimes occurred, would have committed his $j^{th}$ crime in the location $K_{j}$ where the $j^{th}$ crime actually took place. Finally, define

$$
P (\mathbf{K} | \alpha) = \prod_{j} k_{K_{j}} (\mathbf{c} (X_{j-1}), \alpha). \tag{6}
$$

Then $P(\mathbf{K}|\alpha)$ is the probability that the observed set of crimes $\mathbf{K}$ would have been committed by our rational criminal, given $\alpha$. By evaluating this function over a range of possible values for $\alpha$, we can get an idea of which parameter values are more probable than others.
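
To make this machinery concrete, the sketch below evaluates Equations 4, 5, and 6 on a hypothetical one-dimensional strip of $n$ cells, taking $d$ to be cell distance; the grid size, the kill sequence, and all function names here are invented for illustration only.

```python
import numpy as np

def c_vec(X, n):
    # Equation 4: perceived extra risk is 1 in any cell already hit, else 0
    c = np.zeros(n)
    if X:
        c[list(X)] = 1.0
    return c

def k_vec(c, alpha):
    # Equation 5: zero taste at the anchor itself, otherwise
    # (1 - c_i) / d(alpha, i), renormalized (the constant k-bar)
    n = len(c)
    d = np.abs(np.arange(n) - alpha).astype(float)
    d[alpha] = 1.0            # dummy value; the anchor cell is zeroed below
    k = (1.0 - c) / d
    k[alpha] = 0.0
    s = k.sum()
    return k / s if s > 0 else k

def likelihood(K, n):
    # Equation 6: P(K | alpha) for every candidate anchor cell alpha
    L = np.ones(n)
    for a in range(n):
        X = set()
        for cell in K:
            L[a] *= k_vec(c_vec(X, n), a)[cell]
            X.add(cell)
    return L

# Kills in cells 2, 4, 6 of a 9-cell strip; anchors at kill cells score zero
L = likelihood([2, 4, 6], 9)
best = int(np.argmax(L))
```

Plotting `L` over the grid is exactly the jeopardy-surface idea developed in the next section; here `best` is simply the highest-ranked candidate anchor cell.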

# 3.2.3 Rational Serial Criminal: Practice

The theory above could be used to model any serial criminal who commits crimes at different locations. In the current paper, we are interested in helping a police department catch a serial killer, and in this section we describe our specific implementation of a Rational Serial Criminal (RSC) model for this task.

When a serial killer is at large, law enforcement has two goals: to prevent future killings and to find the killer. The rational choice model can help with both of these goals, by making predictions about where the killer lives and about where he may kill again, both based on previous killings. Implementing the model requires making explicit our assumptions about the killer's behavior. For both purposes, we make the following assumptions:

- The killer prefers to kill near, but not at, an anchor point $\alpha$. This anchor point does not change over the course of the spree.
- The killer's taste for killing in a given location, $k_{i}$, is a function only of that location's distance from his anchor point and its distance from previous kill sites $X$.
- $k_{i}$ is given by Equations 4 and 5.

First, we estimate the likelihood that the killer lives in each grid-cell of the city. We make the following additional assumptions:

- The prior probability $P(\alpha)$ of the killer living in place $\alpha$ is equal for all $\alpha$.
- The prior probability $P(K_{j} = i)$ of the killer committing his $j^{th}$ murder in place $i$ is equal for all $i$.

Given these assumptions, Bayes' Formula

$$
P (\alpha = i | \mathbf{K}) = \frac{P (\mathbf{K} | \alpha = i)\, P (\alpha = i)}{P (\mathbf{K} | \alpha = 1)\, P (\alpha = 1) + \cdots + P (\mathbf{K} | \alpha = n)\, P (\alpha = n)}
$$

implies

$$
P (\alpha = i | \mathbf{K}) \propto P (\mathbf{K} | \alpha = i). \tag{7}
$$

Our goal, and perhaps the limit of our abilities, is to rank potential anchor points in order of likelihood. As such, Equation 7 is sufficient to say that the isopleth produced by plotting Equation 6 may be taken as a jeopardy surface, representing the relative likelihood of the killer residing within the region associated with each grid cell [32]. Such a plot could help the police find the killer faster by telling them where to look first [32].

Next, we estimate the likelihood that the killer will strike next in each grid-cell. For each grid cell $i$, this is equivalent to $p_i$. Again, we assume $P(\alpha | K_1, \ldots, K_{j-1}) \propto P(K_1, \ldots, K_{j-1} | \alpha)$ for all $\alpha$. Using data from $J - 1$ attacks, we estimate the likelihood that the $J^{th}$ attack will occur in any given cell $i$ by

$$
\hat{p}_{i} \propto \sum_{j = 1}^{J - 1} \sum_{\alpha = 1}^{n} k_{i} (\mathbf{c} (X_{j-1}), \alpha)\, P (K_{1}, \dots, K_{j-1} | \alpha),
$$

where $P(K_{1},\ldots ,K_{j - 1}|\alpha)$ is computed by Equation 6. Again, our goal is to rank possible locations for the next attack by likelihood, so we do not bother with normalization constants.
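
The unnormalized estimate $\hat{p}_i$ above can be sketched directly. The fragment below works on a hypothetical one-dimensional strip of cells with $d$ taken as cell distance (the real model uses a two-dimensional city grid); the kill sequence is invented for illustration.

```python
import numpy as np

def k_vec(c, alpha):
    # Equation 5 on a 1-D strip of cells (d = cell distance)
    n = len(c)
    d = np.abs(np.arange(n) - alpha).astype(float)
    d[alpha] = 1.0                 # dummy; the anchor cell is zeroed below
    k = (1.0 - c) / d
    k[alpha] = 0.0
    s = k.sum()
    return k / s if s > 0 else k

def predict_next(K, n):
    # Unnormalized p-hat: for each prefix K_1..K_{j-1} and each anchor alpha,
    # add k(c(X_{j-1}), alpha) weighted by the prefix likelihood (Equation 6)
    p_hat = np.zeros(n)
    for j in range(1, len(K) + 1):
        prefix = K[:j - 1]
        c = np.zeros(n)
        if prefix:
            c[list(set(prefix))] = 1.0
        for a in range(n):
            L, X = 1.0, set()
            for cell in prefix:    # Equation 6 for this prefix
                cx = np.zeros(n)
                if X:
                    cx[list(X)] = 1.0
                L *= k_vec(cx, a)[cell]
                X.add(cell)
            p_hat += L * k_vec(c, a)
    return p_hat                   # only the ranking matters

nxt = predict_next([2, 4, 6], 9)   # rank cells for a hypothetical 4th attack
```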

# 5 Data

To calibrate and test our models, we constructed data sets from data on three real serial killers: David Berkowitz, the "Son of Sam"; Angelo Buono and Kenneth Bianchi, the "Hillside Strangler" (police originally thought there was a single killer); and John Allen Muhammad, the "D.C. Sniper". For each one, we began with a list of the killer's victims taken from Wikipedia [3, 4, 2], and then filled in details for each murder or attempted murder with details found in newspaper articles from the time. The Berkowitz and Sniper data sets included addresses where people were shot, while the Hillside data set included locations where strangled bodies were found. For calculations, we converted the street addresses to latitude-longitude coordinate pairs (Table 1).

<table><tr><td>Kill #</td><td>Longitude</td><td>Latitude</td><td>Relative Day</td><td>Victim(s)</td><td>Address</td></tr><tr><td>1</td><td>-73.833846°</td><td>40.847067°</td><td>0</td><td>Donna Lauria and Jody Valenti</td><td>2860 Buhre Ave, Bronx, NY</td></tr><tr><td>2</td><td>-73.804809°</td><td>40.768099°</td><td>25</td><td>Carl Denaro and Rosemary Keenan</td><td>160th St and 33rd Ave, Queens, NY</td></tr><tr><td>3</td><td>-73.706561°</td><td>40.738631°</td><td>121</td><td>Joanne Lomino and Donna DeMasi</td><td>83-31 262nd St, Queens, NY</td></tr><tr><td>4</td><td>-73.845048°</td><td>40.718511°</td><td>185</td><td>Christine Freund and John Diel</td><td>1 Station Square, Queens, NY</td></tr><tr><td>5</td><td>-73.847051°</td><td>40.718598°</td><td>222</td><td>Virginia Voskerichian</td><td>4 Dartmouth St, Queens, NY</td></tr><tr><td>6</td><td>-73.835418°</td><td>40.847445°</td><td>262</td><td>Alexander Esau and Valentina Suriani</td><td>1878 Hutchinson River Parkway, Bronx, NY</td></tr><tr><td>7</td><td>-73.771343°</td><td>40.758704°</td><td>332</td><td>Sal Lupo and Judy Placido</td><td>45-39 211th St, Queens, NY</td></tr><tr><td>8</td><td>-74.011712°</td><td>40.612735°</td><td>367</td><td>Stacy Moskowitz and Robert Violante</td><td>86th St and 14th Ave, Brooklyn, NY</td></tr></table>

Table 1: Example data set: Berkowitz murders [36, 3]

# 6 Results

# 6.1 Centrographic Model

To test the centrographic model's predictive capabilities, we implemented it via a Python script and ran it on the data from the Hillside kills sequentially, using the first $j - 1$ kills to predict the $j^{th}$, starting with $j = 4$ (Figure 1). As the output from the model changes with time, we used the actual time of the subsequent kill $K_{j}$ in evaluating the prediction function. In our simulations we adopted the value $\beta = \frac{\bar{t}_j}{\ln 15}$, where $\bar{t}_j$ is the mean time between kills up to kill $j$. In practice, a value of $\beta$ must be determined from the previous crime data for the specific series.
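
The full centrographic update (the time-weighting functions and the $\beta$ parameter) is specified earlier in the paper and not reproduced here; as a minimal sketch of the sequential prediction step, the fragment below tracks an unweighted centroid of the kills so far and extrapolates it at constant velocity to the time of the next kill. All data are synthetic, and the simplifications (no weighting, no acceleration term) are ours.

```python
import numpy as np

def centroid_track(points):
    # Centroid of the first m kills, for m = 2 .. number of kills
    return [points[:m].mean(axis=0) for m in range(2, len(points) + 1)]

def predict_center(points, times, t_next):
    # Estimate centroidal velocity from the last two centroids and
    # extrapolate to t_next (the full model adds beta-weighting and a
    # regression-based acceleration term)
    cs = centroid_track(points)
    tc = times[1:]                 # time at which each centroid is current
    v = (cs[-1] - cs[-2]) / (tc[-1] - tc[-2])
    return cs[-1] + v * (t_next - tc[-1])

# Synthetic spree drifting east: kills at x = 0, 1, 2, 3, one every 10 days
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
ts = np.array([0.0, 10.0, 20.0, 30.0])
pred = predict_center(pts, ts, t_next=40.0)
```

The predicted area of the next kill is then the diamond within one mean distance of `pred` along each axis, as in Figure 1.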

To measure the accuracy of the model, we calculated the distance between the center of the area of the probable next kill and the actual location of the next kill (Table 2). The error distance has been criticized in past works [32], but recent comparisons between the error distance and alternative accuracy measures have shown it to perform comparably [39]. The metric we use in calculating error distance is the L1 or Manhattan metric,

$$
d_{1} (p, q) = \| p - q \|_{1} = | p_{x} - q_{x} | + | p_{y} - q_{y} |.
$$

Our choice of the Manhattan metric arises from previous work indicating that it most accurately models actual travel distance in urban settings, a more important factor than strict Euclidean distance in geographical profiling [32].
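
Since the data are latitude-longitude pairs, computing an error distance in kilometers requires a degrees-to-kilometers conversion; the equirectangular approximation below is our assumption (the paper does not specify the conversion used).

```python
import math

def error_distance_km(pred, actual):
    # L1 (Manhattan) error distance between two (longitude, latitude) points,
    # converting degrees to km locally: about 111.32 km per degree of
    # latitude, with longitude scaled by cos(latitude)
    lon1, lat1 = pred
    lon2, lat2 = actual
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dx = abs(lon1 - lon2) * 111.32 * math.cos(mean_lat)
    dy = abs(lat1 - lat2) * 111.32
    return dx + dy

# One degree of latitude apart at the same longitude: about 111.32 km
d = error_distance_km((-73.80, 40.70), (-73.80, 41.70))
```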

To estimate the acceleration of the spatial centroid, we perform a linear regression on the calculated centroidal velocity data for each estimate, in both the longitudinal (Table 3) and latitudinal (Table 4) directions independently.
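
The regression step can be sketched as follows: fitting a straight line to one component of the velocity series gives the slope as the acceleration estimate $\hat{a}$, with $R^2$ measuring fit quality as in Tables 3 and 4 (the $p$-values reported there, e.g. from `scipy.stats.linregress`, are omitted in this sketch). The velocity series below is synthetic, not the Hillside values.

```python
import numpy as np

def centroid_acceleration(v, t):
    # Least-squares fit v(t) = a*t + b for one velocity component;
    # the slope a estimates the centroidal acceleration
    a, b = np.polyfit(t, v, 1)
    resid = v - (a * t + b)
    ss_tot = (v - v.mean()) @ (v - v.mean())
    r2 = 1.0 - (resid @ resid) / ss_tot
    return a, r2

# Synthetic velocity series growing linearly in time: acceleration 0.002
t = np.array([0.0, 10.0, 20.0, 30.0])
v = 0.002 * t + 0.10
a_hat, r2 = centroid_acceleration(v, t)
```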

<table><tr><td>Kill #</td><td>Distance from center of predicted area (km)</td></tr><tr><td>4</td><td>19.9032477532</td></tr><tr><td>5</td><td>13.1624956104</td></tr><tr><td>6</td><td>26.1455165479</td></tr><tr><td>7</td><td>12.1278772065</td></tr><tr><td>8</td><td>4.41086910109</td></tr></table>

Table 2: Distance between center of predicted area and actual location of subsequent kill, measured with the Manhattan metric. Hillside data set [19].

<table><tr><td>Kill #</td><td>\( \hat{\mathbf{a}}_{\mathbf{x}} \)</td><td>\( \mathbf{R^2} \)</td><td>p</td></tr><tr><td>4</td><td>0.00119566020314</td><td>1.0</td><td>N/A</td></tr><tr><td>5</td><td>0.00110767788734</td><td>0.99360376805</td><td>0.0720425343571</td></tr><tr><td>6</td><td>0.000787815342892</td><td>0.829782525827</td><td>0.170217474173</td></tr><tr><td>7</td><td>0.000494702112585</td><td>0.592338457701</td><td>0.292588804185</td></tr><tr><td>8</td><td>0.000422872375405</td><td>0.571437451732</td><td>0.236142632648</td></tr></table>

Table 3: Linear regression data for the centrographic model, $x$-direction (longitudinal). Hillside data set [19].

<table><tr><td>Kill #</td><td>\( \hat{\mathbf{a}}_{\mathbf{y}} \)</td><td>\( \mathbf{R}^2 \)</td><td>p</td></tr><tr><td>4</td><td>0.000665961218837</td><td>1.0</td><td>N/A</td></tr><tr><td>5</td><td>0.000983419577946</td><td>0.90783947974</td><td>0.275461438702</td></tr><tr><td>6</td><td>0.00105866206185</td><td>0.93384684203</td><td>0.0661531579701</td></tr><tr><td>7</td><td>0.000983719311558</td><td>0.935632003161</td><td>0.0194133564982</td></tr><tr><td>8</td><td>0.000950274912901</td><td>0.943853948827</td><td>0.00464007177548</td></tr></table>

Table 4: Linear regression data for the centrographic model, $y$-direction (latitudinal). Hillside data set [19].

# 6.2 Rational Serial Criminal Model

Like Rossmo's criminal geographic targeting (CGT) model [32], the RSC model yields a jeopardy surface representing the estimated relative probabilities that the killer lives within each grid cell on a map. For this to be useful, it should allow law enforcement to search for the killer more efficiently. We tested our results using Microsoft Excel macros in the same way that Rossmo tested the CGT model: by drawing a box called a "hunting area" around the attack sites, plotting the jeopardy surface based on the attack locations, and then asking the question: if the police were to search every cell in the hunting area in order of highest to lowest on the jeopardy surface, what percentage of the grid would they have to search before they got to the killer? This number is called the Hit Score % [32]. Multiplying this by the size of the hunting area yields the effective search area: the actual area the police would have to search, following the jeopardy surface, before finding the killer. This method of testing allows the modeler some degree of flexibility, in deciding which locations would have been counted as crime sites and in guessing where the killer would have been found.

![](images/34e63c213fc3c239c4b0173d6e32cd8cc8ca514242bf52a9fec18254003784a2.jpg)

![](images/507b98bd65e4eb2f0cf8800181c2eae2d1611db160f6097255418d235ae96788.jpg)

![](images/c88d3639a93188b34e18be2edbccc36fff18d22f146a4ae39cfb72a587975b19.jpg)

![](images/216642cbe564ac9232ed74629710ea6779cd8cbcb52f61bbf9d87bed833de6bf.jpg)

![](images/374addaae8c9c7099964b7fcddf060cb92fd6daf044a939350acdce62167734f.jpg)
Figure 1: Simulation results for the centrographic model with the Hillside Strangler data set [4]. The blue points represent the kills used to calculate the centroid, centroidal velocity, and centroidal acceleration. The red diamond represents the estimated area of the next probable event (within one mean distance of the centroid along each axis in each direction). The black point represents the actual location of the next event.

![](images/fd22e56e0d50339669ec2902717d2f12a95f35f426908e0bbb00cb227b590aeb.jpg)
Figure 2: Attack sites, jeopardy surface, and homes of David Berkowitz, the "Son of Sam," using the Berkowitz data set [24]. The quadrant coloring, ranked from least probable location to most probable, is: blue, red, green, purple.
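
The Hit Score % computation described here can be sketched in a few lines; the toy surface, home cell, and hunting-area size below are invented for illustration, not our actual Excel macros.

```python
import numpy as np

def hit_score_percent(jeopardy, home_cell):
    # Search grid cells from highest to lowest jeopardy score and report
    # the percentage of the hunting area covered on reaching the cell
    # (flat index) that contains the offender's home
    order = np.argsort(jeopardy.ravel())[::-1]            # best cells first
    rank = int(np.nonzero(order == home_cell)[0][0]) + 1  # 1-based position
    return 100.0 * rank / jeopardy.size

# Toy 2x2 surface: the home cell holds the 2nd-highest of 4 scores,
# so half the hunting area is searched before it is reached
surface = np.array([[0.1, 0.4], [0.3, 0.2]])
score = hit_score_percent(surface, home_cell=2)
search_area = score / 100.0 * 1414.0   # hypothetical 1,414 km^2 hunting area
```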

For David Berkowitz, we ran the RSC model with the locations of the eight shooting incidents attributed to him at the time of his capture [24]. The resulting jeopardy surface is shown in Figure 2, with locations of shootings and Berkowitz's last two residences overlaid. While on most occasions he shot or shot at two people, we counted each shooting only once. Berkowitz lived in the Bronx for several years, near his first and sixth crime scenes, but moved to Yonkers before shooting his first victims [25]. He later admitted to having stabbed two people near his Bronx home while living there [25], but newspaper accounts from the time [36, 24] show no indication that the stabbings had been connected to his shootings before his arrest. Therefore, we did not include them in our analysis, as their location would not have been available for consideration by law enforcement at the time. We computed two scores, one using a grid defined by the crime sites to look for Berkowitz's former house in the Bronx, and one using a grid contrived to include his Yonkers house to find him there. The results are tabulated in Table 5.

<table><tr><td>Home</td><td>Hunting Area</td><td>Hit Score %</td><td>Search Area</td></tr><tr><td>Bronx</td><td>1,414 km²</td><td>18.1%</td><td>255 km²</td></tr><tr><td>Yonkers</td><td>2,514 km²</td><td>63.2%</td><td>1,588 km²</td></tr></table>

Table 5: Performance of RSC model on data for David Berkowitz [24]

In October and November of 1977, the bodies of 11 strangled women were discovered in Los Angeles, all believed to be the victims of a single "Hillside Strangler" [19]. Using the exact same parameters and assumptions, we ran the RSC model on 9 of these locations (two of the bodies were found in the same location, which we counted only once; for one body we could not find an exact location of discovery). The resulting jeopardy surface is plotted in Figure 3, and the Hit Score % and accompanying data for the house of one of the two offenders, Angelo Buono [7], can be found in Table 6.

<table><tr><td>Hunting Area</td><td>Hit Score %</td><td>Search Area</td></tr><tr><td>2,721 km²</td><td>13.9%</td><td>379 km²</td></tr></table>

Table 6: Performance of RSC model on data for Buono & Bianchi [19]

There were two main differences between this data and the data for the Berkowitz shootings: first, these were locations where bodies were found dumped, not locations where killings actually took place. Second, the hunting zone around these bodies, determined by their location, stretched over twice as far north to south as it did east to west, while the Berkowitz hunting zone was essentially square. Nonetheless, the model still performed reasonably well on the Hillside Strangler data without modification.

To test the predictive powers of the RSC model, we used the jeopardy surface with the first six kills from the Berkowitz data set and the modified utility and cost functions (see Section 3) to try to predict the location of the 7th kill (Figure 4). In this case it does not make sense to use a Hit Score % to analyze accuracy, as the killer will not necessarily be stationed at the predicted site during the course of an extended search, but it is evident from the jeopardy surface that the model performed favorably, as the kill occurred in one of the regions of highest probability. Of course, this is just one data point, and a month later Berkowitz shot his final victims in Brooklyn: this point would not even have been on our map.

![](images/96b9cc0859781fde367ed9740e8a94521800f4eea59c6a1d55fbdf8c89d23ef8.jpg)
Figure 3: Attack sites, jeopardy surface, and home of Angelo Buono [7], one half of the "Hillside Strangler" duo. The quadrant coloring, ranked from least probable location to most probable, is: blue, red, green, purple.

![](images/d19a5eb9e41eb52df1d5e5f7615c4c704c8d75659c2afab6f466ad590b4561e0.jpg)
Figure 4: Prediction results with the RSC model for the 7th Berkowitz kill, using the first six kills from the Berkowitz data [24] to generate the jeopardy surface. The quadrant coloring, ranked from least probable location to most probable, is: blue, red, green, purple. The black point is the actual location.

# 6.3 Combined Models

As a brief example of using the RSC model and the Centrographic model in conjunction with each other, we used each model on the DC Sniper data set [2] and graphed the outputs (Figures 5 and 6).

![](images/204a65c8a7fdd06a4202eb9e93112c3f7e06fca0ca9843745194fabc1241d9f1.jpg)
Figure 5: Simulation results for the centrographic model with the DC Sniper data [2]. The blue points represent the kills used to calculate the centroid, centroidal velocity, and centroidal acceleration. The red diamond represents the estimated area of the next probable event (within one mean distance of the centroid along each axis in each direction). The black point represents the actual location of the next event.

![](images/dd5bc91e0cc10b136d30ab0eb101f5f817ebb29cebb0651e37a3456b6ba8cf9d.jpg)
Figure 6: Prediction results with the RSC model for the last DC Sniper kill, using the previous kills from the DC Sniper data [2]. The quadrant coloring, ranked from least probable location to most probable, is: blue, red, green, purple. The black point is the actual location.

# 7 Discussion

Any model of serial killer behavior must be somewhat ad hoc. While Rossmo's macrolevel serial killer dataset lists 225 cases of serial killers in 9 countries going back to the late nineteenth century, he found only 21 of these to be suitable for testing his model [31].

Looking at the data for the Centrographic model, we see in Figure 1 that the predictions seem to get better over time, meaning that as more data become available the model becomes more accurate. Indeed, the error distances (Table 2) broadly decrease as kills accumulate, and the mean error distance is only roughly $15.12\,\mathrm{km}$, a small distance considering the search area was the city of Los Angeles.

A key part of the Centrographic model is its use of a linear regression to estimate the acceleration, $\hat{\mathbf{a}}$, which raises the question of how valid a linear regression on the velocity data actually is. As is evident in Table 4, our test yielded a high coefficient of determination ($R^2 \approx 0.9$) for the Y-acceleration, $\hat{\mathbf{a}}_{\mathbf{y}}$, as well as small $p$-values for the later crimes, implying that the linear regression is a statistically significant model that accounts for a large part of the variation in the data. This indicates the linear regression to be a mathematically valid approach in this case. However, in the case of the X-acceleration, $\hat{\mathbf{a}}_{\mathbf{x}}$ (Table 3), the $R^2$ values were smaller and the $p$-values larger, yielding a statistically insignificant model that does not account for a large degree of the variation in the data.

The problems with the model's acceleration approximation illustrate once again the difficulty of performing statistical analysis on a small data set. Even though the $\hat{\mathbf{a}}_{\mathbf{x}}$ obtained from regression would seem to account for at least $60\%$ of the data variation, it could just be random chance. If we disregard the question of statistical significance for a moment and just look at the fit, however, the linear model does not seem to be enough in the case of $\hat{\mathbf{a}}_{\mathbf{x}}$. This indicates that perhaps it is necessary to reconsider the idea of a linear regression to calculate $\hat{\mathbf{a}}$: perhaps the idea of a centroidal acceleration of a data set is not entirely conducive to geographical profiling, or perhaps a higher-order regression is necessary, introducing analogues to physical concepts such as jerk (the change in acceleration over time) and jounce (the change in jerk over time)$^4$.

The Centrographic model performs well on the data available to us, but more extensive testing is needed to determine its validity. To improve the model, it would be advisable to conduct future research on many more data sets in order to determine appropriate choices of the $f(t)$ and $g(t)$ functions used. A weakness of the model as it stands is its sensitivity to the choice of $\beta$, so perhaps our choices of the functions in this simulation were ill-advised. Additionally, the model does not incorporate as much temporal modeling as it could, which could strengthen its predictive powers.

The Rational Serial Criminal model performed reasonably well on the data for which we tested it. With the Hillside Strangler data, the Hit Score % of our jeopardy surface for the location of the killer's home was only 13.9% (Table 6), meaning only 13.9% of the hunting area would need to be searched before finding the home of Angelo Buono, a vast reduction. With our choice of probability density function $k$, the RSC model made predictions qualitatively similar to those made by the CGT model [31]. In the case of the Berkowitz data, however, the Hit Score % for finding Berkowitz's home at the time of the murders was 63.2% (Table 5), worse than random chance. This is due to the fact that shortly before the murders took place, Berkowitz moved his place of residence. Using Berkowitz's old home, the RSC model yields a Hit Score % of 18.1%, a much more reasonable figure.

Conducting a $\chi^2$ goodness-of-fit test of the Hit Score % values from the three tests against the uniform probability density function implied that the data are not uniformly distributed at the $95\%$ level of significance. This suggests that the RSC model we developed is statistically superior to a random search for the location of the murderer's home.

The RSC model's performance in our tests was hampered by a few simplifying assumptions we made:

- We assumed the initial probability distribution over the hunting area for the location of a serial killer's home was uniform.
- We assumed the initial probability distribution over the hunting area for the location of a murder was uniform.

Obviously, as in the case of the Berkowitz data, this is not always the case: half of the area included in our simulation was underwater, indicating zero probability of the killer's home being in that area, and zero probability of the next kill occurring in much of that area<sup>5</sup>. To improve the RSC model, it is possible to modify the initial guess of the probability distribution for both of these cases based on simple factors such as population density or geographic landmarks (i.e., lakes and harbors).
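
As a sketch of that improvement, the uniform prior over anchor cells can be reweighted before applying Equation 7; the density values and water mask below are invented for illustration, not real geographic data.

```python
import numpy as np

def informed_prior(pop_density, water_mask):
    # Replace the uniform prior P(alpha) with one proportional to population
    # density, zeroed over water cells (a killer cannot live in the harbor);
    # the posterior is then this prior times the likelihood of Equation 6
    prior = np.where(water_mask, 0.0, pop_density).astype(float)
    return prior / prior.sum()

# Four cells: the last is underwater and gets zero prior mass
density = np.array([5.0, 20.0, 10.0, 0.0])
water = np.array([False, False, False, True])
prior = informed_prior(density, water)
```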

The real strength of our Rational Serial Criminal model is its theoretical indifference to the choice of utility function. This means that it can be adapted to the pursuit of almost any serial criminal. Besides the well-documented tendency for criminals to act locally, police investigating a particular fugitive may have any number of other ideas about the crook's preferences. If they can write these down as functions, the RSC model can help them as they develop a search plan based on their evidence and intuition. Since one might reasonably consider a group engaged in organized crime to be a single agent, one could also use the RSC model to investigate gang behavior, for example.

Independent of any specific investigation, we could improve the RSC model significantly by putting other common themes and population trends [28] about crime sites into functional form. For instance, criminals tend to prefer to act in areas which are like the areas they are familiar with [23]. If we included zoning and demographic data in our grid, we could include a term in the $\mathbf{k}$ function which operationalizes this prediction. We could also change the meaning of $p$ to include more sophisticated information about the victims, so that we could model a criminal who tended to find his victims in a particular region but then assaulted them or disposed of their bodies in more diverse locations [13]. This could augment the current method of using GIS software to determine how several victims' daily lives might have led them to interact with a killer, and from this information determining areas where the killer might spend a lot of time [37].

In Figures 5 and 6 we see the results of running both the RSC and Centrographic models on the DC Sniper data set. Using both methods generates two zones, each with theoretically high probability of encounter with the serial killer in question. Looking at the graphs, we see that the models did not predict the actual crime site. This can be explained by the nature of the data. In the RSC model's case, there is a crime site in the same location as a previous crime site, which violates the assumptions of the model. In the Centrographic model's case, the killer took an unexpected turn south, resulting in a crime site far from any of the others. Both of these problems stem from the size of the hunting area for the DC Sniper, which contains large areas of Maryland and parts of Virginia. This, however, is simply a characteristic of the DC Sniper data that makes it difficult to make predictions regarding his actions in general. Additionally, the DC Sniper data differ from the other sets in that many of the kills took place on the same day at different times, making some of the calculations involved in the Centrographic model difficult or nonsensical (i.e., predicting infinite velocity). With a more accurate data set (to the time of day, for example) it is possible the model would perform better.

To improve the way the models work together, a number of possible adaptations could be made in future models. As the RSC is a resource-intensive method, it is possible that using it for a large area with a fine enough quadrant resolution could simply be too slow. Since the Centrographic model theoretically narrows down the search window, it could be useful to consider using a variable quadrant resolution when analyzing large areas, where the resolution outside the area predicted by the Centrographic model is coarser than that inside, yielding a faster runtime while still providing a high level of accuracy in high-probability areas. Using the models in the other direction, probability data from the RSC model could be used to create a sort of resistance to the motion of the centroid in the Centrographic model, causing it to stay near regions of higher probability.

Overall, this model provides a starting point for a police search. It is meant to be another tool for the police, to give a general guideline of where to allocate police resources when very few data are available. In reality, police experience is impossible to replace with any kind of mathematical method, and when dealing with a subject as unpredictable as a serial murderer, experience is necessary to determine the most efficient way to catch the killer.

# 8 Executive Summary

This method is a computer program that takes crime site data (either body disposal sites or scenes of the crimes) in absolute coordinates and returns a prediction of the offender's residence and a prediction of the next crime site. This should be used for two things: to narrow the search for the offender in his daily routine and to narrow the search for the next crime site. It is possible that, after discovering a crime site, there is another crime site in a location that has not yet been discovered. Furthermore, a search that starts in the most probable area and radiates from this center is more likely to find the offender's residence, or next crime site, than a random search would [32]. The method will provide a starting point and an area on which a police force can concentrate its efforts in determining both the location of an offender and where the next crime site might be.

In employing the method, the police force makes the same assumptions that are involved in developing the program. The assumptions made are:

- The series will occur in an area that the offender is familiar with, meaning a place where he lives, has lived, or has committed crimes before.
- The most probable location of the next crime site can change with time, even without new data being acquired regarding former sites in the series.
- The order and time at which former crime sites occur affects the location of the predicted next crime site in the series.
- Given no crime site data, the offender is equally likely to live or commit a crime at any location.
417
+ In using the program to predict the offender's home, it is assumed that the offender has a home base within or close to the crime site, as opposed to commuting to the area of the crime site. This assumption can be relaxed when predicting where the next crime site might be, as this prediction is made solely on earlier crime site data and thus does not offer additional information. An example of when the method would not apply is when an offender picks up prostitutes in a red light district in order to kill them. If, as in the John Wayne Gacy Jr. case, the crime sites all share a location the model will not apply as no usable predictions could be gleaned from that data.
418
+
+ The program utilizes two different types of models to make predictions: a centrographic model and a rational choice model. The centrographic model has an analogy in physics: the crime sites are treated as point masses in a system, and the "centroid" is determined by finding the center of mass of the system. This model outputs a graph that includes the points of the crime sites and a diamond; within the diamond lies the prediction for the next crime site. The rational choice model has an analogy in economics. It uses the idea that the offender chooses the location of the crime site to satisfy his personal goals (chiefly committing another crime) while minimizing his risk of being caught. Thus, it is assumed that the series will continue and that no two crime sites will share the same location.
+
+ The program includes two versions of the rational choice model: one predicts where the offender lives, and the other predicts the location of the next crime site. In both cases, the model outputs a contour plot whose peaks correspond, respectively, to high probability of the offender's home base or of the next crime site. There will always be a valley at the location of a previous crime site. When determining where to search for the next crime site, the contour plot from the rational choice model and the plot from the centrographic model can be used in conjunction to narrow the initial search area. A key item to note is that the centrographic model makes use of a parameter, $\beta$, that is determined from the known crime times and must be derived individually for each serial killer. An example value might be $\beta = \frac{\bar{t}}{\ln 15}$, where $\bar{t}$ is the mean time between crimes.
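+ The center-of-mass step of the centrographic model, with time-dependent weighting through $\beta$, can be sketched as below. This is a minimal illustration, not the original program: the coordinates, times, and the exponential decay form of the weights are assumptions for the example.
+
+ ```python
+ import math
+ 
+ def centroid_prediction(sites, times, beta):
+     """Weighted center of mass of crime sites.
+ 
+     sites: list of (x, y) coordinates; times: elapsed time since each
+     crime (same units as beta). More recent sites (smaller t) receive
+     more weight via an assumed exponential decay exp(-t / beta).
+     """
+     weights = [math.exp(-t / beta) for t in times]
+     total = sum(weights)
+     cx = sum(w * x for w, (x, y) in zip(weights, sites)) / total
+     cy = sum(w * y for w, (x, y) in zip(weights, sites)) / total
+     return cx, cy
+ 
+ # Example: beta chosen as (mean time between crimes) / ln 15, per the text.
+ times = [30.0, 21.0, 14.0, 7.0, 0.0]          # days elapsed since each crime
+ gaps = [times[i] - times[i + 1] for i in range(len(times) - 1)]
+ beta = (sum(gaps) / len(gaps)) / math.log(15)
+ sites = [(0.0, 0.0), (2.0, 1.0), (3.0, 3.0), (5.0, 2.0), (6.0, 4.0)]
+ print(centroid_prediction(sites, times, beta))
+ ```
+
+ With this decay the prediction is pulled strongly toward the most recent sites, which matches the assumption above that the most probable next location changes with time even without new data.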
+
+ The method will always output graphs that can be used to begin a police search, but as it is a computerized process, it responds only to the data set provided. Thus, there is an element of subjectivity in determining which crime sites in a given case are useful for predictions [32]. When predicting where the offender lives, the following guidelines should be used when inputting crime site data:
+
+ - A minimum of five crime sites should be used in order to provide a basis for the prediction. The program will work with as few as three crime sites, but this will not greatly improve the search area. Furthermore, invoking the program assumes that the offender has not changed home base since the series began; otherwise, more crime sites are necessary for a valid prediction.
+ - Only crime sites that are accurately known should be input to the program. For example, reports from a victim's acquaintances as to where the victim was last seen, used to guess where the offender might have met the victim, should not be counted as valid input.
+ - If it is clear that two crime sites relate to the same crime (e.g., a site where the offender killed the victim and a body disposal site in a different location), one or the other location should be input into the model, consistently with the other crime sites [32].
+
+ When predicting the location of the next crime site, the previous guidelines also apply, though several crime sites in the same area at different points in time might be treated as distinct sites, as this might indicate where the offender's next act in the series will occur.
+
+ One of the most powerful tools to incorporate alongside this program is GIS mapping software. Such software preserves geographic perspective on the crime site data when weighing the predictions the program sets forth. Furthermore, the program does not account for any victim data that might be gathered during the investigation. Considering these two additional elements will lead to more efficient searches.
+
+ # References
+
+ [1] Case File - David Berkowitz. http://www.fortunecity.com/roswell/streiber/273/berkowitz.cf.htm.
+ [2] Wikipedia: Beltway sniper attacks. http://en.wikipedia.org/wiki/Beltway_sniper_attacks.
+ [3] Wikipedia: David Berkowitz. http://en.wikipedia.org/wiki/David_Berkowitz.
+ [4] Wikipedia: Hillside Strangler. http://en.wikipedia.org/wiki/Hillside_Strangler.
+ [5] AP, Rochester Jury Convicts Parolee in Serial Killings, The New York Times, (14 Dec 1990), p. B2.
+ [6] G. S. BECKER, Crime and punishment: An economic approach, The Journal of Political Economy, 76 (1968), pp. 169-217.
+ [7] J. BOREN, The Hillside Strangler Trial, Loy. L.A. L. Rev., 33 (2000), pp. 707-1849.
+ [8] K. J. BOWERS, S. D. JOHNSON, AND K. PEASE, Prospective Hot-Spotting, British Journal of Criminology, 44 (2004), pp. 641-658.
+ [9] D. CANTER, T. COFFEY, M. HUNTLEY, AND C. MISSEN, Predicting serial killers' home base using a decision support system, Journal of Quantitative Criminology, 16 (2000), pp. 457-478.
+ [10] S. CHAINEY AND J. RATCLIFFE, GIS and Crime Mapping.
+ [11] CONCRETE BLONDE, Scene of a perfect crime, in Recollection, Capitol, 1996.
+ [12] D. B. CORNISH AND R. V. CLARKE, eds., The Reasoning Criminal, New York: Springer-Verlag, 1986.
+ [13] C. DANIELL, Geographic profiling in an operational setting: the challenges and practical considerations, with reference to a series of sexual assaults in Bath, England, in Crime Mapping Case Studies, S. Chainey and L. Thompson, eds., Chichester: John Wiley & Sons, 2008.
+ [14] F. DOSTOEVSKY, Crime and Punishment, 1866.
+ [15] R. GIBBONS, Game Theory for Applied Economists, Princeton: Princeton University Press, 1992.
+ [16] T. HIRSCHI, Rational choice and social control theories of crime, in The Reasoning Criminal, D. B. Cornish and R. V. Clarke, eds., New York: Springer-Verlag, 1986.
+ [17] H. S. HOUTHAKKER, Revealed preference and the utility function, Economica, 17 (1950), pp. 159-174.
+ [18] T. JOHNSON, E. PORTERFIELD, V. HO, AND H. CASTRO, Victims' families face Green River Killer in court, Seattle Post-Intelligencer, (19 Dec 2003).
+ [19] J. JONES, Body of Woman Found in Brush, Los Angeles Times, (14 Nov 1977), p. A1.
+ [20] D. KAHNEMAN AND A. TVERSKY, Prospect theory: An analysis of decision under risk, Econometrica, 47 (1979), pp. 263-291.
+ [21] J. L. LEBEAU, The Methods and Measures of Centrography and the Spatial Dynamics of Rape, Journal of Quantitative Criminology, 3 (1987).
+ [22] N. LEVINE, Crime mapping and the CrimeStat program, Geographical Analysis, 38 (2006), pp. 41-56.
+ [23] S. LUNDRIGAN AND D. CANTER, Spatial Patterns of Serial Murder: An Analysis of Disposal Site Location Choice, Behavioral Sciences and the Law, 19 (2001), pp. 595-610.
+ [24] R. D. MCFADDEN, .44 Killer wounds 12th and 13th victims, The New York Times, (1 Aug 1977), p. 45.
+ [25] C. MONTALDO, David Berkowitz - the Son of Sam. http://crime.about.com/od/murder/p/sonofsam.htm, 2010.
+ [26] S. P. MORSE, Converting addresses to/from latitude/longitude in one step, 2004.
+ [27] FEDERAL BUREAU OF INVESTIGATION, Serial killers in the United States, 1960 to present, 2003.
+ [28] D. O'SULLIVAN AND D. J. UNWIN, Geographic Information Analysis, Chichester: John Wiley & Sons, 2003.
+ [29] A. R. PIQUERO AND S. G. TIBBETTS, eds., Rational Choice and Criminal Behavior, New York: Routledge, 2002.
+ [30] D. ROSS, Integrating the dynamics of multi-scale agency, in The Oxford Handbook of Philosophy of Economics, D. Ross and H. Kincaid, eds., New York: Oxford University Press, 2009.
+ [31] D. K. ROSSMO, Geographic Profiling: Target Patterns of Serial Murderers, PhD thesis, Simon Fraser University, 1995.
+ [32] D. K. ROSSMO, Geographic Profiling, Boca Raton: CRC Press, 2000.
+ [33] D. K. ROSSMO AND L. VELARDE, Geographic profiling analysis: principles, methods, and applications, in Crime Mapping Case Studies, S. Chainey and L. Thompson, eds., Chichester: John Wiley & Sons, 2008.
+ [34] P. A. SAMUELSON, Foundations of Economic Analysis, Cambridge: Harvard University Press, 1947.
+ [35] P. SCHMIDT AND A. D. WITTE, An Economic Analysis of Crime and Justice, Orlando: Academic Press, 1984.
+ [36] M. SCHUMACH, 3 Murders of women since July 29 believed actions of same gunman, The New York Times, (11 Mar 1977), p. 83.
+ [37] C. SHAMBLIN, An application of geographic information systems (GIS): the utility of victim activity spaces in the geographic profiling of serial killers, Master's thesis, Louisiana State University, 2004.
+ [38] B. SNOOK, R. CULLEN, A. MOKROS, AND S. HARBORT, Serial murderers' spatial decisions: Factors that influence crime location choice, Journal of Investigative Psychology and Offender Profiling, 2 (2005).
+ [39] M. TONKIN, J. WOODHAMS, AND J. W. BOND, A theoretical and practical test of geographical profiling with serial vehicle theft in a U.K. context, Behavioral Sciences and the Law, (2009).
+ [40] J. VON NEUMANN AND O. MORGENSTERN, Theory of Games and Economic Behavior, Princeton: Princeton University Press, 1944.
MCM/2010/C/2010-ICM-Com-A/2010-ICM-Com-A.md ADDED
@@ -0,0 +1,51 @@
+ # Author's Commentary: The Marine Pollution Problem
+
+ Miriam C. Goldstein
+
+ Integrative Oceanography Division
+
+ Scripps Institution of Oceanography
+
+ University of California, San Diego
+
+ 9500 Gilman Drive, Mailbox 0208
+
+ La Jolla, CA 92093-0208
+
+ mgoldstein@ucsd.edu
+
+ # Introduction
+
+ Lightweight, inexpensive, durable, and infinitely variable, plastic defines the modern age. However, the very qualities that make plastic indispensable make it an environmental problem.
+
+ Today, plastic waste is found throughout the world's oceans, from the coast to the depths to the center of the open sea far from land. The best-known accumulation of trash is the "Great Pacific Garbage Patch," located in the North Pacific Central Gyre (NPCG), a vast swathe of ocean that stretches between the west coast of North America and the east coast of Asia.
+
+ Bordered by four major currents, the NPCG slowly rotates clockwise, pulling water in towards the center. Plastic debris from North America and Asia that does not sink or degrade becomes trapped in the NPCG. While larger pieces of plastic such as fishing nets and disposable drink bottles are found in the NPCG, most of the plastic debris is small. This is because as plastic items are exposed to ultraviolet light, they become brittle and are broken into smaller and smaller pieces by the movement of the ocean. This process is known as photodegradation.
+
+ The environmental impacts of small pieces of plastic debris are poorly understood. Larger pieces of debris, such as lost fishing gear, can entangle and drown oceanic animals such as seals and turtles. Seabirds and turtles eat plastic debris, and turtle death has been linked to intestinal blockage from plastic bags. However, effects on the organisms at the base of the food chain, such as phytoplankton, zooplankton, and small fishes, remain less studied but may be more significant due to the high proportion of plastic debris that is less than $3\mathrm{mm}$ in diameter. Small particles of plastic are readily ingested by filter-feeding and deposit-feeding invertebrates, and plastic resin pellets accumulate high levels of persistent organic pollutants such as PCBs and DDT. Plastic debris also serves as a "raft" for benthic invertebrates such as barnacles, and has already been responsible for at least one exotic species introduction in the Atlantic.
+
+ The "Great Pacific Garbage Patch" has captured the public imagination, leading to a great deal of coverage in the popular media. While detectable amounts of plastic debris were documented in the NPCG as early as 1972, the public awareness of this issue is due in large part to Capt. Charles Moore and the nonprofit organization he founded to combat marine debris, the Algalita Marine Research Foundation. However, relatively little is known about the extent and environmental effects of the plastic debris. Since a robust scientific understanding of the problem is necessary for seeking a solution, the "Great Pacific Garbage Patch" was the topic for this year's problem in the Interdisciplinary Contest in Modeling (ICM) $^{\text{®}}$ .
+
+ # Formulation and Intent of the Problem
+
+ The goal of this year's ICM problem was for student teams to model one aspect of the marine debris issue in the NPCG. Because the issue encompasses physical oceanography, ecology, and waste management, there were many potential issues to choose from. Teams were asked to focus on one critical aspect of the problem of oceanic marine debris, and to model the important behavior and phenomena. The end result was to be in the form of a 10-page report to the leader of an expedition setting off to study marine debris.
+
+ Suggested tasks included:
+
+ - Create a monitoring plan, with the option of including other oceanic gyres such as the North Atlantic Gyre and South Pacific Gyre.
+ - Characterize the extent, distribution, and density of debris.
+ - Describe the photodegradation of debris.
+ - Model the impact of banning polystyrene takeout containers.
+ - Pursue any relevant topic of the team's choosing that included modeling.
+
+ Models were evaluated based on the team's understanding of the nature of the problem, their use of realistic parameters, and their approach to describing either the existing problem or a proposed solution.
+
+ This year's ICM problem is based on ongoing research by a variety of organizations and scientists, particularly the Algalita Marine Research Foundation, the Sea Education Association, Project Kaisei, and Scripps Institution of Oceanography. Most work has focused on understanding the abundance and distribution of plastic particles in the NPCG. Future work will focus on the impacts and mitigation of marine debris.
+
+ # About the Author
+
+ ![](images/1feb99e73439b97121274c4a959ebe0e1a9d5ab5599ba59f0e3ca5e432cbd4a7.jpg)
+
+ Miriam Goldstein is a fourth-year Ph.D. student in Biological Oceanography at the Scripps Institution of Oceanography at the University of California San Diego, CA. For her dissertation work, she is studying the effect of plastic debris on zooplankton communities and invasive species transport. She was the principal investigator on the 20-day Scripps Environmental Accumulation of Plastic Expedition (SEAPLEX) that investigated plastic debris in the North Pacific Gyre in August 2009. Miriam holds an M.S. in Marine Biology from Scripps and a B.S. in Biology from Brown University. She is originally from Manchester, NH.
MCM/2010/C/2010-ICM-Com-J/2010-ICM-Com-J.md ADDED
@@ -0,0 +1,87 @@
+ # Judges' Commentary: The Outstanding Marine Pollution Papers
+
+ Rodney Sturdivant
+ Dept. of Mathematical Sciences
+ U.S. Military Academy
+ West Point, NY
+ Rodney.Sturdivant@usma.edu
+
+ # Introduction
+
+ The Interdisciplinary Contest in Modeling (ICM) $^{\text{®}}$ is an opportunity for teams of students to tackle challenging real-world problems that require a wide breadth of understanding in multiple academic subjects. This year's problem focused on the recently much publicized "Great Pacific Ocean Garbage Patch." Scientific expeditions into the North Pacific Central Gyre (a convergence zone where debris is accumulating) have led to a number of interesting scientific and technical problems. (Hereafter we refer to it simply as "the Gyre.")
+
+ Eight judges gathered to select the most successful entries of this challenging competition out of an impressive set of submissions.
+
+ # The Problem
+
+ The primary goal of this year's ICM was to model and analyze one issue associated with the debris problem. Specifically, teams were asked to address several elements with their model:
+
+ 1. Determine the problem's potential effect on marine ecology.
+ 2. Address government policies and practices that should be implemented to ameliorate the negative effects.
+ 3. Consider needs for future scientific research.
+ 4. Consider the economic aspects of the problem.
+
+ Several examples of issues that the teams might consider were also provided. For the first time, the ICM problem submissions were limited to a maximum length of 10 pages.
+
+ Overall, the judges were impressed both by the strength of many of the submissions and by the variety of issues they chose to model. In many cases, very different modeling approaches were used to address the same issue; as a result, this year's problem led to the greatest variety in submissions to the ICM in memory.
+
+ # Judges' Criteria
+
+ The framework used to evaluate submissions is described below, and it remained very similar to the criteria used in 2009. However, the 10-page limit for the submissions had an impact on the importance of the final criterion concerning communication. Teams that dramatically exceeded the limit were not considered for the Outstanding paper category.
+
+ # Executive Summary
+
+ It was important that a team succinctly and clearly explain the highlights of its submission. The executive summary needed to include the issue that the team chose to address and the modeling approach(es) used for analysis. Further, the summary needed to answer the most pressing questions posed in the problem statement, namely, the effect on the marine ecology, economic aspects of the issue, and how to ameliorate the problem. Truly outstanding papers were those that communicated their approach and recommendations in well-connected and concise prose.
+
+ # Domain Knowledge and Science
+
+ The problem this year was particularly challenging for students in terms of the science. To address the requirements effectively, teams needed first to establish an ecological frame of reference. Many teams were able to do this reasonably well; teams that excelled clearly did a great deal of research. Often, what distinguished the top teams was the ability not just to describe the garbage patch in a single section of the paper, but also to integrate this domain knowledge throughout the modeling process.
+
+ A second important facet of the problem was the ability to understand economic issues associated with the chosen problem. Many teams created reasonable models but unfortunately never tied them to the economic discussion.
+
+ # Modeling and Assumptions
+
+ For teams that chose to focus on describing and understanding the distribution of plastic in the Gyre, simulation was a popular approach to the problem. Differential equations were probably the most prevalent models used (in a wide variety of ways). Often, the models appeared appropriate but lacked any discussion of important assumptions. Additionally, some papers lacked a reasonable discussion of model development. Finally, the very best papers not only formulated the models well but also were able to use the models to produce meaningful results to address the problem and to make recommendations.
+
+ # Solution/Recommendation
+
+ Perhaps the most distinct difference between the best papers and others was the ability to utilize the team's models to develop or propose an actual solution to the problem. For example, a team might effectively model the distribution of plastic in the Gyre in one section of the paper. A completely independent section would then provide recommendations for remediating the plastic problem without ever making use of the model or the model results.
+
+ # Analysis/Reflection
+
+ Successful papers utilized the models developed in early sections of the paper to draw conclusions about the important issues in addressing problems with the garbage patch and addressed how assumptions made in the model could impact the solution and recommendation. In the best papers, trade-offs were discussed and, in truly exceptional cases, some sensitivity analysis was conducted to identify potential issues with the solutions presented.
+
+ # Communication
+
+ The challenges of the modeling in this problem and the page limit may have contributed to the difficulty that many teams had in clearly explaining their solutions. Papers that were clearly exposited distinguished themselves significantly, emphasizing that it is not only good science that is important, but also the presentation of the ideas. In some cases, teams spent all their space describing the modeling and never presented important results, conclusions, or recommendations. On the other hand, some teams never really explained their models, making it difficult to judge the validity of their results. Balancing the need to present enough work to fully answer the question, while keeping to the 10-page limit, was clearly a challenge in this year's contest.
+
+ # Discussion of the Outstanding Papers
+
+ The judges were most impressed by papers that offered unique and innovative ideas. Three of the four Outstanding papers this year took very novel approaches to the problem and issues. The fourth paper was representative of what many teams opted to do, but its modeling was more clearly articulated and more complete than that of others attempting the same approach.
+
+ - The Beijing Jiaotong University submission "A New Method for Pollution Abatement: Different Solutions to Different Types" was unique in looking at the pollution problem from a risk-analysis perspective. Using multiattribute decision theory, this team developed a model to rank the types of debris in the Gyre by their level of "risk." The modeling was complete and well explained. The team also then used the results of the model to propose a strategy for reducing the debris problem. The judges were a bit troubled by the conclusions of the paper, since considering types of debris as significantly different may not be realistic, but the results followed from the assumptions in the Moore et al. [2001] paper provided with the problem statement.
+ - The paper from Lawrence University, "Size-Classified Plastic Concentration in the Ocean," was perhaps the most clearly written and thorough among the Outstanding papers. The team developed a model to classify the plastics in the Gyre. Their models looked at many factors, physical and chemical, to determine size and concentration of the debris. In addition to the very thoroughly explained modeling efforts, the paper ends with sections discussing some of the limitations of the model and then some very specific conclusions and recommendations that stem directly from the model itself.
+ - The Hangzhou Dianzi University submission, "Quantitative Marine Debris Impacts and Evaluation of Ocean System," became known among the judges as the "monk seal" paper. This team took a unique approach to the problem by studying the impacts of ocean debris on a single species, the Hawaiian monk seal. A "grey model" and time-series approach was utilized to consider trends for the monk seal, and then an analytical hierarchy process (AHP) was used to try to quantify impacts of debris. The paper was not the strongest in terms of how well the team explained and presented their results, but the clever approach to the problem appealed to the judges.
+ - The final Outstanding paper, "Shedding Light on Marine Pollution" by the team from Carroll College, considered the specific issue of photodegradation of polyethylene floating in seawater. The team developed models for this process and very clearly articulated their approach and assumptions. This paper was among the best at presenting the modeling efforts and also noteworthy for the science (namely, chemistry) utilized in the process. The judges would have liked to see a bit more in the conclusions to explain the importance of the modeling results and ties to policy, but they were very impressed by the focus and clarity of this paper.
+
+ # Conclusion and Recommendations for Future Participants
+
+ The judges really enjoyed reading the submissions for this year's ICM contest. All teams deserve congratulations for the tremendous work done in a very short period of time and on a very difficult problem. The judging was, as a result, both pleasurable and challenging.
+
+ One issue worthy of mention that arises each year is that of proper scholarship in utilizing other sources in writing a paper. In researching the science for these complicated problems, teams naturally use information and ideas taken from a variety of resources. This is acceptable as long as those ideas are clearly documented in the paper. Copying the exact words from other papers should be minimized; but, if done, the words need to be placed in quotation marks, so that it is clear that it is not original to the authors.
+
+ Several key points from this year's contest judging emerged in determining the very best submissions. These are thoughts that may be useful to future ICM competitors.
+
+ - Many teams failed to select a single issue to model and analyze, instead trying to address all of the ideas for issues proposed in the problem statement. In some cases, these teams appeared to have done a remarkable amount of excellent modeling. However, it was simply impossible for them to present all this work in such a short report or to do justice to such a wide array of problems in such a short time period. The teams that were most successful clearly shaped the problem that they would address. When presented with a problem with a very large scope, narrowing the focus is critical.
+ - Judges were impressed with those who took a unique perspective on the problem. That could be either a different modeling approach (perhaps using a particular science, such as chemistry) or considering a different aspect of the problem (one example was a team that looked at how the plastic gets into the ocean). Original thought, as long as it was grounded in solid research, was cherished.
+ - Finally, a well-written and integrated report that reads well from start to finish is critical. The sections of the report should follow naturally and not appear as completely separate sections or ideas. The conclusions and recommendations, in particular, should be clearly tied to the modeling work presented.
+
+ # Reference
+
+ Moore, C.J., S.L. Moore, M.K. Leecaster, and S.B. Weisberg. 2001. A comparison of plastic and plankton in the North Pacific central gyre. *Marine Pollution Bulletin* 42: 1297-1300.
+
+ # About the Author
+
+ Rod Sturdivant is an Associate Professor at the U.S. Military Academy in West Point, NY. He earned a Ph.D. in biostatistics at the University of Massachusetts-Amherst and is currently program director for the probability and statistics course at West Point. He is also founder and director of the Center for Data Analysis and Statistics within the Dept. of Mathematical Sciences. His research interests are largely in applied statistics with an emphasis on hierarchical logistic regression models.
+
+ ![](images/13db1dac12ab59891c0af1435db1a7744041086ec4c6258aa685b04dd66b1f4b.jpg)
MCM/2010/C/6947/6947.md ADDED
@@ -0,0 +1,263 @@
+ # Team Control Number
+
+ 6947
+
+ Problem Chosen: C
+
+ # Summary
+
+ After mathematically analyzing and modeling the marine debris problem, we present our conclusions and recommendations in order to determine the government policies and practices that should be implemented to ameliorate the negative effects of marine debris.
+
+ We analyzed a large amount of data and determined the severity and global impact of floating plastic. Our model uses multi-attribute decision theory to select valid data, improving the accuracy of the outcomes.
+
+ We made a deeper study of the risk degrees of different types of floating plastic. This provides valuable and economical insight into pollution abatement. Knowing which plastics are most threatening to the marine ecosystem, we offer practical suggestions at three levels for government policy makers.
+
+ The model achieves several important objectives:
+
+ - Analysis of the marine debris problem: floating plastic poses a great danger to the marine ecosystem, and the damage is caused mainly by ingestion by marine organisms.
+ - The risk degrees of different types of plastic: among all kinds of floating plastic, the risk degrees of fragments and thin plastic films are the largest and second largest, respectively. In other words, fragments and thin plastic films are the two most harmful types of plastic.
+ - An effective and economical way to abate marine pollution: policies should be designed specifically according to the different risk degrees of the different types of plastic, with the strictest policies applied to the most harmful plastics.
+
+ Our model meets our expectations and is easily modified to support different marine areas. We believe that our model can be used in further research and that our recommendations will contribute greatly to marine protection.
+
+ # A new method for pollution abatement: different solutions to different types
+
+ # Introduction
+
+ The wastes dumped into the ocean by human beings are accumulating in high densities over large areas due to ocean currents. The Great Pacific Ocean Garbage Patch is just one of five such accumulations that may be caught in giant gyres scattered around the world's oceans (Hoshaw, 2009). The accumulation of these wastes, most of which are plastic, is now recognized as a serious problem for the marine ecosystem (Tanimura, 2007). Although the plastic pollution is quite evident and many researchers have estimated the amount of different types of floating marine debris, there are few studies of the risk degrees posed by different types of plastic to marine creatures.
+
+ To fill this gap, we model and analyze the risk degree posed by each type of plastic to marine organisms. As an application, we apply this model to government legislation. Determining the risk degree of each type of plastic can help policy makers formulate regulations, policies, and laws according to the risk degrees of the different materials. In this way, marine environmental protection policies will be more appropriate and effective, and unnecessary expense will be cut.
+
53
+ # Analysis about ocean debris problem
54
+
55
+ Plastic is extensively used in various industries for its lightweight, strong, durable and cheap advantages (Laist, 1987). And the study of Unnithan(1998) showed that the recovery of plastic often does not provide readily realizable profits, or options for reuse. So more and more abandoned plastic enter to the nature every year and cause serious pollution. Since the ocean is downhill and downstream from virtually everywhere humans live, and about half of the world's human population lives within 50 miles of the ocean, lightweight plastic trash, lacking significant recovery infrastructure, blows and runs off into the sea (Moore, 2008). This has caused seriously marine pollution and posed a danger on marine ecosystem.
56
+
57
+ Marine debris poses a danger to marine organisms through ingestion and entanglement (Moore 2001). Between these two ways, ingestion should be attached greater important to because the ingestion of small size plastic fractions affects large number and diversity of species when compared to the entanglement. (Monica F, 2009) Spear et al. (1995) provided solid evidence for a negative relationship between number of plastic particles ingested and physical condition in seabirds from the tropical Pacific.
58
+
59
+ Moreover, Mato et al. (2001) found that plastic resin pellets contain toxic chemicals such as PCBs and nonylphenol. They suggested that plastic resin pellets could be an exposure route for toxic chemicals, potentially affecting marine organisms.
62
+
63
+ From the published data on the abundance of floating plastic in the North Pacific Ocean (Rei Yamashita, 2006), we know that the abundance of floating plastic has been increasing enormously over the years.
64
+
65
+ ![](images/3cfee2d43e31e2704fd5bdb4787c8834b9df1a109069a0c6d55d1ebf437f401d.jpg)
66
+ Figure 1. Abundance of floating plastics in the North Pacific Ocean during 1975-2001
67
+
68
+ # Terminology and Conventions
69
+
70
+ This section defines the basic terms used in this paper.
71
+
72
+ - The risk degree of a type of plastic refers to the likelihood that it damages marine organisms. That is to say, the larger a type of plastic's risk degree is, the more likely it is to be ingested by marine organisms and to cause damage to the creature.
73
+ - The abundance of a type of plastic refers to the number of plastic debris per square kilometer.
74
+ - The size of a piece of plastic refers to the minimum mesh size the piece can pass through. We use the mesh size as the size of a piece because the actual size of the debris cannot be measured accurately.
75
+ - An attribute is used to measure the degree to which an objective is achieved. In this paper, the objective is the risk degree of the plastic. The two attributes are the abundance and the size of the plastic.
76
+ - The weight of an attribute is the relative importance of the attribute. The larger the weight is, the more decisive the attribute is for the objective.
77
+
78
+ Table 1. Variables and definitions
79
+
80
+ <table><tr><td>Variable</td><td>Definition</td><td>Variable</td><td>Definition</td></tr><tr><td>wi</td><td>The weight of attribute i</td><td>A</td><td>Total-abundance attribute</td></tr><tr><td>B</td><td>Mesh-size attribute (mm)</td><td>rij</td><td>Effect measurement</td></tr><tr><td>sij</td><td>The performance of alternative j determined by attribute i</td><td>uimax</td><td>Upper limit of effect measurement</td></tr><tr><td>uimin</td><td>Lower limit of effect measurement</td><td>D</td><td>Decision matrix</td></tr><tr><td>x(k)</td><td>Raw data of alternative k</td><td>r(k)</td><td>Normalized data of alternative k</td></tr></table>
81
+
82
+ # Assumptions
83
+
84
+ We make the following assumptions in this paper:
85
+
86
+ - Moore's data in 1999 is accurate and random enough to be a representative sample of the North Pacific Central Gyre.
87
+ - Marine debris poses danger to marine organisms only through ingestion. This ignores the danger brought by entanglement, because the ingestion of small plastic fragments affects a larger number and diversity of species than entanglement does (Monica F, 2009).
88
+ - The danger to a marine organism is determined only by the number of pieces it ingests: the more pieces of plastic an organism ingests, the greater the harm. This assumption is necessary because other data, such as the toxic chemical content of the plastic, could not be obtained in our work.
89
+ - The amount of plastic the creatures ingest is determined by only two factors: the size of the plastic and the abundance of the plastic. This ignores other factors that may contribute to ingestion, such as color and shape, which are difficult to model accurately and have little effect on ingestion.
90
+ - The eating habits of different marine organisms are the same. In other words, all marine creatures' food selection is the same and their ingestion is random.
91
+
92
+ # Modeling
93
+
94
+ The logic of the simulation process is detailed in Figure 2.
95
+
96
+ ![](images/ef9704374304f8b5bfb38d8842057811d28e186457bc2f6d22cd0599f6a8f0a2.jpg)
97
+ Figure 2. Simulation of the pollution abatement process
98
+
99
+ # Attribute weighting
100
+
101
+ To determine the risk degree of different types of plastic, both the abundance attribute and the size attribute should be taken into consideration. But the relative influence of these two attributes is not known in advance, so we use the Rank Reciprocal Weighting method to set their weights.
102
+
103
+ In the Rank Reciprocal Weighting method, the denominator of a weight is the sum of the reciprocals of all attribute ranks, and its numerator is the reciprocal of the attribute's own rank. The smaller an attribute's rank is, the more important the attribute is, the larger its rank reciprocal is, and the larger its weight is. The weight $w_{i}$ of the attribute ranked $i$ is
104
+
105
+ given by $w_{i} = \frac{1 / i}{\sum_{i = 1}^{n}1 / i}$ , where $n$ is the number of the attributes.
106
+
107
+ It is common sense that the abundance of the plastic is the main attribute that affects marine organisms, so the abundance attribute is ranked first with weight $w_{1}$ and the size attribute second with weight $w_{2}$ . According to the formula above, we have:
108
+
109
+ $$
110
+ w_{1} = \frac{1/1}{1/1 + 1/2} = \frac{2}{3}, \qquad w_{2} = \frac{1/2}{1/1 + 1/2} = \frac{1}{3}
111
+ $$
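The weighting scheme is easy to check numerically. Below is a minimal Python sketch (the function name is ours, not from the paper); exact fractions reproduce the 2/3 and 1/3 weights derived above.

```python
# Rank Reciprocal Weighting: w_i = (1/i) / sum_{k=1..n} (1/k) for rank i.
from fractions import Fraction

def rank_reciprocal_weights(n):
    """Return the weights of n attributes ordered from most to least important."""
    total = sum(Fraction(1, k) for k in range(1, n + 1))
    return [Fraction(1, i) / total for i in range(1, n + 1)]

w1, w2 = rank_reciprocal_weights(2)
print(w1, w2)  # 2/3 1/3
```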
112
+
113
+ # Valid data selection
114
+
115
+ We now calculate the risk degree contributions of both the abundance attribute and the size attribute to the marine ecosystem, according to the raw data obtained by Moore in 1999 (Moore et al., 2001). The data we need are as follows:
116
+
117
+ Table 2. Abundance (pieces/$km^{2}$) of plastic pieces and tar found in the North Pacific gyre
119
+
120
+ <table><tr><td>Mesh-size(mm)</td><td>Total-abundance</td></tr><tr><td>&gt;4.706</td><td>24764</td></tr><tr><td>4.759--2.800</td><td>19696</td></tr><tr><td>2.799--1.000</td><td>114288</td></tr><tr><td>0.999--0.710</td><td>85903</td></tr><tr><td>0.709--0.500</td><td>57928</td></tr><tr><td>0.499--0.355</td><td>31692</td></tr><tr><td>Total</td><td>334270</td></tr></table>
121
+
122
+ Each group of data can be regarded as an alternative, and through Grey System Theory we can select the valid data.
123
+
124
+ Table 3. Effect measurement decision matrix
125
+
126
+ <table><tr><td colspan="2">Alternative</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr><tr><td>factor A</td><td>Total-abundance</td><td>24764</td><td>19696</td><td>114288</td><td>85903</td><td>57928</td><td>31692</td></tr><tr><td>factor B</td><td>Mesh-size(mm)</td><td>4.706</td><td>2.800</td><td>1.000</td><td>0.710</td><td>0.500</td><td>0.355</td></tr></table>
127
+
128
+ When the size of the plastic is fixed, the larger the abundance of a type of plastic is, the more likely it is to be ingested. That is to say, the greater the abundance of the plastic, the larger the risk degree. So for the abundance attribute we use the upper limit effect measurement, which is applicable when the effect measure is expected to be large. Let the maximum of all alternative outcomes, $u_{i}^{\max} = \max_{j}s_{ij}$ , be the corresponding element in the standard row. The upper effect measurement associated with attribute $i$ and alternative $j$ is defined as $r_{ij} = \frac{s_{ij}}{u_i^{\max}}$ , here with $i = A$ ; $j = 1,2,\dots,6$ .
132
+ is defined as $r_{ij} = \frac{s_{ij}}{u_i^{\max}}, i = A, B; j = 1,2,3,4,5,6.$
133
+
134
+ Similarly, when the abundance of the plastic is fixed, the smaller the size of a type of plastic is, the more likely it is to be ingested. So for the size attribute we use the lower limit effect measurement, which is applicable when the effect measure is expected to be small. Let the minimum of all alternative outcomes, $u_{i}^{\min} = \min_{j} s_{ij}$ , be the corresponding element in the standard row. The lower effect measurement associated with attribute $i$ and alternative $j$ is defined as $r_{ij} = \frac{u_i^{\min}}{s_{ij}}$ , here with $i = B$ ; $j = 1, 2, \dots, 6$ .
137
+
138
+ A decision matrix is the matrix used to make a decision. The multi-attribute decision matrix is assembled from the effect measurements $r_{ij}$ . A decision matrix $D$ with $n$ attributes and $m$ alternatives is as follows:
139
+
140
+ $$
141
+ D = \left[ \begin{array}{c c c c} r _ {1 1} & r _ {1 2} & \dots & r _ {1 m} \\ r _ {2 1} & r _ {2 2} & \dots & r _ {2 m} \\ \vdots & \vdots & & \vdots \\ r _ {n 1} & r _ {n 2} & \dots & r _ {n m} \end{array} \right]
142
+ $$
143
+
144
+ Substituting the values of $r_{ij}$ into the decision matrix $D$ , we have:
145
+
146
+ $$
147
+ D = \left[ \begin{array}{llllll} r_{A1} & r_{A2} & r_{A3} & r_{A4} & r_{A5} & r_{A6} \\ r_{B1} & r_{B2} & r_{B3} & r_{B4} & r_{B5} & r_{B6} \end{array} \right] = \left[ \begin{array}{llllll} 0.217 & 0.172 & 1.000 & 0.752 & 0.507 & 0.277 \\ 0.075 & 0.127 & 0.355 & 0.500 & 0.710 & 1.000 \end{array} \right]
148
+ $$
149
+
150
+ We have already found that the weights of the abundance attribute and the size attribute are 2/3 and 1/3 respectively. According to the Grey Multi-attribute Decision method, we have:
151
+
152
+ $$
153
+ \left[ \begin{array}{ll} 2/3 & 1/3 \end{array} \right] \left[ \begin{array}{llllll} 0.217 & 0.172 & 1.000 & 0.752 & 0.507 & 0.277 \\ 0.075 & 0.127 & 0.355 & 0.500 & 0.710 & 1.000 \end{array} \right] = \left[ \begin{array}{ll} 0.170 & \text{Alternative 1} \\ 0.157 & \text{Alternative 2} \\ 0.785 & \text{Alternative 3} \\ 0.668 & \text{Alternative 4} \\ 0.575 & \text{Alternative 5} \\ 0.518 & \text{Alternative 6} \end{array} \right]
154
+ $$
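The two effect measurements and the weighted scores can be reproduced with a short Python sketch (data taken from Tables 2 and 3; variable names are ours):

```python
# Upper-limit measurement for abundance (larger is riskier) and
# lower-limit measurement for mesh size (smaller is riskier),
# combined with the rank reciprocal weights (2/3, 1/3).
abundance = [24764, 19696, 114288, 85903, 57928, 31692]   # factor A
mesh_size = [4.706, 2.800, 1.000, 0.710, 0.500, 0.355]    # factor B (mm)

r_A = [s / max(abundance) for s in abundance]   # r_Aj = s_Aj / u_A_max
r_B = [min(mesh_size) / s for s in mesh_size]   # r_Bj = u_B_min / s_Bj

w = (2 / 3, 1 / 3)
scores = [w[0] * a + w[1] * b for a, b in zip(r_A, r_B)]
print([round(x, 3) for x in scores])  # scores of alternatives 1..6
```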
155
+
156
+ The contributions of alternative 1 and alternative 2 to the outcome are little, so these two alternatives are not valid and should be rejected. We now analyze the risk degrees of the different types of plastic according to the other four alternatives.
157
+
158
+ # Risk degree determination
159
+
160
+ Let $x(k), k = 3,4,5,6$ be the weighted score of alternative $k$ , and normalize these four scores according to the formula $r(k) = \frac{x(k)}{\sum_{k=3}^{6} x(k)}$ . Then we have:
163
+
164
+ $$
165
+ \left[ \begin{array}{cccc} 0.309 & 0.263 & 0.225 & 0.203 \end{array} \right]
166
+ $$
167
+
168
+ From Moore's 1999 survey, we have the following data:
169
+
170
+ Table 4. Abundance by type and size of plastic pieces and tar in the North Pacific gyre
172
+
173
+ <table><tr><td>plastic alternative</td><td>Fragments</td><td>Styrofoam pieces</td><td>pellets</td><td>Polypropylene /monofilament</td><td>Thin plastic films</td><td>Miscella- neous</td></tr><tr><td>Alternative 3</td><td>61187</td><td>1593</td><td>12</td><td>9969</td><td>40622</td><td>905</td></tr><tr><td>Alternative 4</td><td>55780</td><td>591</td><td>0</td><td>2933</td><td>26273</td><td>326</td></tr><tr><td>Alternative 5</td><td>45196</td><td>576</td><td>12</td><td>1460</td><td>10572</td><td>121</td></tr><tr><td>Alternative 6</td><td>26888</td><td>338</td><td>0</td><td>845</td><td>3222</td><td>398</td></tr></table>
174
+
175
+ Normalizing each row of Table 4 by the total abundance of its alternative, we have:
176
+
177
+ Table 5. Normalized data of Table 4
178
+
179
+ <table><tr><td>plastic alternative</td><td>Fragments</td><td>Styrofoam pieces</td><td>pellets</td><td>Polypropylene /monofilament</td><td>Thin plastic films</td><td>Miscella- neous</td></tr><tr><td>Alternative 3</td><td>0.5354</td><td>0.0139</td><td>0.0001</td><td>0.0872</td><td>0.3554</td><td>0.0079</td></tr><tr><td>Alternative 4</td><td>0.6493</td><td>0.0069</td><td>0.0000</td><td>0.0341</td><td>0.3058</td><td>0.0038</td></tr><tr><td>Alternative 5</td><td>0.7802</td><td>0.0050</td><td>0.0002</td><td>0.0252</td><td>0.1825</td><td>0.0021</td></tr><tr><td>Alternative 6</td><td>0.8484</td><td>0.0107</td><td>0.0000</td><td>0.0267</td><td>0.1017</td><td>0.0126</td></tr></table>
180
+
181
+ Then the risk degrees of different types of plastic are as follows:
182
+
183
+ $$
184
+ \left[ \begin{array}{llll} 0.309 & 0.263 & 0.225 & 0.203 \end{array} \right] \left[ \begin{array}{llllll} 0.5354 & 0.0139 & 0.0001 & 0.0872 & 0.3554 & 0.0079 \\ 0.6493 & 0.0069 & 0.0000 & 0.0341 & 0.3058 & 0.0038 \\ 0.7802 & 0.0050 & 0.0002 & 0.0252 & 0.1825 & 0.0021 \\ 0.8484 & 0.0107 & 0.0000 & 0.0267 & 0.1017 & 0.0126 \end{array} \right] = \left[ \begin{array}{l} 0.6840 \\ 0.0105 \\ 0.0001 \\ 0.0470 \\ 0.2520 \\ 0.0065 \end{array} \right]
185
+ $$
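The whole computation in this section can be sketched in Python (data from the tables above; because the paper rounds intermediate values, the results agree with the printed vector only to about three decimal places):

```python
# Normalize the four valid alternative scores, row-normalize Table 4 by
# each alternative's total abundance, then combine to get risk degrees.
scores = [0.785, 0.668, 0.575, 0.518]            # alternatives 3..6
r = [x / sum(scores) for x in scores]            # ~ [0.309, 0.263, 0.225, 0.203]

table4 = [
    [61187, 1593, 12, 9969, 40622, 905],         # alternative 3
    [55780,  591,  0, 2933, 26273, 326],         # alternative 4
    [45196,  576, 12, 1460, 10572, 121],         # alternative 5
    [26888,  338,  0,  845,  3222, 398],         # alternative 6
]
norm = [[v / sum(row) for v in row] for row in table4]

# risk degree of plastic type j = sum_k r[k] * norm[k][j]
risk = [sum(r[k] * norm[k][j] for k in range(4)) for j in range(6)]
print([round(x, 4) for x in risk])  # fragments and thin films dominate
```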
186
+
187
+ According to these outcomes, fragments and thin plastic films are the two most harmful types of plastic: $93.6\%$ of the damage is caused by these two types, and the risk degrees of the others are very small in comparison.
188
+
189
+ ![](images/27558c532f8006855a0a172a145f2137af0accfce43536924e87fba8a0471722.jpg)
190
+ Figure 3. Risk degrees of different types of plastic
191
+
192
+ # Strength of model
193
+
194
+ Our model meets all of our original expectations through the use of the Rank Reciprocal Weighting method and the Grey Multi-attribute Decision method. We have determined the risk degree of different types of plastic. Knowing the risk degree of each kind of plastic in the ocean, policy makers can formulate specific, effective and economical policies and regulations.
195
+
196
+ The model provides a framework for marine plastic pollution monitoring which can be applied to various periods and various sea areas. By extending this model to other sea areas, different policies can be made depending on the pollution in each area, protecting the ocean more effectively and precisely.
197
+
198
+ Finally, a great strength of our model lies in the accurate selection of valid data. After calculation, analysis and selection, we substitute the valid data into the model to obtain a more accurate outcome. Besides, our data show that the plastic pieces we chose were 0.355-2.799 mm in size, a size of particle that can be ingested by most marine organisms (Bourne and Imber, 1982; Azzarello and Van Vleet, 1987; Moore et al., 2001). So the soundness of our data selection is confirmed.
199
+
200
+ # Weakness of model
201
+
202
+ In fact, some other factors that may contribute to the risk degree, such as the toxicity and the shape of the plastic, are not taken into consideration. This may cause a deviation in our model.
203
+
204
+ In reality, the feeding habits of different marine organisms are different, but this is not reflected in our model: we assume that the behavior of all marine creatures is the same and that their ingestion is random.
205
+
206
+ Our model aims to find the risk degrees of different types of plastic; it cannot be used to measure the overall marine pollution level.
207
+
208
+ # Discussion
209
+
210
+ According to our model, fragments and thin plastic films are the two most harmful types of plastic. Fragments account for $68.4\%$ of the total danger caused by floating plastic, and thin plastic films account for $25.2\%$ ; the risk degrees of the others are very small in comparison. So we divide the floating plastics into 3 grades by risk degree: fragments belong to "the high risk plastic" (HRP), thin plastic films belong to "the middle risk plastic" (MRP), and Styrofoam pieces, pellets and polypropylene/monofilament belong to "the low risk plastic" (LRP).
211
+
212
+ Allowing for the notable differences among these three grades, we suggest that policy makers apply different policies to different plastics in order to abate marine pollution more effectively and economically.
213
+
214
+ The method of grading plastic compound products is as follows:
215
+
216
+ Condition 1: If the product is made up only of HRP and MRP
217
+
218
+ Let the proportion and risk degree of HRP be $p$ and $\omega_{1}$ respectively, and the proportion and risk degree of MRP be $q$ and $\omega_{2}$ respectively. If $p / q > \omega_{2} / \omega_{1}$ , the product should be classified as HRP. Otherwise, it should be classified as MRP.
219
+
220
+ Condition 2: If the product contains LRP
221
+
222
+ Because the risk degree of LRP is very low, the product should be classified as LRP only when the proportion of LRP is higher than $90\%$ . Otherwise, the product should be classified following Condition 1. The specific solutions for the three grades are as follows:
223
+
224
+ Table 6. The specific solutions for the three grades
225
+
226
+ <table><tr><td>Policy Solution</td><td>The policy for HRP</td><td>The policy for MRP</td><td>The policy for LRP</td></tr><tr><td>grades of tax rates</td><td>highest</td><td>high</td><td>low</td></tr><tr><td>Rate of reuse</td><td>&gt;=85%</td><td>&gt;=75%</td><td>&gt;=60%</td></tr><tr><td>penalty for littering plastic</td><td>Fines up to $50000 per day</td><td>Fines up to $40000 per day</td><td>Fines up to $30000 per day</td></tr><tr><td>research funding</td><td>highest</td><td>high</td><td>low</td></tr></table>
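The grading rules in Conditions 1 and 2 can be sketched as a small Python function (the function name and the example proportions are ours; the defaults for w1 and w2 are the risk degrees of fragments and thin plastic films found above):

```python
# Condition 2: a product is LRP only if its LRP share exceeds 90%.
# Condition 1: otherwise compare the weighted contributions p*w1 vs q*w2,
# i.e. classify as HRP when p/q > w2/w1.
def grade_product(p, q, lrp_share=0.0, w1=0.6840, w2=0.2520):
    """p, q: proportions of HRP and MRP material in the product."""
    if lrp_share > 0.90:
        return "LRP"
    if q == 0 or p / q > w2 / w1:
        return "HRP"
    return "MRP"

print(grade_product(p=0.5, q=0.5))                    # HRP
print(grade_product(p=0.1, q=0.9))                    # MRP
print(grade_product(p=0.02, q=0.03, lrp_share=0.95))  # LRP
```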
228
+
229
+ Note: The penalty for littering plastic is set with reference to the environmental laws of the United States.
230
+
231
+ Compared with other discarded materials such as lignocellulosic paper, plastics are chemically resistant and particularly persistent in the environment (Andrady, 2003). The cost of removing the existing floating plastic is prohibitive, so to prevent the accumulation of plastic debris in the North Pacific Ocean, the most effective way is to cut down the source of the waste.
232
+
233
+ # Recommendations
234
+
235
+ Due to the extensive use of plastic in industry, simply forbidding the production of plastic in order to abate the pollution is unrealistic. To improve the marine environment, we recommend:
236
+
237
+ - Reduce the production of plastic products which will decompose into fragments or thin plastic films, such as hard plastic and plastic bags, as far as the basic demand of people can be met.
238
+ - Modify the design of products or package to reduce the use of plastic.
239
+ - Make plastic more durable so that it will be reused to reduce the total demand for plastic.
240
+ - Make policies banning all promotion of plastic products.
241
+ - Substitute away the toxic constituents in plastic products.
242
+ - Moderately increase the tax for purchasing plastic products.
243
+
244
+ - For areas that are seriously polluted, clean up the debris with an efficient and economical method. For example, work at night to reduce the damage to the local ecosystem, because the plankton abundance during the day is higher than at night.
245
+ - In future research on marine plastic pollution, much more importance should be attached to the abundance of fragments and thin plastic films. Changes in these should be monitored and used to adjust their grades, and the policies can be adjusted according to the risk grades.
246
+ - Increase funding for research on plastic degradation.
247
+ - Improve the reuse of the plastic products.
248
+ - Establish a comprehensive and accurate marine pollution database for further study.
249
+
250
+ # References
251
+
252
+ Andrady, A. L., 2003. In Plastics and the environment (ed. Andrady A. L.). West Sussex, England: John Wiley and Sons.
253
+ Azzarello, M.Y., Van Vleet, E.S., 1987. Marine birds and plastic pellets. Marine Ecology Progress Series 37 (2-3), 295-303.
254
+ Bourne, W.R.P., Imber, M.J., 1982. Plastic pellets collected by a prion on Gough Island, Central South Atlantic Ocean. Marine Pollution Bulletin 13 (1), 20-21.
255
+ Costa M.F., 2009. On the importance of size of plastic fragments and pellets on the strandline: a snapshot of a Brazilian beach. Environ Monit Assess. doi:10.1007/s10661-009-1113-4.
256
+ Hoshaw, L., 2009. Afloat in the Ocean, Expanding Islands of Trash. Retrieved February 21, 2010, from: http://www.nytimes.com/2009/11/10/science/10patch.html?em
257
+ Laist, D.W., 1987. Overview of the biological effects of lost and discarded plastic debris in the marine environment. Marine Pollution Bulletin 18, 319-326.
258
+ Mato, Y., Isobe, T., Takada, H., Kanehiro, H., Ohtake, C., Kaminuma, T., 2001. Plastic resin pellets as a transport medium for toxic chemicals in the marine environment. Environmental Science and Technology 35, 318-324.
259
+ Moore, C.J., Moore, S.L., Leecaster, M.K., Weisberg, S.B., 2001. A comparison of plastic and plankton in the North Pacific central gyre. Marine Pollution Bulletin 42 (12), 1297-1300.
260
+ Moore, C. J., 2008. Synthetic polymers in the marine environment: A rapidly increasing, long-term threat. Environmental Research 108, 131-139
261
+ Spear, L.B., Ainley, D.G., Ribic, C.A., 1995. Incidence of plastic in seabirds from the Tropical Pacific, 1984-91: relation with distribution of species, sex, age, season, year and body weight. Marine Environmental Research 40, 123-146.
262
+ Tanimura, A., Yamashita, R., 2007. Floating plastic in the Kuroshio Current area, western North Pacific Ocean. Marine Pollution Bulletin 54, 464-488.
263
+ Unnithan, S., 1998. Through thick, not thin, say ragpickers. Indian Express 23 November.
MCM/2010/C/7812/7812.md ADDED
@@ -0,0 +1,370 @@
1
+ # Size-Classified Plastic Concentration in the Ocean
2
+
3
+ # Problem and Introduction
4
+
5
+ Ocean debris (in the Pacific Ocean, in particular) has proved to be a new problem for the scientific community. Yet, it is quite similar to common marine litter problems in that it is an environmental, economic and health problem (UNEP, 2009). Moreover, ocean debris often endangers marine organisms, ranging from sea birds to sea mammals and filter feeders, through ingestion and entanglement (Day, 1980), and thus upsets the balance of the ocean ecosystem.
6
+
7
+ Previous studies have addressed the impacts of debris on birds, fish and filter-feeding organisms. However, few studies have recognized that while affecting the ocean ecosystem, ocean debris itself is also changing dynamically in terms of mass concentration, due to both human inputs and natural forces (mainly physical abrasion and chemical photolysis).
8
+
9
+ Furthermore, although many studies have acknowledged the fact that marine organisms usually fail to distinguish between debris and their food (C. J. Moore, 2001), no studies have specifically considered the effects of different sizes of debris on different marine organisms. The fact that marine organisms eat various sizes of food makes it important to classify the sizes of debris, because marine organisms' abilities to distinguish between debris and food depend on the size of the debris.
10
+
11
+ Therefore, this model studies the dynamic, size-classified system of ocean plastics to better understand how human behaviors over time affect the ocean ecosystem.
12
+
13
+ For the purpose of this model, ocean debris is simply referred to as "plastic", since plastics are the major component of debris (UNEP, 2009). Also, this model uses the North Pacific Central Gyre area as a sample for further discussion.
14
+
15
+ # Classified Plastic Concentration Model
16
+
17
+ # Plastic Input
18
+
19
+ Ocean plastics are mainly results of human behaviors (UNEP, 2009). Two categories are often used to classify sources of ocean plastics depending on their origins: land-based sources (LB) and sea-based sources (SB). Land-based sources are primarily composed of consumer and industrial plastic waste, which is carried through the inland water system to the ocean.
22
+
23
+ The types of plastic also vary across sources. Based on the Leontief model, the source-coefficient matrix for this model is as follows:
24
+
25
+ <table><tr><td rowspan="2">Sources</td><td colspan="5">Plastic Wastes Inputs</td></tr><tr><td>Type1</td><td>Type2</td><td>Type3</td><td>...</td><td>TypeN</td></tr><tr><td>C</td><td>a11</td><td>a12</td><td>a13</td><td>...</td><td>a1n</td></tr><tr><td>I</td><td>a21</td><td>a22</td><td>a23</td><td>...</td><td>a2n</td></tr><tr><td>S</td><td>a31</td><td>a32</td><td>a33</td><td>...</td><td>a3n</td></tr><tr><td>F</td><td>a41</td><td>a42</td><td>a43</td><td>...</td><td>a4n</td></tr></table>
26
+
27
+ Table 1. Source-coefficient matrix. C: plastic source from consumers; I: plastic source from industries; S: plastic source from shipping; F: plastic source from fishing.
28
+
29
+ Then, the following linear model is made on the mass of land-based plastic wastes at time t: $\mathrm{P_{LBt}} = \alpha_1 + \alpha_2\mathrm{P_{ct}} + \alpha_3\mathrm{P_{it}} + \mu_1\dots$ (1), where, for a given time t, $\mathrm{P_{LBt}}$ stands for the mass of land-based plastic wastes; $\mathrm{P_{ct}}$ and $\mathrm{P_{it}}$ stand for the mass of consumer plastic products and industry plastic products respectively; $\alpha_{1}$ is the intercept and $\alpha_{2},\alpha_{3}$ the coefficients; $\mu_{1}$ is an error item that represents the total influence of all other related factors.
30
+
31
+ Shipping and fishing are the major sea-based sources of plastic wastes. Based on this, the sea-based plastic wastes model is expressed as: $\mathrm{P_{SBt}} = \beta_1 + \beta_2\mathrm{P_{st}} + \beta_3\mathrm{P_{ft}} + \mu_2$ ... (2), where, similar to the land-based plastic wastes model, $\mathrm{P_{SBt}}$ stands for the mass of sea-based plastic wastes; $\mathrm{P_{st}}$ and $\mathrm{P_{ft}}$ stand for the mass of plastic products consumed by shipping and fishing respectively; $\beta_{1}$ is the intercept and $\beta_{2},\beta_{3}$ the coefficients; $\mu_{2}$ is the same kind of error term as $\mu_{1}$ .
32
+
33
+ Next, this model names the ocean area under study the Area of Interest (AOI). The inputs of plastic wastes could simply be the combination of land-based and sea-based sources. However, the problem of ocean plastics involves another issue related to ocean geography (ocean currents, in particular) that causes dispersion and deposition of plastics. Here is our approach to this problem.
34
+
35
+ Any ocean area other than our area of interest AOI is defined as the "Outside Area (OA)". Due to the effect of ocean currents, there must already be debris exchange (both in and out) between the AOI and the OA. For the purpose of our model, however, we assume that such exchange is balanced; in other words, the input from OA to AOI and the output from AOI to OA are equal in total mass.
38
+
39
+ Also, regarding the dispersion and deposition of plastics that occur when land-based and sea-based sources enter the AOI, this model incorporates a new factor $\lambda$ , named the "Ocean Geography Factor (OGF)". $\lambda$ represents the accumulated proportion of the mass of plastic waste sources that is not transported into the AOI due to various ocean geographical factors, including ocean currents, tidal cycles, regional-scale topography (including sea-bed topography) and wind (UNEP, 2009).
40
+
41
+ Therefore, this model only considers the effect of fresh land-based plus sea-based plastic waste sources as inputs to our AOI. See Figure 1 below.
42
+
43
+ ![](images/d9f890d972cfbd05670b9d00d112474560d021ac82dec57d93f24067afea4b9c.jpg)
44
+ Figure 1. Plastic inputs.
45
+
46
+ The plastic input model is then, $\mathrm{I} = \mathrm{P_{LBt}}(1 - \lambda_1) + \mathrm{P_{SBt}}(1 - \lambda_2) \dots (3)$ , where I means inputs of ocean plastic debris to AOI; $\mathrm{P_{LBt}}$ stands for the mass of land-based plastic wastes; $\mathrm{P_{SBt}}$ stands for the mass of sea-based plastic wastes; $1 - \lambda_1$ stands for the proportion of $\mathrm{P_{LBt}}$ unaffected by ocean geography, and $1 - \lambda_2$ the proportion of $\mathrm{P_{SBt}}$ unaffected by ocean geography.
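Equation (3) can be sketched as a one-line Python function (the function name and all numeric values below are illustrative placeholders, not data from the paper):

```python
# I = P_LB*(1 - lam1) + P_SB*(1 - lam2): only the fractions of land- and
# sea-based waste mass unaffected by ocean geography reach the AOI.
def plastic_input(P_LB, P_SB, lam1, lam2):
    return P_LB * (1 - lam1) + P_SB * (1 - lam2)

# e.g. 70% of land-based and 40% of sea-based mass diverted elsewhere
print(plastic_input(P_LB=100.0, P_SB=40.0, lam1=0.7, lam2=0.4))
```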
47
+
48
+ # Individual Plastic Object
49
+
50
+ The physical abrasion and chemical photolysis of plastic objects are important factors in the discussion of the concentration of plastic in the ocean. The transformation of an individual plastic object in the ocean is discussed in this part.
51
+
52
+ Assume the size of an arbitrary plastic object in the ocean at time $t$ is $S(t)$ , with initial size $S_0$ at time $t = 0$ . Per unit time (say 1 year), the size of the object decreases by a proportion $r_1$ of its current size because of physical abrasion and by a proportion $r_2$ because of chemical photolysis.
55
+
56
+ According to the assumptions stated above, the following relation can be made: $\frac{dS(t)}{dt} = -S(t)\cdot (r_1 + r_2)\dots$ (4), with the initial condition $S(0) = S_{0}$ . Equation (4) is solved to get the expression:
57
+
58
+ $$
59
+ S(t) = S_{0} e^{-\left(r_{1} + r_{2}\right) t} \dots (5).
60
+ $$
61
+
62
+ To interpret the chemical decay rate $r_2$ , set the physical abrasion rate $r_1$ to 0. Equation (5) then becomes $S(t) = S_0 e^{-r_2 t}$ . Calculate the half-life of the plastic object by taking $S(t) = \frac{1}{2} S_0$ at $t = t_{1/2}$ ; then $t_{1/2}$ is expressed as $t_{1/2} = \frac{\ln 2}{r_2}$ . This is exactly the expression for the half-life of photolysis, with $r_2$ in the place of the rate constant $K_p$ of photolysis reactions. Therefore, $r_2$ can be interpreted as the reaction rate of photolysis on plastics. On the other hand, there is no direct relationship between $r_1$ and a physical quantity.
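A quick numerical check of Equation (5) and the half-life expression (the rate value below is illustrative, not measured):

```python
import math

def size(t, S0, r1, r2):
    """Equation (5): S(t) = S0 * exp(-(r1 + r2) * t)."""
    return S0 * math.exp(-(r1 + r2) * t)

r2 = 0.2                    # assumed photolysis rate constant (per year)
t_half = math.log(2) / r2   # half-life with no physical abrasion (r1 = 0)
print(t_half)
print(size(t_half, S0=1.0, r1=0.0, r2=r2))  # half of the initial size
```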
63
+
64
+ # The Classification of the Sizes of Plastic Objects
65
+
66
+ One of the main purposes of studying the concentration of plastic objects in the ocean is to estimate their influence on marine organisms. Therefore, the sizes of the plastic objects are of essential interest. The objects have the most significant influence on a particular kind of animal when their sizes are in the range of that animal's food size.
67
+
68
+ The sizes of plastic objects are divided into four ranges or categories as listed below:
69
+
70
+ <table><tr><td>Level 1</td><td>n^3·s - n^4·s</td><td>bottles, bags, fishing nets, etc.</td></tr><tr><td>Level 2</td><td>n^2·s - n^3·s</td><td>cigarette filters, fragments, etc.</td></tr><tr><td>Level 3</td><td>n·s - n^2·s</td><td>visible dust</td></tr><tr><td>Level 4</td><td>s - n·s</td><td>polymer molecules</td></tr></table>
71
+
72
+ Table 2. Category of the Sizes of Plastics.
73
+
74
+ Because the decay of the objects is exponential according to Equation (5), the ranges are made exponentially equivalent. In Table 2, $s$ is the smallest size of a particle that can be called plastic; $n$ is a common constant factor. The magnitudes of $s$ and $n$ are to be determined later.
75
+
76
+ With the categorization of plastic objects in Table 2, the time for an object to decay from the maximum of level $i$ to the maximum of level $i+1$ and the time for an object to decay from the minimum of level $i$ to the minimum of level $i+1$ are the same. This implies that, if at time $t_0$ there are $x$ particles in level $i$ , there is a time interval $m$ after which all the $x$ particles are in level $i+1$ . The time interval $m$ is calculated by Equation (6):
79
+
80
+ $$
81
+ ns = n^{2} s \cdot e^{-(r_{1} + r_{2}) m} \dots (6), \quad \text{which gives: } m = \frac{\ln n}{r_{1} + r_{2}} \dots (7).
82
+ $$
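Equation (7) can be verified numerically: an object at the upper bound of one size level reaches the upper bound of the next level after exactly m time units (the values of n, s and the rates below are illustrative):

```python
import math

n, s = 10.0, 1e-9          # common factor and smallest plastic size (assumed)
r1, r2 = 0.1, 0.2          # abrasion and photolysis rates (assumed)
m = math.log(n) / (r1 + r2)          # Equation (7)

S0 = n**2 * s                        # e.g. the upper bound of Level 3
S_m = S0 * math.exp(-(r1 + r2) * m)  # Equation (5) after time m
print(S_m, n * s)                    # both equal the upper bound of Level 4
```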
83
+
84
+ Assume the size (and thus the mass and concentration) of a given amount of plastic objects decays to lower size levels at a constant rate. Then, for any infinitesimal time interval $dt$ , the amount of particles transferring from level $i$ to level $i+1$ can be expressed as $\frac{C_i}{m} dt$ (a loss $-\frac{C_i}{m} dt$ for level $i$ ), where $C_i$ is the concentration (mass/area) of all objects in size level $i$ .
85
+
86
+ # Derivation of the Concentration vs. Time Relation
87
+
88
+ Four differential equations are derived for the concentration of objects in the four size levels as follows:
89
+
90
+ $$
91
+ \frac {d C _ {1}}{d t} = I _ {1} - \frac {C _ {1}}{m} - \left(r _ {1} + r _ {2}\right) \cdot C _ {1}, C _ {1} (0) = C _ {1 0} \dots (8);
92
+ $$
93
+
94
+ $$
95
+ \frac {d C _ {2}}{d t} = I _ {2} + \frac {C _ {1}}{m} - \frac {C _ {2}}{m} - \left(r _ {1} + r _ {2}\right) \cdot C _ {2}, C _ {2} (0) = C _ {2 0} \dots (9);
96
+ $$
97
+
98
+ $$
99
+ \frac {d C _ {3}}{d t} = I _ {3} + \frac {C _ {2}}{m} + r _ {1} \cdot \left(C _ {1} + C _ {2}\right) - \frac {C _ {3}}{m} - r _ {2} \cdot C _ {3}, C _ {3} (0) = C _ {3 0} \dots (1 0);
100
+ $$
101
+
102
+ $$
103
+ \frac {d C _ {4}}{d t} = \frac {C _ {3}}{m} + r _ {2} \left(C _ {1} + C _ {2} + C _ {3}\right) - \frac {C _ {4}}{m ^ {\prime}}, C _ {4} (0) = C _ {4 0} \dots (1 1).
104
+ $$
105
+
106
+ In Equations (8), (9), (10) and (11), $C_i$ represents the mass concentration of plastic objects of size level $i$ in the Area of Interest (AOI). $C_i(0)$ gives the initial conditions of the differential equations. $r_1$ and $r_2$ are the decay rates of the objects and $m$ is the time for all objects in one size level to decay to the lower size level. $I_i$ is a new variable representing the rate at which human activities add plastic of level $i$ (as mass concentration per unit time) to the area of interest. The expressions for $I$ were derived in the plastic inputs part above.
107
+
108
+ The physical meaning of Equation (8) is: the rate of change of the Level 1 concentration is the sum of the rate of input, the rate of decay to Level 2 and the rate of loss by physical and chemical reactions. To be clearer, the assumption is that the main part of an initial object decays to objects of the next level, that the particle loss by physical reaction contributes to the mass concentration of Level 3, and that the particle loss by chemical reaction contributes to the mass concentration of Level 4.
109
+
110
+ Similarly, the rate of change of the Level 2 concentration is the sum of the rate of input, the rate of reception from Level 1, the rate of decay to Level 3 and the rate of physical and chemical particle loss.
113
+
114
+ The rate of change of the Level 3 concentration is the sum of the rate of input, the rate of reception from Level 2, the rate of contribution from the physical abrasion of Levels 1 and 2, the rate of decay to Level 4 and the rate of chemical loss. In this level, the physical abrasion may or may not function in the decay of objects. For simplicity, the decay time from Level 3 to Level 4 is approximated to still be $m$.
115
+
116
+ The rate of change of the Level 4 concentration is the sum of the rate of reception from Level 3, the rate of contribution from the chemical loss of Level 1, 2 and 3 and the rate of decay to non-polymer or harmless molecules. Here, another coefficient $m'$ is involved to represent the average time for a particle of size in Level 4 to decay to non-polymer or harmless molecules. Also, assume there are no incoming particles at the molecule size level directly from human activities.
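Equations (8)–(11) form a linear ODE system that can be integrated numerically; below is a minimal forward-Euler sketch (not the authors' code). The value of $m'$ is illustrative, since the paper leaves it unspecified:

```python
def simulate(I, r1, r2, m, m_prime, C0, T, dt=0.01):
    """Forward-Euler integration of Eqs. (8)-(11).
    I = (I1, I2, I3): input rates (kg/km^2/yr); C0: initial
    concentrations of the four levels; returns the concentrations
    after T years."""
    C1, C2, C3, C4 = C0
    I1, I2, I3 = I
    for _ in range(int(T / dt)):
        d1 = I1 - C1/m - (r1 + r2)*C1                   # Eq. (8)
        d2 = I2 + C1/m - C2/m - (r1 + r2)*C2            # Eq. (9)
        d3 = I3 + C2/m + r1*(C1 + C2) - C3/m - r2*C3    # Eq. (10)
        d4 = C3/m + r2*(C1 + C2 + C3) - C4/m_prime      # Eq. (11)
        C1 += d1*dt; C2 += d2*dt; C3 += d3*dt; C4 += d4*dt
    return C1, C2, C3, C4

# Parameters from the Discussion section: r1 = r2 = 5%/yr, m = 138 yr,
# C_i0 = 10 kg/km^2, Level-1-only input; m' = 50 yr is an assumption.
C = simulate(I=(0.1, 0.0, 0.0), r1=0.05, r2=0.05, m=138.0,
             m_prime=50.0, C0=(10.0,)*4, T=100.0)
```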
117
+
118
+ Adding Equations (8), (9), (10) and (11) gives $\frac{dC_1}{dt} +\frac{dC_2}{dt} +\frac{dC_3}{dt} +\frac{dC_4}{dt} = I_1 + I_2 + I_3 - \frac{C_4}{m'}\dots$ (12); all inter-level transfer terms cancel.
119
+
120
+ Regarding the four size levels as a system, the rate of change of the whole system's concentration is just the rate of total input minus the rate at which plastic polymers decay to harmless particles.
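The cancellation behind Equation (12) can be verified with a one-step numerical check; the concentrations below are arbitrary, while r1, r2, m come from the Discussion section and m' is an assumed value:

```python
# Summing the right-hand sides of Eqs. (8)-(11) must leave only the
# total input minus the terminal loss C4/m'.
I1, I2, I3 = 0.3, 0.2, 0.1
r1, r2, m, m_prime = 0.05, 0.05, 138.0, 50.0
C1, C2, C3, C4 = 7.0, 5.0, 3.0, 1.0

d1 = I1 - C1/m - (r1 + r2)*C1
d2 = I2 + C1/m - C2/m - (r1 + r2)*C2
d3 = I3 + C2/m + r1*(C1 + C2) - C3/m - r2*C3
d4 = C3/m + r2*(C1 + C2 + C3) - C4/m_prime

balance = d1 + d2 + d3 + d4
expected = I1 + I2 + I3 - C4/m_prime   # Eq. (12)
```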
121
+
122
+ # Mass Concentration as a Function of Time
123
+
124
+ The solutions for Equation (8), (9), (10) and (11) are listed below in order. They are expressions of the mass concentration of objects of certain size levels in the area of interest as a function of time.
125
+
126
+ $$
+ C_1 = I_1 t + C_{10} \cdot e^{\left[-\frac{1}{m} - (r_1 + r_2)\right] t} \dots (13);
+ $$
+
+ $$
+ C_2 = \left(I_2 + \frac{C_1}{m}\right) t + C_{20} \cdot e^{\left[-\frac{1}{m} - (r_1 + r_2)\right] t} \dots (14);
+ $$
+
+ $$
+ C_3 = \left[I_3 + \frac{C_2}{m} + r_1 \left(C_1 + C_2\right)\right] t + C_{30} \cdot e^{\left(-\frac{1}{m} - r_2\right) t} \dots (15);
+ $$
+
+ $$
+ C_4 = \left[\frac{C_3}{m} + r_2 \left(C_1 + C_2 + C_3\right)\right] t + C_{40} \cdot e^{\left(-\frac{1}{m'}\right) t} \dots (16).
+ $$
141
+
142
+ In these expressions, the only variable is time $t$. The constants $I_1, I_2, I_3, m, m', r_1, r_2$ depend on the environmental and industrial situation, which makes the equations flexible enough to handle particular situations on a case-by-case basis.
143
+
144
+ Each of the four expressions is the sum of a linearly increasing term and an exponentially decaying term.
145
+
146
+ The behavior of the concentrations over time depends on which of the two terms outweighs the other. Roughly speaking, the larger the plastic input I, the more the first term dominates; the larger the decay rates $r_1$ and $r_2$ (which result in a larger $1/m$), the more the second term dominates. In the short term, the behavior is linear-increase dominated, exponential-decay dominated, or a combination of both; this behavior is of central interest for predicting the state of the ecosystem. In the long term, the concentrations show a linearly increasing pattern whatever constants are chosen, which limits the model's ability to describe the long-term run of the system of interest. This limitation is discussed in the "Limitations" section.
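The two regimes can be seen directly from Equation (13); a sketch with the Discussion-section parameters, comparing a small and a large input rate:

```python
import math

def C1_closed_form(t, I1, C10, m, r1, r2):
    """Eq. (13): linear input term plus exponentially decaying term."""
    return I1*t + C10*math.exp((-1.0/m - (r1 + r2))*t)

m, r1, r2, C10 = 138.0, 0.05, 0.05, 10.0

# Small input: the decay term dominates over the first decades.
low = [C1_closed_form(t, 0.001, C10, m, r1, r2) for t in (0, 30, 60)]
# Large input: the linear term takes over almost immediately.
high = [C1_closed_form(t, 1.0, C10, m, r1, r2) for t in (0, 30, 60)]
```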
147
+
148
+ # Discussion and Results
149
+
150
+ # Ranges and Initial Values of Variables
151
+
152
+ # Size Levels of Plastic Object
153
+
154
+ The sizes of plastic objects are classified in order to study their influence on certain groups of organisms. Therefore, the numerical ranges of the four size levels, i.e., the constants n and s, are determined by the diet of organisms feeding on marine sources.
155
+
156
+ Filter feeders feed on plankton, whose length ranges from $2*10^{-7}\mathrm{m}$ to $1*10^{-3}\mathrm{m}$. Assign to this level the size range $5*10^{-6}\mathrm{m} - 5*10^{-4}\mathrm{m}$, which covers the sizes of the majority of plankton. This gives $\mathrm{ns} = 5*10^{-6}\mathrm{m}$ and $\mathrm{n}^2\mathrm{s} = 5*10^{-4}\mathrm{m}$. The food of small birds and fishes fits in the next level of $\mathrm{n}^2\mathrm{s} - \mathrm{n}^3\mathrm{s}$, or $5*10^{-4}\mathrm{m} - 5*10^{-2}\mathrm{m}$ (5 cm). The food of large marine mammals, large birds and fishes fits in Level 1, which is $5*10^{-2}\mathrm{m} - 5\mathrm{m}$. Level 4 ($5*10^{-8}\mathrm{m} - 5*10^{-6}\mathrm{m}$) covers molecules and small polymers; this size level hardly relates to the food size of any marine organism of interest. Because plastic in Level 4 has no particular influence on any organisms and its concentration in the marine system is very low (e.g.
+
+ $$
+ \frac{10\ \mathrm{kg\ plastics}}{1000\ \mathrm{kg\,m^{-3}} \cdot 1\ \mathrm{km^{3}\ water}} \approx \frac{1}{10^{11}}),
+ $$
+
+ its poisoning effect is ignored in this model.
163
+
164
+ <table><tr><td>Level</td><td>Diameter Range</td><td>Corresponding Feeder</td></tr><tr><td>1 big</td><td>5*10<sup>-2</sup> - 5</td><td>large mammal &amp; bird</td></tr><tr><td>2 middle</td><td>5*10<sup>-4</sup> - 5*10<sup>-2</sup></td><td>small bird, fish, turtle</td></tr><tr><td>3 small</td><td>5*10<sup>-6</sup> - 5*10<sup>-4</sup></td><td>filter feeder</td></tr><tr><td>4 tiny</td><td>5*10<sup>-8</sup> - 5*10<sup>-6</sup></td><td>N/A</td></tr></table>
165
+
166
+ Table 3. Size Level in Length (m).
167
+
168
+ <table><tr><td>Level</td><td>Volume Range</td><td>Corresponding Feeder</td></tr><tr><td>1</td><td>1.25*10<sup>-4</sup> - 1.25*10<sup>2</sup></td><td>large mammal &amp; bird</td></tr><tr><td>2</td><td>1.25*10<sup>-10</sup> - 1.25*10<sup>-4</sup></td><td>small bird, fish, turtle</td></tr><tr><td>3</td><td>1.25*10<sup>-16</sup> - 1.25*10<sup>-10</sup></td><td>filter feeder</td></tr><tr><td>4</td><td>1.25*10<sup>-22</sup> - 1.25*10<sup>-16</sup></td><td>N/A</td></tr></table>
169
+
170
+ Table 4. Size Level in Volume $\left( {\mathrm{m}}^{3}\right)$ .
171
+ Thus, measured in volume, the constants are $n = 10^6$ and $s = 1.25 * 10^{-22} \, \text{m}^3$ (the cubes of the length-based ratio 100 and smallest length $5*10^{-8}$ m).
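The length-to-volume conversion between Tables 3 and 4 can be checked in a few lines; a sketch:

```python
# The length ranges in Table 3 share a common per-level ratio of 100 and
# a smallest length of 5e-8 m; cubing converts them to Table 4's volumes.
n_len, s_len = 100, 5e-8      # per-level length ratio, smallest length (m)
n_vol = n_len**3              # the n used later in m = ln(n)/(r1 + r2)
s_vol = s_len**3              # smallest plastic volume (m^3)
```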
172
+
173
+ # The Initial Mass Concentration of Plastics
174
+
175
+ According to Moore, C.J. et al., 2001, the mass concentration of plastic of size $3.5*10^{-4}\mathrm{m} - 4.8*10^{-3}\mathrm{m}$ in the North Pacific Central Gyre is $5.1\mathrm{kg / km^2}$. This size range covers roughly half of size Level 2 in our model. Assume, then, $\mathrm{C}_{20} = 10\mathrm{kg / km^2}$. To study and compare the concentrations of objects in all four size levels, assume also that $\mathrm{C_{10} = C_{20} = C_{30} = C_{40} = 10kg / km^2}$. Because the main focus is the change of the concentration rather than the original content, the values of $\mathrm{C_{10}}$, $\mathrm{C}_{20}$, $\mathrm{C}_{30}$, and $\mathrm{C}_{40}$ are not varied in the later discussion.
176
+
177
+ # The Physical Abrasion and Photolysis Rate
178
+
179
+ Studies have shown that the time for complete decay of plastic in the ocean varies from 10-20 years to 450 years.
180
+
181
+ <table><tr><td>Material</td><td>Degradation Time (years)</td></tr><tr><td>plastic bag</td><td>10-20</td></tr><tr><td>commercial netting</td><td>30-40</td></tr><tr><td>foamed plastic buoy</td><td>80</td></tr><tr><td>plastic beverage bottle</td><td>450</td></tr></table>
182
+
183
+ Table 5. Degradation rates of different plastic products in the marine environment.
184
+
185
+ Because most plastics are non-biodegradable, this model assumes the entire disappearance of plastic is due to photolysis. Take 100 years as a standard; the average lifetime $\tau$ is then 100 yrs, and $t_{\frac{1}{2}} = \tau \cdot \ln 2 = 69.3\text{ yrs}\dots (17)$. By $t_{\frac{1}{2}} = \frac{\ln 2}{r_2}\dots (18)$, we get $r_2 = 1\%$/yr. Alternatively, let the time interval of decay between two size levels be $m = 100$ yrs; this is justified because after one such interval an object decays to $\frac{1}{10^6}$ of its original volume, which is viewed as disappearance in this case. Assuming $r_1 = r_2$ (justified in "The Effect of Photolysis Rate" section), Equation (7) with $n = 10^6$ then gives $r_2 = 6.9\%$/yr. A rough range of the decay rate is therefore $1\% \leq r_1 = r_2 \leq 6.9\%$.
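The two bounding estimates can be reproduced as follows; a sketch of Equations (17), (18) and (7):

```python
import math

# Lower bound: a 100-yr mean lifetime tau gives t_half = tau * ln 2
# (Eq. 17) and r2 = ln 2 / t_half = 1/tau (Eq. 18).
tau = 100.0
t_half = tau * math.log(2)          # ~69.3 yr
r2_low = math.log(2) / t_half       # 1 %/yr

# Upper bound: requiring one full level transition (volume factor
# n = 1e6) within m = 100 yr with r1 = r2, via Eq. (7):
# r1 + r2 = ln(1e6)/m, so r2 = ln(1e6)/(2m).
m = 100.0
r2_high = math.log(1e6) / (2*m)     # ~6.9 %/yr
```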
186
+
187
+ # The Effect of Input on Plastic Concentration
188
+
189
+ Plastic input into the ocean can come in different size levels. With respect to the influence of the input on the concentration of plastic, two topics are explored. The first is the relationship between the absolute rate of input and the plastic concentration; the second is the influence of the size distribution of the input on the concentration. Make the following assumptions: $\mathrm{C_{10} = C_{20} = C_{30} = C_{40} = 10kg / km^2}$, $r_1 = r_2 = 5\%$/yr (within the range of $1\% -6.9\%$). Then, the time interval of decay between two size levels is $m = \frac{\ln 10^6}{r_1 + r_2} = 138$ yrs.
192
+
193
+ # Effect of the Absolute Input Rate
194
+
195
+ During a 100-year period, assume the plastic input comes only in size Level 1 (inputs consisting of different sizes are discussed later), i.e., $\mathrm{I}_2 = \mathrm{I}_3 = 0$. Then, the concentration trends of plastics of the $1^{\mathrm{st}}$, $2^{\mathrm{nd}}$, and $3^{\mathrm{rd}}$ levels under different input rates are compared via the curves $C_1(t)$, $C_2(t)$, and $C_3(t)$.
196
+
197
+ ![](images/f1cdd1a7f01b29380908b1c99039389c05a4294d90edfd4e040f429f9e3937d9.jpg)
198
+ Figure 2. Predicted concentration change of Level 1 plastic under various Size 1 input $(\mathrm{I}_1 = 1, 0.1, 0.001\mathrm{kg} / \mathrm{km}^2\mathrm{yr})$ .
199
+
200
+ ![](images/cb8c336b9f198b61e969e003ce0620f2d50bbffe38f5078623166c98a38cc968.jpg)
201
+
202
+ ![](images/5c38c6db0b61ac193094b904936ed2adb57eead6df0d82f018ca4fed575a87db.jpg)
203
+ Figure 3. Predicted concentration change of Level 2 plastic under various Size 1 input $(\mathrm{I}_1 = 1, 0.1, 0.001\mathrm{kg / km^2yr})$.
+ Figure 4. Predicted concentration change of Level 3 plastic under various Size 1 input $(\mathrm{I}_1 = 1, 0.1, 0.001\mathrm{kg} / \mathrm{km}^2\mathrm{yr}$).
205
+
206
+ When the input is small enough (0.001), the concentration curves of all three levels show a pattern of exponential decay; this pattern is ideal when a high concentration is undesirable. A maximum input can then be obtained that keeps the concentration decaying over a long enough time. On the other hand, when the input is very large (1), the concentration experiences a very short decay period, if it decays at all, and then increases rapidly, which is extremely undesirable. With an input rate between them (0.1), the concentration does not rise significantly in the time interval of interest. It is important to note that a long-term increase of concentration may or may not follow a short term of decrease.
207
+
208
+ Comparing the behaviors of $C_1$, $C_2$ and $C_3$ under the same input rate, $C_3$ always rises most rapidly. This indicates that the organisms whose food is in Level 3 will probably be the most vulnerable.
209
+
210
+ In conclusion, the input amount exerts a determining effect on the plastic concentration of all sizes. Only an input below a certain amount leads to a continuous decrease of plastic concentration.
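That threshold input can be computed from Equation (13): since $\frac{dC_1}{dt} = I_1 - C_{10}\,k\,e^{-kt}$ with $k = \frac{1}{m} + r_1 + r_2$ is largest at $t = T$, $C_1$ keeps decreasing on $[0, T]$ exactly when $I_1 < C_{10}\,k\,e^{-kT}$. A sketch with the Discussion-section parameters:

```python
import math

def max_input(C10, m, r1, r2, T):
    """Largest Level-1 input rate for which Eq. (13)'s C1(t) is
    decreasing over the whole horizon [0, T]."""
    k = 1.0/m + r1 + r2
    return C10 * k * math.exp(-k * T)

# 30-year horizon with C10 = 10 kg/km^2, r1 = r2 = 5%/yr, m = 138 yr.
I_max_30yr = max_input(C10=10.0, m=138.0, r1=0.05, r2=0.05, T=30.0)
```

The result lands between the 0.001 and 0.1 kg/km²·yr cases plotted in Figures 2-4, consistent with the qualitative reading above.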
211
+
212
+ # Effect of the Distribution of Input
213
+
214
+ Pre-dump physical treatment of plastic, e.g., grinding, can change the composition of the plastic input. Let $\mathrm{D}_1$, $\mathrm{D}_2$, $\mathrm{D}_3$ denote three distributions of plastic input: entirely size Level 1 (no physical treatment), entirely size Level 2 (all plastic ground to size Level 2), and an equal mixture of size Levels 1 & 2 (half of the plastic input ground), respectively. The yield of Level 3 plastic waste is physically uncontrollable, so it is not discussed here.
217
+
218
+ The effects of the three input distributions are displayed in Figures 5, 6, 7 and 8.
219
+
220
+ ![](images/80a87c77556b191196467ab57fd4129bbcea5951d00d2fc01770d4c7660cdf09.jpg)
221
+ Figure 5. Predicted concentration change of Level 1 plastic under different distribution of input $(\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3)$ .
222
+
223
+ ![](images/ede0e97d52b4eacc2f8ce9bf17830c7e59972c619af1c4762b1a3699adc8fc1a.jpg)
224
+ Figure 6. Predicted concentration change of Level 2 plastic under different distribution of input $(\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3)$ .
225
+
226
+ ![](images/ce0543b83b562c831c6a1b2683fc477ba9102dbf427ca19d6bc0aa4b27486dee.jpg)
227
+
228
+ ![](images/219162be5788954d7d2d0f718d51e919d96ced485e7e0f7b463efd074eb0a57b.jpg)
229
+ Figure 7. Predicted concentration change of Level 3 plastic under different distribution of input $(\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3)$ .
230
+ Figure 8. Predicted change of the total plastic concentration under different distribution of input $(\mathrm{D}_1, \mathrm{D}_2, \mathrm{D}_3)$ .
231
+
232
+ The plastic concentration at each size level is affected by the size distribution of the input. For size Level 1 & 3 plastics, treated input $(\mathrm{D}_2)$ leads to the smallest concentration increase. However, for size Level 2 plastic, untreated input $(\mathrm{D}_1)$ leads to the smallest concentration increase. The total plastic concentration also favors input of size Level 2 only (treated input). However, for minimizing the total plastic concentration, $\mathrm{D}_2$ is not the optimal point: the optimal input solution is one with most input in size Level 2 and a little in size Level 1.
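The D1/D2/D3 comparison of the total concentration can be reproduced with a forward-Euler run of Equations (8)–(11); the value of m' is again an assumption, since the paper leaves it open:

```python
def total_after(I1, I2, T=100.0, dt=0.01,
                r1=0.05, r2=0.05, m=138.0, m_prime=50.0):
    """Total concentration C1+C2+C3+C4 after T years under a
    Level-1/Level-2 input split (I3 = 0, C_i0 = 10 kg/km^2)."""
    C1 = C2 = C3 = C4 = 10.0
    for _ in range(int(T / dt)):
        d1 = I1 - C1/m - (r1 + r2)*C1
        d2 = I2 + C1/m - C2/m - (r1 + r2)*C2
        d3 = C2/m + r1*(C1 + C2) - C3/m - r2*C3
        d4 = C3/m + r2*(C1 + C2 + C3) - C4/m_prime
        C1 += d1*dt; C2 += d2*dt; C3 += d3*dt; C4 += d4*dt
    return C1 + C2 + C3 + C4

D1 = total_after(I1=0.1, I2=0.0)     # untreated input
D2 = total_after(I1=0.0, I2=0.1)     # fully ground to Level 2
D3 = total_after(I1=0.05, I2=0.05)   # half treated
```

Because the system is linear in the inputs, D3 is exactly the average of D1 and D2, so the total-concentration optimum over pure strategies sits at a boundary; the interior optimum the text suggests reflects balancing several objectives at once.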
233
+
234
+ # The Effect of Photolysis Rate $\mathbf{r}_2$ on Plastic Concentration
235
+
236
+ # Factors Affect Photolysis Rate $\mathrm{r}_2$
237
+
238
+ Plastic polymers that are widely used in daily life and industry are mostly non-biodegradable. After being disposed of into the marine system, plastics are mainly subject to physical abrasion and photolysis, whose rates are $r_1$ and $r_2$ respectively. Physical abrasion results from friction and from impact abrasion. Impact abrasion is unpredictable and can be ignored if plastics are carried by currents in the open sea. Both frictional abrasion and the photolysis rate are very small; therefore we suppose the physical abrasion rate is constant and equal to the photolysis rate. The photolysis rate is determined by the equation $\frac{-d[P]}{dt} = k_p[P] = \phi_r I_a \dots$ (19), where $[P]$ is the plastic concentration, $k_p$ is the photolysis rate constant, i.e., $r_2$, $\Phi_{\mathrm{r}}$ is the quantum yield for reaction, and $I_a$ is the sunlight absorption rate.
241
+
242
+ The quantum yield $\Phi_{\mathrm{r}}$ is the number of molecules transformed by absorbed light divided by the total number of molecules that absorb light. This parameter depends on the chemical properties of the polymer and thus varies by plastic type.
243
+
244
+ The sunlight absorption rate $I_a$ is calculated by $I_{a} = \Sigma \varepsilon_{w}L_{w}\ldots (20)$, where $\varepsilon_{\mathrm{w}}$ is the molar extinction coefficient of the plastic at wavelength w, determined by plastic type, and $\mathrm{L_w}$ is determined by the day-averaged sunlight intensity, calculated for solar radiation from $280 - 800\mathrm{nm}$ at various latitudes. The lower the sunlight intensity of the area of interest, the slower the decay rate of plastic, and the larger the sizes of the remaining plastic.
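A sketch of Equations (19)–(20); every spectral number below is hypothetical, since the paper tabulates no $\varepsilon_w$ or $L_w$ values:

```python
# Hypothetical molar extinction coefficients epsilon_w (plastic-type
# dependent) and day-averaged irradiance terms L_w, sampled at a few
# wavelengths (nm) in the 280-800 nm band. Illustrative only.
epsilon = {300: 1.2e3, 400: 6.0e2, 550: 1.5e2, 700: 2.0e1}
L       = {300: 0.8,   400: 1.6,   550: 2.0,   700: 1.7}

I_a = sum(epsilon[w] * L[w] for w in epsilon)   # Eq. (20)
phi_r = 1e-4                                    # assumed quantum yield
rate = phi_r * I_a                              # -d[P]/dt, per Eq. (19)
```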
247
+
248
+ # Comparative Effect of $\mathbf{r}_2$ and I on Plastic Concentration
249
+
250
+ The effects of the input I and of the photolysis rate $r_2$ on plastic concentrations have been discussed separately in previous sections. In this section, their influences are compared to determine which one dominates.
251
+
252
+ According to the estimation, the average photolysis rate should be $1\%/\mathrm{yr} - 6.9\%/\mathrm{yr}$. However, $r_2$ depends not only on uncontrollable factors such as sunlight intensity, but also on controllable factors such as plastic type. Some plastic products can decay completely within 20 years while others take up to 450 years; the decay rate can thus be more flexible than the input amount and distribution. Assume the physical abrasion rate is held constant at $5\%/\mathrm{yr}$. Take $0.1\mathrm{kg}/\mathrm{km}^2\mathrm{yr}$ as the central point and choose the range of input rates to be $0.08\mathrm{kg}/\mathrm{km}^2\mathrm{yr}-0.12\mathrm{kg}/\mathrm{km}^2\mathrm{yr}$. Considering technology advancement and the temporary decrease in concentration at 20-25 years (Figure 2), 30 years from now is a reasonable time point at which to compare the impacts of the photolysis rate and the input, and a photolysis rate $r_2$ of up to $15\%/\mathrm{yr}$ can be assumed. Suppose all inputs are of size Level 1 plastic only $(\mathrm{I}_1)$. At the $30^{\mathrm{th}}$-year time point, the concentrations of plastics over the range of photolysis rates $(1\%-15\%)$ and inputs can be predicted by plotting $C_1(\mathrm{I}_1,r_2)$, $C_2(\mathrm{I}_1,r_2)$, and $C_3(\mathrm{I}_1,r_2)$.
253
+
254
+ ![](images/ffc01a3300acc6ed12ce9e8d59bcdacea8c34fdc697e92aee98cca00420a1e4f.jpg)
255
+ Figure 9. Predicted concentration change of Level 1 plastic with photolysis rate of $1\%/\mathrm{yr}-15\%/\mathrm{yr}$ under input $0.08\mathrm{kg/km}^2\mathrm{yr}-0.12\mathrm{kg/km}^2\mathrm{yr}$ .
256
+
257
+ ![](images/3f0266bb5d43d52d2274f352c5f059db934fa8504059d1afb957dc967d75d2ef.jpg)
258
+
259
+ ![](images/88594a4be5c791c41745a3906861f3b8f53aaadc524a037bd1780fe6a73686dd.jpg)
260
+ Figure 10. Predicted concentration change of Level 2 plastic with photolysis rate of $1\%/\mathrm{yr}-15\%/\mathrm{yr}$ under input $0.08\mathrm{kg/km}^2\mathrm{yr}-0.12\mathrm{kg/km}^2\mathrm{yr}$ .
261
+
262
+ Figure 11. Predicted concentration change of Level 3 plastic with photolysis rate of $1\%/\mathrm{yr}-15\%/\mathrm{yr}$ under input $0.08\mathrm{kg/km}^2\mathrm{yr}-0.12\mathrm{kg/km}^2\mathrm{yr}$ .
263
+
264
+ For plastics of all size levels, the concentration decreases exponentially as the photolysis rate increases, while it increases steadily and linearly as the input increases.
265
+
266
+ Comparing Figures 9, 10, and 11, the photolysis rate has a relatively larger influence on the concentration of size Level 2 plastics than on size Level 1 & 3 plastics. In other words, $\left|\frac{\partial C_2}{\partial r_2}\right| > \left|\frac{\partial C_3}{\partial r_2}\right| > \left|\frac{\partial C_1}{\partial r_2}\right|\dots$ (21). The influence of $r_2$ weakens as $r_2$ gets bigger, i.e., $\left|\frac{\partial C_i}{\partial r_2}(a)\right| > \left|\frac{\partial C_i}{\partial r_2}(b)\right|\dots$ (22) when $a < b$. For all three levels, there are one or several equilibrium points at which $\left|\frac{\partial C_i}{\partial r_2}\right| = \frac{\partial C_i}{\partial I_1}\dots$ (23); these points are important for policy determination. Specifically, within the range of approximately $1\%/\mathrm{yr}-7\%/\mathrm{yr}$, the concentration of size Level 2 plastic decreases dramatically as the photolysis rate increases, while it decreases only slightly as the input decreases.
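The partial derivatives in (21)–(23) can be estimated by finite differences on the ODE system at the 30-year mark; the value of m' and the step h are illustrative:

```python
import math

def C_levels(I1, r2, r1=0.05, T=30.0, dt=0.01, m_prime=50.0):
    """Euler integration of Eqs. (8)-(11) with Level-1-only input;
    m is recomputed from Eq. (7) since it depends on r2."""
    m = math.log(1e6) / (r1 + r2)
    C1 = C2 = C3 = C4 = 10.0
    for _ in range(int(T / dt)):
        d1 = I1 - C1/m - (r1 + r2)*C1
        d2 = C1/m - C2/m - (r1 + r2)*C2
        d3 = C2/m + r1*(C1 + C2) - C3/m - r2*C3
        d4 = C3/m + r2*(C1 + C2 + C3) - C4/m_prime
        C1 += d1*dt; C2 += d2*dt; C3 += d3*dt; C4 += d4*dt
    return C1, C2, C3

# Forward differences in r2 and I1 around the central point.
I1, r2, h = 0.1, 0.05, 1e-4
base = C_levels(I1, r2)
dC_dr2 = [(a - b)/h for a, b in zip(C_levels(I1, r2 + h), base)]
dC_dI1 = [(a - b)/h for a, b in zip(C_levels(I1 + h, r2), base)]
```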
279
+
280
+ # Limitations
281
+
282
+ The major limitation of this model is its inability to describe the long-term behavior of the system. Take Equation (13), $C_1 = I_1t + C_{10} \cdot e^{\left[-\frac{1}{m} -(r_1 + r_2)\right]t}$, as an example.
285
+
286
+ As $t$ increases, the term $I_1t$ keeps increasing linearly, independently of the rate of decay and the other terms. This makes the pollution problem seem unsolvable in the long run. A restrictive term is needed to limit the growth of $I_1t$ and the other input terms; for example, replace $I_1t$ with $I_1f(t)$, where the slope of $f(t)$ decreases as $t$ increases. However, the long-term behavior is of relatively little interest in this problem, because all the circumstances, such as the production of plastic, recycling technology, and even the marine environment, will be quite different after a long time. The model still works fairly well over a short time, say 30-50 years, with reasonable choices of the coefficients.
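One concrete choice of such an $f(t)$, assuming a saturation timescale $T_0$ (a parameter not specified in the text): $f(t) = T_0(1 - e^{-t/T_0})$ behaves like $t$ near $t = 0$ but flattens out for $t \gg T_0$, capping the input term.

```python
import math

def f(t, T0=50.0):
    """Saturating replacement for t: slope ~1 near t=0, ~0 for t >> T0."""
    return T0 * (1.0 - math.exp(-t / T0))

# Slope shrinks from ~1 early on to near 0 after a few multiples of T0.
early_slope = f(1.0) - f(0.0)
late_slope = f(201.0) - f(200.0)
```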
287
+
288
+ There are several other limitations. First, the physical abrasion rate of plastic objects is not modeled satisfactorily due to a lack of relevant information. Second, the sizes of plastic objects are classified so that the exponential range of each level is the same; that is, the maximum of each level is a fixed multiple of its minimum. The size levels therefore cannot match perfectly with the food sizes of marine animals, though the discrepancies are not significant. Third, the toxicity of plastic molecules in sea water is ignored in the model because their mass concentration is very small. It is possible that this toxicity has unknown influences on marine organisms in spite of its low concentration.
291
+
292
+ The model can be improved by: (i) studying the behavior of the system at infinity; (ii) digging deeper into the mechanisms of the physical and chemical reactions of plastic; (iii) exploring a more reasonable way to classify the sizes of plastic objects; (iv) exploring the toxicity of plastic molecules and the release of such toxicity.
293
+
294
+ # Implications and Advice
295
+
296
+ Referring to Table 3, for the purpose of further discussion, this model defines the marine organisms corresponding to plastic object size Level 1 as "O1". Similarly, the marine organisms related to size Levels 2, 3 and 4 are referred to as O2, O3 and O4.
297
+
298
+ # Geographical Influence
299
+
300
+ From the results of the sunlight intensity and decay rate study, sunlight intensity and decay rate are positively related. More specifically, at places with lower sunlight intensity the decay rate is slower, so large plastic objects $(\mathrm{C}_1)$ are likely to outnumber the smaller ones $(\mathrm{C}_2$, $\mathrm{C}_3$ and $\mathrm{C}_4)$, and vice versa.
301
+
302
+ Although sunlight intensity is determined by many geographical factors such as rainfall and latitude, it is useful to discuss the effects of one factor and hold all others constant. In this model, latitude is chosen based on the fact that sunlight intensity is higher in low latitude areas and lower in high latitude areas given all other variables constant.
303
+
304
+ Therefore, the implication here is that in high latitude areas there are mainly large size plastic objects because of a lower decay rate.
305
+
306
+ As a result, organisms O1 will be largely impacted. The advice is to set up effective ocean cleaning plans especially in high latitude areas.
307
+
308
+ # Plastic Inputs
309
+
310
+ Inputs and Concentration (General)
311
+
312
+ Based on Figures 2, 3, and 4, it is verified that any increase in the plastic inputs causes an increase in the mass concentration of plastic objects at all size levels in the ocean; in other words, the relationship between plastic inputs and mass concentration is always positive. Thus, in general, no increase of either land-based or sea-based plastic sources is recommended.
313
+
314
+ Because the plastic inputs affect ocean plastics at all levels at all times, it is advisable for governments to use regulation or plastic waste tax tools for both land-based and sea-based sources.
315
+
316
+ # Indicator
317
+
318
+ The results also show that the mass concentration of Level 3 is a good indicator of the classified plastic system in the ocean because holding the inputs constant, the mass concentration of Level 3 is the most responsive to change.
319
+
320
+ Monitoring the mass concentration of Level 3 thus provides a good indication of the balance of the whole ecosystem, because the loss of the vulnerable O3 easily breaks the system.
321
+
322
+ # "Illusion"
323
+
324
+ Another important implication of the results is that once inputs are reduced, the mass concentration of plastic objects decreases in the short term but has the potential to rise again in the long term (refer to Figures 2, 3, and 4).
325
+
326
+ Therefore, it is vital for governments to undertake monitoring plans constantly and continuously in order to understand the real situation. Simply speaking, do not get "happy" too early by the "illusion".
327
+
328
+ # Input Control
329
+
330
+ This model could be used to find the maximum plastic input compatible with protecting any given group of organisms, once data are given.
331
+
332
+ With a limit on inputs, it is easy for governments to use the plastic inputs model established above to control the four main sources: consumer (land-based), industry (land-based), shipping (sea-based) and fishing (sea-based).
333
+
334
+ # Plastic Wastes Pre-Dump Treatment
335
+
336
+ According to the results of this model, the best treatment type (among $\mathrm{D}_1$, $\mathrm{D}_2$ and $\mathrm{D}_3$) for minimizing the mass concentration of plastic objects at each size level is as follows:
337
+
338
+ <table><tr><td>Level 1</td><td>Total-Treatment (D2)</td></tr><tr><td>Level 2</td><td>No-Treatment (D1)</td></tr><tr><td>Level 3</td><td>Total-Treatment (D2)</td></tr><tr><td>Total</td><td>Total-Treatment (D2)</td></tr></table>
339
+
340
+ Table 6. Pre-dump Treatment and Concentration
341
+
342
+ This table implies 3 things: (i) for the purpose of protecting both organism O1 and O3, total-treatment before plastic waste dump is the best choice; (ii) for the purpose of protecting organisms O2 no-treatment of plastic wastes input would be a better choice, which is counter-intuitive; (iii) for the purpose of protecting the ocean ecosystem as a whole, the preferable plan is to give more than $50\%$ of the plastic wastes total-treatment.
343
+
344
+ The suggestion for governments based on this implication is that controlling the pre-dump treatment of plastic waste is efficient for different ocean organism protection purposes. If there is no specific target organism group to protect, governments are advised to mandate pre-dump treatment of plastic waste to protect the ocean ecosystem as a whole.
345
+
346
+ # Photolysis vs. Input
347
+
348
+ This model shows that for Level 2 (over a 30-year period), the influence of photolysis overwhelms the influence of input. This implies that, for the purpose of protecting organisms O2, it is better to focus on how to increase the photolysis rate of plastics.
349
+
350
+ At the same time, the approach to protecting organisms O1 and O3 depends on the current level of plastic quality (we define higher-quality plastics as those with higher photolysis rates). If the average quality is already high enough that the influence of photolysis is small compared to the influence of inputs, then it is better to focus on limiting inputs, and vice versa.
351
+
352
+ Thus, this implication proposes to governments that when the average quality of plastic is low, encouraging easily degradable materials for plastic products is more effective than restricting plastic waste inputs; when the average quality of plastic is high, it is better to focus on restricting plastic waste inputs.
353
+
354
+ # References:
355
+
356
+ Moore, C.J., et al. "A Comparison of Plastic and Plankton in the North Pacific Central Gyre." Marine Pollution Bulletin 42 (2001): 464-488.
+ The Interdisciplinary Contest in Modeling. Lawrence University, Appleton, Seeley G. Mudd Library. 21 Feb. 2010. <http://www.comap.com/undergraduate/contest/mcm/contest/2010/problems/ICM_2010.pdf>
359
+ Azzarello, Marie Y. and Edward S. Van Vleet. "Marine Birds and Plastic Pollution." Marine Ecology. 37 (1987): 295-303.
360
+ Cadée, Gerhard C. "Seabirds and floating plastic debris." Marine Pollution Bulletin 44 (2002): 1294-1295. Science Direct. Lawrence University, Appleton, Seeley G. Mudd Library. 21 Feb. 2010.
364
+ Day, R. H. The Occurrence and Characteristics of Plastic Pollution in Alaska's Marine Birds. M.S. Thesis, University of Alaska. Fairbanks, AK, 111pp. 1980.
365
+ Macalady, Donald, ed. Perspectives in Environmental Chemistry. New York Oxford: Oxford University Press, 1998.
366
+ O'Hara et al., A Citizen Guide to Plastics in the Ocean: More Than A Litter Problem. Washington, D.C.: Centre for Environmental Education, 1988.
367
+ "Plankton Definitions." Soil & Water Conservation Society of Metro Halifax. 21 Feb. 2010. <http://www.chebucto.ns.ca/ccn/info/Science/SWCS/plankton.html>.
368
+ US Congress, Office of Technology Assessment, Wastes in Marine Environments. OTAO-334 Washington, DC: U.S. Government Printing Office, April 1987.
369
+ United Nations Environment Programme, Regional Seas Programme. "Guidelines on the Use of Market-based Instruments to Address the Problem of Marine Litter." 2009.
370
+ United Nations Environment Programme, Regional Seas Programme. "Marine Litter: A Global Challenge." April 2009.
MCM/2010/C/8048/8048.md ADDED
@@ -0,0 +1,333 @@
1
+ # Summary
2
+
3
+ To study the marine debris problem, we abstract the ocean system as a simplified input-output system. The input of the ocean system is debris, and the output is impacts.
4
+
5
+ The Hawaiian monk seal is taken as an example to study the potential impacts of marine debris on the ocean ecosystem. Along with the increase of derelict fishing gear and other marine debris, the annual number of Hawaiian monk seal entanglements shows an increasing trend with fluctuation. This trend is divided into a certain growth trend and a smooth random change trend. The grey model GM (1, 1) and a time series analysis method are used to predict the certain growth trend and the smooth random change trend respectively; the two trends are then combined to generate the predictive value. Error analysis shows that the predictive data closely match the actual data during the period 1985-1999. This paper comes to the following conclusions: the number of entangled Hawaiian monk seals will increase in the short term; over a long period, the Hawaiian monk seal will probably vanish in the near future and the food chain will be damaged, which may lead to ecosystem disorder.
6
+
7
+ To investigate the annual situation of the ocean system, we establish an ocean system evaluation model. Quantitative debris and impacts data of the ocean system are obtained based on the analytical hierarchy process (AHP). This paper puts forward an evaluation vector, consisting of two components, Debris and Impacts, to evaluate the situation of the ocean system. A comparison function is constructed on the basis of the evaluation vector to compare the situation of the ocean system across different periods. The results indicate that the amount of debris increases and the situation of the ocean system gets worse as time passes. Contrast between the predictive impacts and the actual impacts indicates that recycling actions improve the situation and bring a positive effect.
8
+
9
+ A feedback system is provided and a feedback variable, Recovery, is brought into the system. Analyzing the system, we conclude that decreasing the amount of Debris or increasing the amount of Recovery contributes to improving the situation.
10
+
11
+ Finally, we submit a research report to the expedition leader summarizing our findings, proposals for solutions, and needed policies.
12
+
13
+ Key words: Hawaiian monk seal; Grey Model GM (1, 1); Time Series Analysis; Ocean System Evaluation Model; Analytical Hierarchy Process
14
+
15
+ # Quantitative Marine Debris Impacts and Evaluation of Ocean System
16
+
17
+ # 1. Introduction
18
+
19
+ Two of the key characteristics that make plastics so useful--their light weight and durability--also make inappropriately handled waste plastics a significant environmental threat. Plastics are readily transported long distances from source areas and accumulate mainly in the oceans, where they have a variety of significant environmental and economic impacts [1]. The United Nations Joint Group of Experts on the Scientific Aspects of Marine Pollution (GESAMP) estimated that land-based sources are responsible for up to $80\%$ of marine debris, the remainder being due to sea-based activities [2]. Masahisa et al. [3] used numerical simulation methods to research the movement and accumulation of floating marine debris drifting throughout the Pacific Ocean; they found that a large amount of marine debris is concentrated in specific regions located far from the sources of much of that debris. This region is often referred to in the media as the Great Pacific Ocean Garbage Patch (GPOGP).
20
+
21
+ In our paper, the main questions that we investigate are:
22
+
23
+ - What are the potential short- and long-term impacts of the marine debris on the ocean ecosystem?
24
+ - What are the sources of marine debris? What is the current situation of the ocean? If the situation is poor, how can it be improved?
25
+
26
+ # 2. Description and Analysis
27
+
28
+ We put forward a method to study the marine debris problem which exists in the Ocean System. To simplify the problem, we first abstract the Ocean System as a simplified input-output system (Fig.1).
29
+
30
+ ![](images/5c449457a5ba10f26c61301c3691a2c34ebebba7e9acd668f3aa61402d6d6d7f.jpg)
31
+ Movement and Accumulation
32
+ Fig.1 input-output ocean system
33
+
34
+ Viewed in isolation, a good Ocean System should have few negative impacts on the marine ecosystem. Viewed in terms of input, a good Ocean System is one with less marine debris. In order to reduce the negative impacts, solutions should include reducing the input.
35
+
36
+ # 2.1 Debris Sources
37
+
38
+ A review of the available data on debris found worldwide indicates that the dominant types and sources of debris come from what we consume (including food wrappers, cigarettes), what we use in transporting ourselves by sea, and what we harvest from the sea (fishing gear). Marine debris researchers traditionally classify debris sources into two categories: land- or sea-based, depending on where the debris enters the water [4]. In our paper, to cover the debris sources completely and to ease calculation, the debris data are divided into two categories:
41
+
42
+ - Direct Data: direct data contain all the land- or sea-based marine debris, which mainly consists of plastics, rubber and leather, textiles, and so on.
43
+ - Indirect Data: along with population growth and economic development, more debris is manufactured, discarded, and finally enters the ocean, so population and economy are considered indirect data.
44
+
45
+ # 2.2 Impacts
46
+
47
+ Marine debris is a global issue, affecting all the major bodies of water on the planet—above and below the water's surface. This debris can negatively impact wildlife, habitats, and the economy of coastal communities [4].
48
+
49
+ Since marine debris concentrates in the ocean, its main impacts fall on the ocean ecosystem and the coastal economy. In the following sections, we model and focus on its impacts on the ocean ecosystem and coastal economy.
50
+
51
+ # 3. Model for Impacts on Ocean Ecosystem
52
+
53
+ One of the most notable types of impacts on the ocean ecosystem from marine debris is wildlife entanglement. Numerous marine animals become entangled in marine debris each year [5]. In this paper, we take the entangled Hawaiian monk seal as an example to study the impacts on the ecosystem.
54
+
55
+ The annual number of Hawaiian monk seal entanglements comes from NOAA [6], as shown in Fig.2.
56
+
57
+ From Fig.2, a sequence of the entanglement numbers for the 15 years (1985-1999) can be generated, and it is not difficult to see a growing trend with fluctuation.
58
+
59
+ $$
60
+ \begin{array}{l} X_0 = X_0(1), X_0(2), \dots, X_0(15) \\ = 2, 4, 12, 15, 12, 4, 7, 14, 7, 6, 11, 22, 16, 18, 25 \end{array}
61
+ $$
62
+
63
+ We divide sequence $X_0$ into sequence $Y_0$ and sequence $Z_0$; $Y_0$ reflects the certain growth trend of $X_0$, and $Z_0$ reflects the smooth random change trend of $X_0$.
64
+
65
+ ![](images/f5a0c5c1cb73859c3fcdb6ac314f5172efe9079606a68fd50824e3e4a4e28869.jpg)
66
+ Fig.2 Number of Hawaiian Monk Seal Entanglements Observed
67
+
68
+ # 3.1 GM (1, 1) Model Predicts Certain Growth Trend of Sequence $X_{0}$
69
+
70
+ GM (1, 1), a single-variable first-order Grey Model, is the most commonly used Grey Model and is particularly suitable for small samples. The modeling procedure is summarized as follows:
71
+
72
+ - Given the original data sequence $X_0 = X_0(1), X_0(2), \dots, X_0(15)$ , where $X_0(i)$ corresponds to the time $i$ .
73
+ A new sequence $X_{1} = X_{1}(1), X_{1}(2), \dots, X_{1}(15)$ is generated,
74
+
75
+ $$
76
+ \text{where } X_1(k) = \sum_{m=1}^{k} X_0(m), \quad k = 1, 2, \dots, 15.
77
+ $$
78
+
79
+ From $X_{1}$ we can form the first-order differential equation $\frac{dX_1}{dt} + aX_1 = u$, from which it is possible to obtain $a$ and $u$ with $\begin{bmatrix} a \\ u \end{bmatrix} = (B^T B)^{-1} B^T y_N$, where
80
+
81
+ $$
82
+ B = \left[ \begin{array}{cc} -\frac{1}{2}(X_1(2) + X_1(1)) & 1 \\ \vdots & \vdots \\ -\frac{1}{2}(X_1(15) + X_1(14)) & 1 \end{array} \right] \text{ and } y_N = \left(X_0(2), X_0(3), \dots, X_0(15)\right)^T.
83
+ $$
84
+
85
+ The predictive function is $X_{1}(k) = \left(X_{0}(1) - \frac{u}{a}\right)e^{-a(k - 1)} + \frac{u}{a}$ .
86
+
87
+ With the help of MATLAB, we get $a = -0.092352$ , $u = 5.645140$ . Therefore, we obtain the certain growth trend sequence $Y_0$ (Show in Fig.3).
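The least-squares step above is easy to reproduce. The following sketch (plain Python, standing in for the paper's MATLAB code, which is not shown) fits GM (1, 1) to the entanglement sequence and regenerates the forecast; the fitted parameters should come out close to the reported $a = -0.092352$, $u = 5.645140$.

```python
# Sketch of the GM(1,1) fit: accumulate X0, build B and y_N, solve the
# 2x2 normal equations by hand, then forecast via the predictive function.
from math import exp

X0 = [2, 4, 12, 15, 12, 4, 7, 14, 7, 6, 11, 22, 16, 18, 25]

# Accumulated generating operation: X1(k) = sum of X0(1..k)
X1 = []
s = 0.0
for x in X0:
    s += x
    X1.append(s)

# Rows of B are [-(X1(k) + X1(k-1))/2, 1] for k = 2..15, y = X0(2..15)
z = [-(X1[k] + X1[k - 1]) / 2 for k in range(1, len(X1))]
y = X0[1:]
n = len(z)
szz = sum(v * v for v in z)
sz = sum(z)
szy = sum(v * w for v, w in zip(z, y))
sy = sum(y)
det = szz * n - sz * sz          # determinant of the 2x2 matrix B^T B
a = (n * szy - sz * sy) / det
u = (szz * sy - sz * szy) / det

def x1_hat(k):
    """Predictive function X1(k) = (X0(1) - u/a) e^{-a(k-1)} + u/a."""
    return (X0[0] - u / a) * exp(-a * (k - 1)) + u / a

# Recover the original-series forecast by differencing X1_hat
forecast = [X0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(2, 31)]
print(a, u)  # a should be negative (growth trend), u positive
```

Since the series grows, $a$ comes out negative and the restored forecasts increase, matching the certain growth trend $Y_0$ reported below.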
88
+
89
+ It is easy to calculate the value of certain growth trend in the 15 years (1985-1999).
90
+
91
+ $$
92
+ \begin{array}{l} Y_0 = Y_0(1), Y_0(2), \dots, Y_0(15) \\ = 2, 6, 7, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 20 \end{array}
93
+ $$
94
+
95
+ Therefore, the value of certain growth trend in the next 15 years (2000-2014) can be predicted:
96
+
97
+ $$
98
+ \begin{array}{l} Y_0' = Y_0(16), Y_0(17), \dots, Y_0(30) \\ = 22, 24, 27, 29, 32, 35, 39, 42, 47, 51, 56, 61, 67, 74, 81 \end{array}
99
+ $$
100
+
101
+ ![](images/f9787e90e83234acbb7fdb7fab690cce4429b82d80f35b191abcb275afc185f8.jpg)
102
+ Fig.3 Original and Grey Predictive Data
103
+
104
+ ![](images/24f480c99d8ad2db3796cc7df10cda76f31a9094be9cb9b72818781479ebe6ba.jpg)
105
+ Fig.4 Smooth Random Change Trend
106
+
107
+ # 3.2 Time Series Analysis Predicts Smooth Random Change Trend of $X_{0}$
108
+
109
+ Eliminating the influence of certain growth trend $Y_{0}$ from the original sequence $X_{0}$ , we can obtain smooth random change trend $Z_{0}$ of the former 15 years.
110
+
111
+ $$
112
+ Z_0(k) = X_0(k) - Y_0(k), \quad k = 1, 2, \dots, 15.
113
+ $$
114
+
115
+ Figure 4 shows the smooth random change trend $Z_0$. We use a time series ARMA model to predict this trend for the later 15 years.
116
+
117
+ Taking the sequence $Z_0$ as the sample, we obtain the sample autocorrelation function and partial autocorrelation function, shown in Fig.5.
118
+
119
+ Figure 5 shows that the autocorrelation coefficient, denoted by $\rho_{k}$, tails off, while the partial autocorrelation coefficient $\varphi_{kk}$ cuts off after lag 2. Using this cut-off property, we set the order $p = 2$ and hence choose an AR (2) model. With the help of EViews, we get the predictive values of the smooth random change trend $Z_{0}$; the trend is shown in Fig.6.
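EViews is not required to reproduce an AR (2) fit of this kind. As a hedged sketch, the Yule-Walker equations (a standard estimator, though not necessarily the one EViews uses, so the coefficients may differ slightly) give the two AR coefficients directly from the first two sample autocorrelations of $Z_0$:

```python
# Yule-Walker estimate of AR(2) coefficients for the detrended series
# Z0 = X0 - Y0 (values taken from the sequences reported in the paper).
X0 = [2, 4, 12, 15, 12, 4, 7, 14, 7, 6, 11, 22, 16, 18, 25]
Y0 = [2, 6, 7, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 20]
Z0 = [x - y for x, y in zip(X0, Y0)]

n = len(Z0)
mean = sum(Z0) / n

def autocorr(lag):
    """Sample autocorrelation r(lag), biased normalization over the full sum."""
    num = sum((Z0[t] - mean) * (Z0[t + lag] - mean) for t in range(n - lag))
    den = sum((z - mean) ** 2 for z in Z0)
    return num / den

r1, r2 = autocorr(1), autocorr(2)

# Yule-Walker for AR(2): phi1 = r1(1 - r2)/(1 - r1^2), phi2 = (r2 - r1^2)/(1 - r1^2)
phi1 = r1 * (1 - r2) / (1 - r1 ** 2)
phi2 = (r2 - r1 ** 2) / (1 - r1 ** 2)

# One-step-ahead forecast: Z(t+1) ~ mean + phi1*(Z(t)-mean) + phi2*(Z(t-1)-mean)
z_next = mean + phi1 * (Z0[-1] - mean) + phi2 * (Z0[-2] - mean)
print(phi1, phi2, z_next)
```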
120
+
121
+ ![](images/e5b6ff5c78fefcbe28dbf38c607980893e087ee88cacfccb7ef45e55f80dbfb0.jpg)
122
+ Fig.5 Autocorrelation and Partial Correlation
123
+
124
+ ![](images/2eef156f4701d5dea02b2e3fa7888142bae6e72cc18854a00a2b5be382a3d728.jpg)
125
+ Fig.6 Smooth Random Change Trend
126
+
127
+ Finally, combining the predictive values of the certain growth trend $Y_0$ and the smooth random change trend $Z_0$, we obtain the predictive values of sequence $X_0$. The predictive values for the later 15 years are:
128
+
129
+ $$
130
+ \begin{array}{l} X_0' = X_0(16), X_0(17), \dots, X_0(30) \\ = 26, 26, 25, 26, 34, 38, 39, 39, 46, 54, 57, 59, 66, 76, 83 \end{array}
131
+ $$
132
+
133
+ # 3.3 Result Analysis
134
+
135
+ We validate our model by examining the historical entanglement numbers between 1985 and 1999. The comparison between the actual and predictive data for those 15 years is shown in Fig.7; the change trends of the two series are similar. The correlation coefficient between the actual and predictive data reaches 0.9767.
136
+
137
+ # Error Analysis: Obvious Deviation since 2000
138
+
139
+ However, there is an obvious deviation between the actual and predictive data since 2000. This difference can be explained by Fig.8: since 1999, the amount of recovered derelict fishing gear has begun to increase, and correspondingly the number of entanglements decreases. Our prediction, however, is mainly based on the earlier years, when there was no large-scale recovery program.
140
+
141
+ ![](images/a5932af7fb387971c55a32b52e0e2c82ce83014c174cafe30004f57e5b8d4d73.jpg)
142
+ Fig.7 Comparative Result Diagram
143
+
144
+ ![](images/5d23d348f09b55d21f943cb3ce0992a0678235d6d03e869279650b1b4e4a4646.jpg)
145
+ Fig.8 Amount of Recovered Fishing Gear
146
+
147
+ # 3.4 Analysis of the Impacts on the Ocean Ecosystem
148
+
149
+ In the short term, according to the predictive data, more and more monk seals will be entangled as time passes. Hawaiian monk seals are among the most endangered mammals in the world, and their population is in decline; as of 2008, an estimated 1,200 individuals remained. Combined with our predictive data, it is not hard to predict that the Hawaiian monk seal will vanish in the near future if we do not take measures to prevent it.
150
+
151
+ In the long term, the extinction of the Hawaiian monk seal would not be an isolated event. Since the Hawaiian monk seal is a member of the food chain, its extinction would seriously affect other species in the chain. Moreover, as the food chain is a significant segment of the ocean ecosystem, its destruction could lead to ecosystem disorder. We must also remember that the Hawaiian monk seal is not the only species impacted by marine debris; many other marine creatures, such as reefs, whales, and seabirds, are all affected. Therefore, considering more species, the negative impacts of marine debris on the ocean ecosystem would be huge.
152
+
153
+ # 4. Ocean System Evaluation Model
154
+
155
+ # 4.1 Analytical Hierarchy Process
156
+
157
+ - Divide Layers. We divide Debris and Impacts into several layers, as Figs. 9-10 show.
158
+
159
+ ![](images/dc8ced693332ba3b26af0b508754731439c608ecff4b0b036ca063275bd994e3.jpg)
160
+ Fig.9 Debris
161
+
162
+ ![](images/7ba68854687f98352b9a2e053727f032686eb50fc66374fb7938b3b0bca0641b.jpg)
163
+ Fig.10 Impacts
164
+
165
+ # Determine Weights
166
+
167
+ We detail the calculation for Debris; Impacts can be calculated in the same way. Understanding the impacts of different types of marine debris is important: not all types of debris are equally harmful, and not all organisms or regions are equally vulnerable. After comparing the effect of each pair of criteria in the same layer on the higher layer, we construct the pairwise comparison matrix using Saaty's scale. For example, $a_{13}$ indicates the relative effect on direct data of plastics versus textiles. Let $M_{1}$ be the pairwise comparison matrix for Debris:
168
+
169
+ $$
170
+ M _ {1} = \left[ \begin{array}{c c c} 1 & 1 / 3 & 1 / 2 \\ 3 & 1 & 3 \\ 1 / 2 & 1 / 3 & 1 \end{array} \right]
171
+ $$
172
+
173
+ After processing the matrix with the summation method, we obtain the weight vector:
174
+
175
+ $$
176
+ \omega_1 = (0.539, 0.164, 0.297)
177
+ $$
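The summation method mentioned above can be sketched as follows: normalize each column of the pairwise comparison matrix, then average across each row. Note that which weight belongs to which criterion depends on the row ordering of the matrix; the code below simply applies the method to $M_1$ as printed.

```python
# Summation (column-normalization) method for AHP weights, applied to the
# pairwise comparison matrix M1 as printed above.
M = [
    [1.0, 1 / 3, 1 / 2],
    [3.0, 1.0, 3.0],
    [1 / 2, 1 / 3, 1.0],
]
n = len(M)

# Normalize each column so it sums to 1
col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
normalized = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Average across each row to get the weight of each criterion
weights = [sum(normalized[i][j] for j in range(n)) / n for i in range(n)]
print(weights)  # the three weights sum to 1
```

With this matrix as printed, the second row (the criterion rated 3 against both others) receives the largest weight; how the weights map onto PD, RLD, and TD therefore depends on the criterion ordering used when the matrix was built.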
178
+
179
+ So we can obtain the formula:
180
+
181
+ $$
182
+ \mathrm{Direct\ Data} = 0.539 \times PD + 0.164 \times RLD + 0.297 \times TD \tag {1}
183
+ $$
184
+
185
+ where our symbols are defined in Table 1.
186
+
187
+ Table 1. Symbol definitions
188
+
189
+ <table><tr><td>Abbreviations</td><td>Meaning</td><td>Abbreviations</td><td>Meaning</td></tr><tr><td>DD</td><td>Direct Data</td><td>RLD</td><td>Rubber and Leather Data</td></tr><tr><td>ID</td><td>Indirect Data</td><td>TD</td><td>Textiles Data</td></tr><tr><td>PD</td><td>Plastics Data</td><td>GPOGP</td><td>Great Pacific Ocean Garbage Patch</td></tr><tr><td>POD</td><td>Population Data</td><td>OEI</td><td>Ocean Ecosystem Impacts</td></tr><tr><td>ED</td><td>Economy Data</td><td>CEI</td><td>Coastal Economy Impacts</td></tr></table>
190
+
191
+ # - Formulas
192
+
193
+ Using a similar method, we arrive at equations as follows:
194
+
195
+ $$
196
+ \mathrm{Debris} = 0.8 \times DD + 0.2 \times ID \tag {2}
197
+ $$
198
+
199
+ $$
200
+ \mathrm{Indirect\ Data} = 0.5 \times POD + 0.5 \times ED \tag {3}
201
+ $$
202
+
203
+ $$
204
+ \mathrm{Impacts} = 0.5 \times OEI + 0.5 \times CEI \tag {4}
205
+ $$
206
+
207
+ # Data Normalization
208
+
209
+ For the sake of consistency, we need to normalize the original data, which we denote as $V_{or}$. Let $V_{\max}$ and $V_{\min}$ denote the maximum and minimum values in the whole table. The adjusted value is
210
+
211
+ $$
212
+ V _ {a d} = \frac {V _ {o r} - V _ {\min}}{V _ {\max} - V _ {\min}} \tag {5}
213
+ $$
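Formula (5) is ordinary min-max normalization. A minimal sketch, using made-up yearly figures rather than data from the paper:

```python
# Min-max normalization of formula (5): scale every value into [0, 1]
# relative to the global maximum and minimum of the table.
def min_max(values):
    v_min, v_max = min(values), max(values)
    return [(v - v_min) / (v_max - v_min) for v in values]

# Illustrative (made-up) yearly debris figures, not data from the paper:
raw = [120.0, 150.0, 180.0, 240.0, 300.0]
adjusted = min_max(raw)
print(adjusted)  # smallest value maps to 0.0, largest to 1.0
```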
214
+
215
+ # 4.2 Evaluation Vector and Comparison Function
216
+
217
+ A good Ocean System should have few negative impacts on the ocean ecosystem and coastal economy, so Impacts can be used to evaluate the situation of the Ocean System. Debris is also an important factor affecting the situation of the Ocean System. Since the two metrics may not have the same magnitude, it is not appropriate to add or multiply them.
218
+
219
+ Hence, we form an evaluation vector (EV) consisting of the two metrics:
220
+
221
+ $$
222
+ EV = \left(\mathrm{Debris}, \mathrm{Impacts}\right) \tag {6}
223
+ $$
224
+
225
+ This is our final composite measure for evaluating the situation of the Ocean System. The lower both components of the vector are, the better the system.
226
+
227
+ Let $EV_{i}$ be the evaluation vector of the Ocean System in year $i$: $EV_{i} = (D_{i}, I_{i})$, where $D_{i}$ is Debris and $I_{i}$ is Impacts.
228
+
229
+ In order to evaluate the annual situation of the Ocean System and compare the Ocean System across years, we construct the comparison function as follows:
230
+
231
+ $$
232
+ f \left(E V _ {i}\right) = D _ {i} ^ {2} + I _ {i} ^ {2} \tag {7}
233
+ $$
234
+
235
+ As $D_{i}$ and $I_{i}$ are the two components of the evaluation vector, $D_{i}^{2} + I_{i}^{2}$ is the square of the vector's length. The lower the value of $f(EV_{i})$, the better the system.
236
+
237
+ # 4.3 Result
238
+
239
+ # Data Collection
240
+
241
+ We obtain the discarded debris data of the USA from the Statistical Abstract of the United States, so $D_{i}$ can be determined. The impacts data we use are the numbers of Hawaiian monk seal entanglements, comprising both the actual and the predictive numbers of entanglements.
242
+
243
+ Through the following steps we can obtain the evaluation vector (EV) and the comparison function $f(EV_{i})$ :
244
+
245
+ - All data are normalized with formula (5);
246
+ - $D_{i}$ and $I_{i}$ are calculated according to formulas (1), (2), (3) and (4);
247
+ - The evaluation vector $EV_{i}$ is obtained with formula (6);
248
+ - $f(EV_{i})$ is calculated with the comparison function (7).
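The steps above can be sketched end-to-end. The yearly $(D_i, I_i)$ pairs below are illustrative placeholders, not values from the paper; the point is the mechanics of forming $EV_i$ and ranking years by $f(EV_i)$:

```python
# Form the evaluation vector EV_i = (D_i, I_i) per year and compare years
# with f(EV_i) = D_i^2 + I_i^2 (larger f = worse Ocean System).
years = [1996, 1997, 1998]
D = [0.30, 0.45, 0.60]     # normalized Debris per year (illustrative)
Imp = [0.25, 0.40, 0.55]   # normalized Impacts per year (illustrative)

f = [d * d + i * i for d, i in zip(D, Imp)]
for year, d, i, fv in zip(years, D, Imp, f):
    print(year, (d, i), round(fv, 4))

# The worst year is the one with the largest f(EV_i)
worst = years[max(range(len(f)), key=lambda k: f[k])]
print("worst year:", worst)
```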
249
+
250
+ The results are shown in Figs. 11-13.
251
+
252
+ ![](images/58f7a8af4164aa646d3f7f59c3fe12b40a19d59ca907a5e3d3bc50ad2fa6cc31.jpg)
253
+ Fig.11 Result of $EV_{i}$
254
+
255
+ ![](images/787e4960d0fb1b9df9f8c31cb799f99082344f66d9d134fc757e5154b16a6e87.jpg)
256
+ Fig.12 Result of $f(EV_{i})$ Based on Predictive Data
257
+
258
+ ![](images/e01da4b786b5a62d35a86a9e3002e278d4eadea90243953dfec7f570404de892.jpg)
259
+ Fig.13 Result of $f(EV_{i})$ Based on Actual Data
260
+
261
+ From Figs. 11-13, we can draw some important conclusions:
262
+
263
+ - The amount of discarded debris increases year by year; the predictive impacts are also increasing, while the actual impacts increase before 1999 and decrease after 2000.
264
+ - The increasing trend of $f(EV_{i})$ is obvious based on the predictive data; there is also an increasing trend based on the actual data, but it is less obvious.
267
+
268
+ - The value of $f(EV_{i})$ is becoming larger and larger, which means the situation of the Ocean System is becoming worse and worse.
269
+
270
+ # 4.4 Result analysis
271
+
272
+ The above conclusions indicate that the Ocean System is becoming worse and worse. Since the Ocean System is evaluated by two metrics, Debris and Impacts, improving its situation requires taking both metrics into consideration.
273
+
274
+ From Fig.13, we find that increasing Debris leads to a worse Ocean System; as a result, decreasing the amount of Debris is essential to improving it.
275
+
276
+ Analyzing Fig.12 and Fig.13, we find that the actual Ocean System has been in a better situation than the predicted one since 2000. The reason is the same as that analyzed in section 3.3: a derelict fishing gear recovery program has been carried out in Hawaii.
277
+
278
+ # 4.5 Feedback Ocean System
279
+
280
+ According to the above analysis, we conclude that the ways to improve the situation of the Ocean System are decreasing the amount of Debris and carrying out artificial recovery programs.
281
+
282
+ For further study, we regard the Ocean System as a feedback system, shown in Fig.14. As long as the amount of Debris decreases or Recovery increases, the Ocean System will improve.
283
+
284
+ ![](images/34c7daf5b4bd5f36fbb4dca9316035d9f74d74c449306454c74860c9af6d9fb0.jpg)
285
+ Fig.14 Ocean as a Feedback System
286
+
287
+ # Strength and Weakness
288
+
289
+ - The combination of the grey model GM (1, 1) and the time series analysis method generates a good prediction.
290
+ - The AHP method combines qualitative and quantitative analysis well and gives the weights conveniently, but it involves a degree of subjectivity.
291
+ - The evaluation vector and comparison function conveniently provide a quantitative evaluation of the situation of the Ocean System.
292
+
293
+ # 5. A Research Report to Our Expedition Leader
294
+
295
+ # 5.1 Our Findings
296
+
297
+ # Impacts on Ocean Ecosystem
298
+
299
+ In order to study the impacts of marine debris on the ocean ecosystem, we model and analyze the change in the number of entangled Hawaiian monk seals. The results indicate that, in the short term, the Hawaiian monk seal population will decline rapidly in the next decades if no artificial protection programs are carried out. In the long term, the decline of the Hawaiian monk seal will affect the food chain of the ecosystem, which may finally lead to ecosystem disorder.
302
+
303
+ # Change in Ocean System in Recent Years
304
+
305
+ We build an Ocean System Evaluation Model to analyze the change in the Ocean System. We find that the Ocean System is becoming worse and worse, but by decreasing the amount of Debris or increasing Recovery, we can see light at the end of the tunnel and have confidence that the Ocean System can be rescued.
306
+
307
+ # 5.2 Proposals for Solutions
308
+
309
+ According to our findings, feasible solutions to improve the Ocean System should focus on two aspects: decreasing the amount of Debris and increasing Recovery. Some suggested solutions are:
310
+
311
+ - Cleaning up coastal debris;
312
+ - Reducing the generation and amount of debris that enters streams and rivers;
313
+ - Recycling debris from the ocean.
314
+
315
+ # 5.3 Government Policies and Practices
316
+
317
+ The solutions we provide in section 5.2 are feasible but not binding. In order to solve the marine debris problem thoroughly, governments have much work to do:
318
+
319
+ - Establishing incentives for people and fishing vessels that recycle marine debris;
320
+ - Educating or funding industries to set up solid waste recovery systems;
321
+ - When necessary, legislating to ensure the relevant policies are implemented successfully.
322
+
323
+ # Reference
324
+
325
+ [1] Peter G. R, Charles J. Moore, Jan A. van Franceker & Coleen L. Moloney (2009).
326
+ [2] Sheavly S.B (2005). Sixth Meeting of the UN Open-ended Informal Consultative Processes on Oceans & the Law of the Sea. Marine debris – an overview of a critical issue for our oceans. June 6-10.
327
+ http://www.un.org/Depts/los/consultative_process/consultative_process.htm
328
+ [3] Masahisa Kubota, Katsumi Takayama, Daisuke Namimoto (2005). Pleading for the use of biodegradable polymers in favor of marine environments and to avoid an asbestos-like problem. Appl Microbiol Biotechnol 67: 469-476.
329
+ [4] Sheavly S. B, Register K. M (2007). Marine Debris & Plastics: Environmental Concerns, Sources, Impacts and Solutions. J Polym Environ 15: 301-305.
330
+ [5] NOAA, Marine Debris, http://marinedebris.noaa.gov/info/impacts.html
331
+ [6] NOAA, National Marine Fisheries Service (2007). Recovery Plan for the Hawaiian Monk Seal.
332
+ http://www.fpir.noaa.gov/Library/PRD/Hawaiian%20monk%20seal/SHI%20MS%20
333
+ Recovery%20Plan%20FINAL%20August%202007%20pdf.pdf
MCM/2010/C/8088/8088.md ADDED
@@ -0,0 +1,244 @@
1
+ # Abstract
2
+
3
+ Massive amounts of plastic waste have been accumulating in the Great Pacific Garbage Gyre and are posing a threat to the marine environment. Since little is known about the degradation of plastics in this marine setting, we adopted the goal of modeling the photodegradation of a common plastic, polyethylene, in seawater. Plastic in the ocean is exposed to UV light from the sun, which causes photodegradation, a natural source of plastic decomposition. We developed two models to describe the rate of photodegradation of polyethylene floating in seawater: the Low Transmittance of Light Model (LTM) and the High Transmittance of Light Model (HTM). Using a constant rate of UV irradiance and the average bond dissociation energy of carbon-carbon single bonds (C-C), we calculate the mass lost per unit of time by triple integrating the rate of mass loss over area and time. The outputs of our models were realistic and describe the change of mass over time. The HTM predicted that a 1x1x2cm rectangular prism of polyethylene weighing 1.86g will lose 1.26g of mass in one year. The LTM predicts that a hollow sphere with thickness 0.0315cm and radius 5cm, weighing 9.145g and partially submerged in low transmittance water, will lose 0.1897g of mass in the first year. The design of our models allows us to change the limits of integration to model other shapes and adjust the intensity of UV light, and can realistically predict the photodegradation of polyethylene.
4
+
5
+ # Shedding Light on Marine Pollution
6
+
7
+ # Team #8088
8
+
9
+ ICM 2010
10
+
11
+ # Table of Contents
12
+
13
+ I. Introduction
14
+ II. Description of the Problem
15
+ III. Photodegradation of Polyethylene
16
+ IV. General Assumptions
17
+ V. High Transmittance Model
18
+ VI. Low Transmittance Model
19
+ VII. Comparisons and Limitations
20
+ VIII. Discussion of Impacts
21
+ IX. Conclusion
22
+ X. References
23
+ XI. Appendix A
24
+
25
+ # I Introduction
26
+
27
+ The accumulation of plastic debris in our oceans is quickly coming to light as one of the most prevalent and devastating threats to the marine environment. The "Great Pacific Ocean Garbage Patch" is one of many areas of wind current convergence where massive amounts of debris collect and stew. The "garbage" is not primarily found in the form of bottles and bags, but rather as tiny particles referred to as neustonic plastics. These neustonic plastics are the products of degradation of post-consumer and industrial wastes and may pose great risk for marine life. The nature of the degradation of plastics has thus become an important element in the study of this environmental catastrophe. In this study, we focus specifically on the photolytic degradation of plastic accumulating in the gyre. This involves the consideration of ultraviolet radiation reaching the surface of the ocean, the energy required to break the bonds that form the common plastic polyethylene, and physical considerations concerning buoyancy, mass, and surface area of the plastic particles. Specifically, this study models the loss of mass of polyethylene plastics by photolytic degradation per year.
28
+
29
+ # II Description of the Problem
30
+
31
+ In this problem we consider the degradation of floating polyethylene fragments by photolytic degradation. The fragments are considered to be hollow spheres partially filled with seawater, to represent common post-consumer waste containers. The fragments are partially submerged in water with either low or high percent transmittance of light. In high transmittance water the entire effective surface area of the fragment is available for degradation, while for fragments in low transmittance water only the portion of the fragment above water is susceptible to photolytic degradation<sup>16</sup>. Ultraviolet light is assumed to hit the fragment orthogonally to the plane of the ocean, thereby exposing a 2-dimensional effective surface area, from now on referred to as $\pmb{c}$, to the rays. Thus, $\pmb{c}$ represents the portion of the fragment susceptible to photolytic degradation. We relate $\pmb{c}$ to the radius of the fragment, $\pmb{r}$, since the surface area of a sphere varies with radius. The mass of the fragment, $\pmb{M}$, then depends on $\pmb{c}$ and $\pmb{r}$. The goal, then, is to model these relationships with respect to each other and to time in a way that takes some time, $\pmb{t}$, as input and outputs a mass, $\pmb{M}$, describing the loss of mass experienced by a polyethylene fragment.
32
+
33
+ # III Photolytic Degradation of Polyethylene
34
+
35
+ Polyethylene is a polymer consisting of long chains of the monomer ethylene $^{1}$ . There are two types of bonds present in polyethylene: carbon-carbon single bonds (C-C) and carbon-hydrogen single bonds $(\mathrm{C - H})^{4}$ . See Figure 1 for polyethylene structure.
36
+
37
+ Figure 1: Structure of polyethylene where $n =$ number of monomers in the chain
38
+
39
+ $$
40
+ - \left(\mathrm {C H} _ {2} - \mathrm {C H} _ {2}\right) _ {\mathrm {n}} -
41
+ $$
42
+
43
+ Photodegradation is a process by which chemical bonds are broken when struck by light $^{1,8}$ . In order to break an intermolecular bond the light must carry enough energy to cleave the bond. The energy
44
+
45
+ Team # 8088
46
+
47
+ needed to cleave a bond can be estimated using the average bond dissociation energy (B.D.E) $^4$ . Once the energy needed to cleave a bond is known, the equation $E = hc / \lambda$ can be used to find the minimum wavelength, $\lambda$ , of light that carries enough energy to break the bond $^{10}$ . See Figure 2 for calculation of $\lambda$ .
48
+
49
+ Figure 2: Calculation of $\lambda$
50
+
51
+ $$
52
+ \lambda (m) = \frac{6.626 \cdot 10^{-34}\ \frac{kg \cdot m^2}{s} \cdot 3 \cdot 10^{8}\ \frac{m}{s}}{5.78 \cdot 10^{-19}\ \frac{kg \cdot m^2}{s^2}} = 344\ nm
53
+ $$
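The Figure 2 arithmetic can be checked directly from $E = hc/\lambda$, using the constants as stated in the text:

```python
# Longest wavelength whose photons still carry the average C-C bond
# dissociation energy (constants taken from the text above).
h = 6.626e-34       # Planck constant, J*s
c = 3.0e8           # speed of light, m/s
E_bond = 5.78e-19   # average C-C bond dissociation energy, J per bond

lam_m = h * c / E_bond   # E = h*c/lambda  =>  lambda = h*c/E
lam_nm = lam_m * 1e9
print(round(lam_nm))     # 344 (nm), in the UV range
```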
54
+
55
+ When polyethylene is exposed to ultraviolet (UV) light with a $\lambda$ of $344\mathrm{nm}$, C-C single bonds are cleaved and free radicals are formed, which react quickly with $\mathrm{O}_2$ to form peroxy radicals. The peroxy radicals then either continue a chain reaction of radical formation, or two free radicals can react to terminate the chain reaction<sup>1,7,12</sup>. The pathway of free radical chain reactions and termination reactions can be seen in Figure 3.
56
+
57
+ Figure 3: Photo oxidative reaction mechanism<sup>1</sup>.
58
+
59
+ $$
60
+ \mathrm {R} - \mathrm {H} + U V \longrightarrow \mathrm {R} \cdot
61
+ $$
62
+
63
+ $$
64
+ \mathrm {R} \cdot + \mathrm {O} _ {2} \longrightarrow \mathrm {R} - \mathrm {O} - \mathrm {O} \cdot
65
+ $$
66
+
67
+ $$
68
+ \mathrm {R} - \mathrm {O} - \mathrm {O} \cdot + \mathrm {R} - \mathrm {H} \longrightarrow \mathrm {R} - \mathrm {O} - \mathrm {O} - \mathrm {H} + \mathrm {R} \cdot
69
+ $$
70
+
71
+ Photo oxidation termination reactions.
72
+
73
+ $$
74
+ \mathrm {R} \cdot + \mathrm {R} \cdot \longrightarrow \mathrm {R} - \mathrm {R}
75
+ $$
76
+
77
+ $$
78
+ \mathrm {R} - \mathrm {O} - \mathrm {O} \cdot + \mathrm {R} \cdot \longrightarrow \mathrm {R} - \mathrm {O} - \mathrm {O} - \mathrm {R}
79
+ $$
80
+
81
+ An integral part of the model is the cleavage of C-C single bonds to break polyethylene into fragments. This accounts for the loss of mass as the polyethylene degrades over time. The rate of degradation can be estimated by assuming that every time a C-C bond is cleaved by UV light, a monomer is removed from the original mass of polyethylene. The rate at which C-C bonds are cleaved depends on the UV irradiance emitted by the sun, i.e., power per unit area. See Figure 5 for the unit conversion<sup>1,7,12</sup>.
82
+
83
+ Figure 5: Unit conversion used to find the rate of photodegradation in $\mathrm{g / (s\cdot cm^{2})}$
84
+
85
+ $$
86
+ \frac{1\ \text{C-C bond}}{5.778\times 10^{-19}\ \mathrm{J}}\times\frac{0.0005\ \mathrm{J}}{1\ \mathrm{s}\cdot\mathrm{cm}^{2}}\times\frac{1\ \text{mol polyethylene}}{6.022\times 10^{23}\ \text{C-C bonds}}\times\frac{28\ \mathrm{g\ monomer}}{1\ \text{mol polyethylene}} = \frac{4.02\times 10^{-8}\ \mathrm{g\ polyethylene}}{\mathrm{s}\cdot\mathrm{cm}^{2}}
87
+ $$
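The dimensional analysis in Figure 5 is easy to script as a sanity check (ours, not the authors'); the irradiance of 0.0005 J/(s·cm²) is the value assumed in the figure:

```python
# Degradation-rate constant K1 from the dimensional analysis in Figure 5.
E_per_bond = 5.778e-19  # J needed to cleave one C-C bond
irradiance = 0.0005     # assumed UV irradiance, J per second per cm^2
N_A = 6.022e23          # C-C bonds (monomers) per mole
molar_mass = 28.0       # g of ethylene monomer per mole

K1 = irradiance / E_per_bond / N_A * molar_mass  # g/(s*cm^2)
print(K1)  # about 4.02e-8 g of polyethylene per second per cm^2
```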
88
+
89
+ # IV General Assumptions
90
+
91
+ For the sake of simplicity we make the following assumptions:
92
+
93
+ - Mechanical degradation due to torque on the plastic is minimal due to the small size of the plastic particles $^{11}$ , and collisions between plastic particles are rare due to low particle density $^{5}$ ; therefore, our model neglects mechanical degradation.
94
+ - Polyethylene particles will float in seawater, since polyethylene's density is $0.93\mathrm{g / ml}$ and the average density of 35 ppt saline seawater at $15^{\circ}\mathrm{C}$ is about $1.0255\mathrm{g / ml}^{9}$ . Water currents are neglected.
95
+ - The source of UV light is a constant average at sea level in the Pacific Northwest $^{3}$ .
96
+ - Polyethylene in the model does not contain UV stabilizers and is of medium density.
97
+ - Polyethylene is composed of ethylene monomers and the average bond dissociation energy for C-C single bonds is used to predict energy needed to cleave the C-C bonds $^{4}$ .
98
+ - Only the portions of the plastic fragments that are perpendicular to the UV light are subject to photolytic degradation.
99
+ - We assume the transmittance of light through the water is high in the High Transmittance Model and very low in the Low Transmittance Model, where only the effective surface area above water can receive UV light.
100
+ - The photolytic cleavage of C-C bonds in the model is assumed to be a fast forward reaction with $\mathrm{K}_{\mathrm{rxn}} \gg 1$ and a very slow reverse reaction. Immediately after a bond is cleaved, the free radical forms and is quenched by any of the termination reactions, which also have $\mathrm{K}_{\mathrm{rxn}} \gg 1$ .
101
+
102
+ # V High Transmittance Model
103
+
104
+ This model considers the degradation of a square prism of polyethylene on a flat surface, either on land or in water with a very high percent transmittance of light. One face of the prism is oriented directly perpendicular to the UV light source. For this High Transmittance Model, we define the rate of degradation from Figure 5 as the constant $\mathrm{K}_{1}$ . This expression can be modeled over time and area as a triple integral:
105
+
106
+ $$
107
+ \int_{0}^{y}\int_{0}^{x}\int_{0}^{t} K_{1}\,dt\,dx\,dy = \int_{0}^{y}\int_{0}^{x}\int_{0}^{t} \frac{4.02\times 10^{-8}\ \mathrm{g\ polyethylene}}{\mathrm{s}\cdot\mathrm{cm}^{2}}\,dt\,dx\,dy
108
+ $$
109
+
110
+ The result is a simple, linear model of degradation based on time and area:
111
+
112
+ $$
113
+ M_{loss}(x,y,t)\ (\mathrm{g}) = K_{1}\ \left(\mathrm{g/(s\cdot cm^{2})}\right)\cdot t\ (\mathrm{s})\cdot x\ (\mathrm{cm})\cdot y\ (\mathrm{cm})
114
+ $$
115
+
116
+ Consider a square prism with the dimensions of $1 \times 1 \times 3$ cm with an initial mass of $1.86 \mathrm{~g}$ . After 365 days of UV exposure, 1.26 grams of polyethylene are lost as a result of photolytic degradation<sup>9</sup>.
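A minimal sketch of the HTM evaluation for this example (our own check, using the paper's constant $K_1$):

```python
# High Transmittance Model: the triple integral evaluates to K1 * t * x * y.
# Example from the text: a 1 x 1 x 3 cm prism whose 1 x 1 cm face is exposed
# to UV for 365 days.
K1 = 4.02e-8         # degradation rate, g/(s*cm^2), from Figure 5
t = 365 * 24 * 3600  # exposure time, s
x, y = 1.0, 1.0      # dimensions of the exposed face, cm

mass_loss = K1 * t * x * y  # grams of polyethylene lost
print(mass_loss)  # roughly 1.27 g; the text rounds down to 1.26 g
```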
117
+
118
+ # VI Low Transmittance Model
119
+
120
+ A primary factor in describing the amount of UV light that a fragment of plastic absorbs is the effective surface area perpendicular to direct sunlight. Since the North Pacific Gyre is a collection of floating debris, we used Archimedes' principle to relate the buoyancy of a piece of plastic to its effective surface area. For simplicity, we neglect the effect on buoyancy of the air enclosed within the hollow plastic. Archimedes' principle states that "the buoyant force on a submerged object is equal to the weight of the fluid that is displaced by that object." In addition, since the plastic is in equilibrium in the vertical direction, its buoyant force must be equal in magnitude to its weight:
121
+
122
+ $$
123
+ F_{buoyant} = W_{plastic} = M_{plastic}\,g = W_{water\ displaced} = M_{water\ displaced}\,g = V_{plastic\ submerged}\,d_{water}\,g
124
+ $$
125
+
126
+ Dividing through by $g$ to define the submerged volume in terms of mass, we obtain:
127
+
128
+ $$
129
+ M_{plastic} = V_{plastic\ submerged}\cdot d_{water}
130
+ $$
131
+
132
+ Where $\mathbf{M}$ is mass, $\mathbf{W}$ is weight, $\mathbf{V}$ is volume, $\mathbf{d}$ is density, and $\mathbf{g}$ is the acceleration due to gravity.
133
+
134
+ We can define the right side of this equation with the triple integral
135
+
136
+ $$
137
+ \int_{x}^{\pi}\int_{0}^{2\pi}\int_{r-h}^{r} d\,\rho^{2}\sin\phi \;d\rho\,d\theta\,d\phi
138
+ $$
139
+
140
+ In the limits of integration, $h$ is the thickness of the plastic and $x$ is the angle in radians from the zenith to the point where the sphere's surface meets the water level. Setting this integral equal to the total mass of the plastic, one can solve for $x$ . Trigonometry then relates the radius of the sphere, $r$ , and the angle, $x$ , to the effective surface area of the sphere exposed perpendicularly to UVB rays (c). This is depicted in Diagram 1 below.
141
+
142
+ ![](images/2863fe4bff4a724b1d6696d5c83db416065b83ab0975dc3a2f53c2efd6ecb471.jpg)
143
+ Diagram 1: Finding the Effective Solar Radius
144
+
145
146
+
147
+ As you can see from Diagram 1, the effective solar radius is
148
+
149
+ $$
150
+ r _ {e} = r \cdot \cos \left(\frac {\pi}{2} - x\right)
151
+ $$
152
+
153
+ To consolidate the necessary input arguments, we have defined $\pmb{r}$ in terms of $\pmb{m}$ by subtracting the volumes of two concentric spheres with $\Delta r = h$ , then multiplying by the density of the plastic, $l$ .
154
+
155
+ $$
156
+ \left(\left(\frac {4}{3} \pi * r ^ {3}\right) - \left(\frac {4}{3} \pi (r - h) ^ {3}\right)\right) l = M
157
+ $$
158
+
159
+ The equation above can easily be rearranged to solve for $r$ :
160
+
161
+ $$
162
+ r = \frac {\left(4 \pi h ^ {2} \pm \sqrt {1 6 \pi^ {2} h ^ {4} - 1 6 \pi h \left(\frac {4}{3} \pi h ^ {3} - \frac {m}{l}\right)}\right)}{8 \pi h}
163
+ $$
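As a numerical check (ours, not part of the original paper), the closed form above recovers $r = 5$ cm for the worked example used later ($m = 9.145$ g, $h = 0.0315$ cm, $l = 0.93$ g/cm³):

```python
import math

# Outer radius r of a hollow sphere recovered from its mass via the
# quadratic formula above, then verified against the shell-volume equation.
m, h, l = 9.145, 0.0315, 0.93  # mass (g), wall thickness (cm), density (g/cm^3)

disc = 16 * math.pi**2 * h**4 - 16 * math.pi * h * (4 / 3 * math.pi * h**3 - m / l)
r = (4 * math.pi * h**2 + math.sqrt(disc)) / (8 * math.pi * h)

# Check: shell volume times density should give back the mass.
m_check = (4 / 3 * math.pi * (r**3 - (r - h)**3)) * l
print(round(r, 3), round(m_check, 3))  # about 5.0 cm and 9.145 g
```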
164
+
165
+ We created a constant $K$ based on our fundamental relationship between the mass of plastic in grams and the total bond energy within that mass in joules. $K$ has units of $\frac{g}{J}$ and is easily calculated using the molar mass and average bond energy of polyethylene, as shown in this dimensional analysis:
166
+
167
+ $$
168
+ 1\,\mathrm{J}\cdot \frac{1\ \mathrm{mol}}{348{,}000\ \mathrm{J}}\cdot\frac{28\ \mathrm{g}}{1\ \mathrm{mol}} = K = 8.046\times 10^{-5}\ \frac{\mathrm{g}}{\mathrm{J}}
169
+ $$
170
+
171
+ Ultraviolet light is the source of energy in this model. We have defined it as a variable, $U$ , substituting in a value only when necessary to show example outputs from the model. A true experimental value for $U$ would need to be measured on site and calibrated to represent the average joules of UV light that reach the Pacific Gyre per year. $U$ is the amount of energy available to impact the system in J/(cm $^2$ ·yr). Multiplying by the constant $K$ converts that energy into the mass in grams cleaved off by breaking the corresponding number of C-C single bonds; the term $UK$ therefore has units of g/(cm $^2$ ·yr). To find the total change in grams over a specific time and area, we integrate the term with respect to time from zero to $t$ , and then over area as a double integral in polar coordinates.
172
+
173
+ $$
174
+ \int_{0}^{t}\int_{0}^{2\pi}\int_{0}^{R_{e}} UK\,r\,dr\,d\theta\,dt = \Delta m = UK\,R_{e}^{2}\,\pi\,t
175
+ $$
176
+
177
+ Furthermore, if we subtract this change in mass from the initial mass, the result is the final mass of plastic for a given initial mass subjected to ultraviolet light of intensity $U$ over a time $t$ .
178
+
179
+ $$
180
+ \mathrm{Final\ Mass} = \mathrm{Initial\ Mass} - \left(U \cdot K \cdot \mathrm{Effective\ Radius}^{2}\cdot \pi \cdot t\right)
181
+ $$
182
+
183
+ Where the units are: $\mathrm{g} = \mathrm{g} - \left( \frac{\mathrm{J}}{\mathrm{cm}^2 \cdot \mathrm{year}} \cdot \frac{\mathrm{g}}{\mathrm{J}} \cdot \mathrm{cm}^2 \cdot \mathrm{year} \right)$
184
+
185
+ Using the LTM, consider a hollow sphere with thickness $0.0315\,\mathrm{cm}$, radius $5\,\mathrm{cm}$, and initial mass $9.145\,\mathrm{g}$: after one year of UV exposure, the loss in mass is $0.1897\,\mathrm{g}$ of polyethylene. The rate of degradation is visualized in Diagram 2.
186
+
187
+ ![](images/7f6b103c645bf72726362a534c6bb652872214b144e1c0252e976a12d605b58e.jpg)
188
+ Diagram 2: The Relationship between Mass of Sphere and Time
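The full LTM chain can be sketched numerically (a hedged illustration of our own, not the authors' spreadsheet). Instead of solving for the angle $x$, we solve Archimedes' principle for the depth of the submerged spherical cap by bisection and read off the water-line radius; the UV dose $U = 90$ J/(cm²·yr) is the value appearing in the iteration rows of Appendix A, and this geometric shortcut lands in the neighborhood of, though not exactly on, the 0.1897 g figure quoted above:

```python
import math

# Hedged sketch of the LTM for the worked example: a 9.145 g hollow sphere
# of outer radius 5 cm floating in seawater.
m, r = 9.145, 5.0  # shell mass (g) and outer radius (cm)
d_water = 1.0255   # seawater density, g/cm^3
K = 8.046e-5       # g of polyethylene per joule of bond energy
U = 90.0           # assumed UV dose, J/(cm^2*yr), from Appendix A's rows

v_sub = m / d_water  # submerged volume = volume of water displaced, cm^3

# Depth of the submerged spherical cap: solve pi*d^2*(3r - d)/3 = v_sub
# by bisection (our own numerical shortcut; the paper solves for angle x).
lo, hi = 0.0, r
for _ in range(60):
    mid = (lo + hi) / 2
    if math.pi * mid**2 * (3 * r - mid) / 3 < v_sub:
        lo = mid
    else:
        hi = mid
depth = (lo + hi) / 2

Re = math.sqrt(r**2 - (r - depth)**2)    # radius of the water-line circle, cm
delta_m = U * K * math.pi * Re**2 * 1.0  # mass lost in one year, g
print(round(Re, 2), round(delta_m, 3))
```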
189
+
190
+ # VII Comparisons and Limitations
191
+
192
+ Two models were developed to determine the mass lost due to photodegradation from a piece of polyethylene plastic after a certain length of time exposed to UV light. The first model is the High Transmittance Model (HTM), where the fragment is a rectangular prism and is treated as if it is on land or in $100\%$ transmittance water. This model uses a triple integral over time and area, giving an output of grams of original polyethylene lost. The Low Transmittance Model (LTM) describes a fragment submerged in low transmittance water, where only the exposed effective surface area of the fragment can undergo photodegradation. Like the HTM, this model uses a triple integral to find grams of original polyethylene lost. The difference is that the LTM models the exposed effective surface area of a hollow sphere of a particular thickness, partially submerged in water due to buoyancy. Depending on the conditions of the water (density and transmittance) and the shape of the object, the models can be altered to describe photodegradation of polyethylene in many other shapes in high or low transmittance water.
193
+
194
+ Although both models give reasonable outputs, each is applicable only in specific situations. Many assumptions about the reaction conditions had to be made to simplify the mathematics; these assumptions are imperfect and decrease the accuracy of the model. In order to increase the accuracy of our model, a few main points need additional research and refinement:
195
+
196
+ - The value ($U$) used for the irradiance of UV light in the North Pacific Ocean needs to be verified through primary research in order to establish a solid value for UV irradiance.
197
+
198
199
+
200
+ - Mechanical degradation will also have an effect on the degradation and should be included along with the photo degradation function to increase the accuracy of mass lost.
201
+ - Models do not describe the fact that particles tend to converge on a similar size around $3 - 5\mathrm{mm}^{14}$ .
202
+ - Many polyethylene products have UV stabilizers that increase the longevity of the plastic by inhibiting the free radical chain reaction $^{1}$ .
203
+ - Polyethylene can be one of many types varying in density. Our model used medium density polyethylene (MDPE).
204
+ - Plastics are not just on the ocean surface but also at depths up to 100 ft.
205
+ - Polyethylene, although very common, is not the only plastic in the North Pacific Gyre.
206
+
207
+ # VIII Discussion of Impacts
208
+
209
+ Our models effectively describe the rate at which UV light breaks down polyethylene. The process is slow, and there is inconclusive evidence as to whether plastics ever degrade entirely in the gyre. Plastics are thus a prevalent long-term environmental antagonist. Possible ecological effects of the accumulation of massive amounts of plastic in the North Pacific Gyre include the ingestion of plastic particles by marine life; the disturbance of light transmittance below the water's surface, which may affect many organisms' ability to synthesize energy through photosynthesis; and the distribution of hydrophobic pollutants. Our model relates to the ingestion of plastic particles by marine life since it predicts the mass of fragments at a given time, and marine organisms may confuse plastic fragments with their normal food source when the two are similar in size.
210
+
211
+ Contributing to the growing problem of plastic pollution in the ocean is the lack of governmental regulation on pollution by cruise ships. During a one-week trip, a typical cruise ship produces 50 tons of garbage<sup>17</sup>. Regulations are tricky though, because international waters do not have well-defined environmental authority structures, and monitoring is minimal<sup>17</sup>. Stronger regulations and monitoring systems are required to decrease the impact of pollution by cruise ships.
212
+
213
+ Land-based sources contribute $80\%$ of marine debris, $65\%$ of which is post-consumer plastic that was improperly disposed of<sup>18</sup>. This means that the plastics are littered, not merely left unrecycled. Many states have laws against littering, but monitoring efforts need to be improved. Education about the devastating impacts of littering is a must. Education and monitoring programs may be expensive, but the cost would likely be small compared to the potential for environmental protection.
214
+
215
+ # IX Conclusion
216
+
217
+ In this study we proposed two realistic models for the photodegradation of polyethylene. The first model is used for a solid chunk of polyethylene either on land or in water with $100\%$ transmittance of light. The second model is more complex and considers a partially submerged hollow sphere of polyethylene that is degraded only over its effective surface area. Each model has an output in terms of mass in grams. Practical applications of these models include the investigation of the
218
+
219
+ photodegradation of plastic currently in the gyre as well as of future accumulations. This means that our models can accurately describe the degradation of partially degraded or intact plastic products, since the initial physical properties (size, mass, etc.) of the polyethylene can be manipulated in both models. The ease of manipulation and thorough consideration of realistic variables make our models appropriate for the study of the photodegradation of polyethylene.
220
+
221
+ # X References
222
+
223
+ 1. Carey, Francis A., and Richard J. Sundberg. Advanced Organic Chemistry, Part A: Structure and Mechanisms. 5th ed. New York: Springer, 2007. Print.
224
+ 2. "The Chemical Composition of Seawater." Seafriends home page. 2006. Web. 21 Feb. 2010. <http://www.seafriends.org.nz>.
225
+ 3. Karam, Andrew. "What Percentage of Sunlight is UV Light." MadSciNet: The 24-hour exploding laboratory. 27 July 2005. Web. 20 Feb. 2010. <http://www.madsci.org/>.
226
+ 4. Leeming, William. Thermal and Photolytic Degradation of Polypropylene. Thesis. University of Glasgow, 1973. Glasgow: Glasgow Thesis Service, 1973. Print.
227
+ 5. Moore, C. J., G. L. Lattin, and A. F. Zellers. "Density of Plastic Particles found in zooplankton trawls from." Algalita Marine Research Foundation. Print.
228
+ 6. Moore, C. J., S. L. Moore, M. K. Leecaster, and S. B. Weisberg. "A Comparison of Plastic and Plankton in the North Pacific Central Gyre." Marine Pollution Bulletin 42.12 (2001): 1297-1300. Print.
229
+ 7. McNaught, A. D., and A. Wilkinson. IUPAC. Compendium of Chemical Terminology. 2nd ed. Oxford: Blackwell Scientific Publications, 1997. Print.
230
+ 8. Okabe, Hideo. Photochemistry of Small Molecules. New York: Wiley-Interscience Publication, 1978. Print.
231
+ 9. "Polyethylene EMEA." Chevron Phillips Chemical Company. 2009. Web. 20 Feb. 2010. <http://www.cpchem.com/>.
232
+ 10. Skoog, Holler, and Crouch. Principles of Instrumental Analysis. 6th ed. Vol. 1. Thompson Brooks/Cole, 2007. Print.
233
+ 11. Tipler, Paul Allen. Physics for scientists and engineers. New York: W.H. Freeman, 2004. Print.
234
+ 12. Trozzolo, A., and F. Winslow. "A Mechanism for the Oxidative Photodegradation of Polyethylene." Macromolecules (1967): 98-100. Print.
235
+ 13. Wade, L. G. Organic chemistry. 7th ed. Upper Saddle River, N.J: Pearson Prentice Hall, 2010.
236
+ 14. Yamashita, Rei, and Atsushi Tanimura. "Floating plastic in the Kuroshio Current area, western North Pacific Ocean." Marine Pollution Bulletin 54.4 (2007): 485-88. Print.
237
+ 15. Hodanbosi, Carol. "Buoyancy: Archimedes Principle." NASA. Aug. 1996. Web. 22 Feb. 2010. <http://www.grc.nasa.gov/WWW/K-12/WindTunnel/Activities/buoy_Archimedes.html>.
238
+ 16. Ivanhoff, Alexander, Nils Jerlov, and Talbot Waterman. "A Comparative Study of Beam Transmittance and Scattering In the Sea Near Bermuda." Limnology and Oceanography 2.2 (1961): 129-48. Print.
239
+ 17. "Cruise Ship Pollution State Activity Page." State Environmental Resource Center - Welcome. Web. 22 Feb. 2010. <http://www.seronline.org/cruiseShipPollution.html>.
240
+ 18. Algalita Marine Research Foundation - Marine Research, Education and Restoration. Web. 22 Feb. 2010. <http://algalita.org/AlgalitaFAQs.htm>.
241
+
242
+ <table><tr><td>Variable</td><td>m</td><td>h</td><td>x</td><td>t</td><td>r</td><td>c</td><td>k</td><td>u</td><td>d</td><td>l</td><td>re</td><td>dm</td><td>mf</td></tr><tr><td>Units</td><td>grams (g)</td><td>(cm)</td><td>(radians)</td><td>(years)</td><td>(cm)</td><td>(cm2)</td><td>(g/J)</td><td>J/(cm2*years)</td><td>(g/cm3)</td><td>(g/cm3)</td><td>(cm)</td><td>(g)</td><td>(g)</td></tr><tr><td>Description</td><td>Total Initial Mass of Plastic</td><td>Thickness of Plastic</td><td>Angle from vertical to contact with water</td><td>Time</td><td>Outer Radius of Sphere</td><td>Effective Above Water Surface Area</td><td>Unit Conversion, Proportional Constant</td><td>UV light energy</td><td>Density of the Water the Plastic is Buoyed in</td><td>Density of the Plastic</td><td>Effective Radius of The Water-Line of the Sphere</td><td>Change in Mass after time (t)</td><td>Mass after time (t)</td></tr><tr><td>Value for Model</td><td>9.145</td><td>0.0315</td><td>x=acos(3*m/(2*pi*d*(3*r^2-3*r*h^2)) *h)-1</td><td>1</td><td>r=4*pi*h^2+SQRT((4*pi*h^2)^2-4*4*pi*h*(4/3*pi*h^3-m/0.93)) /(2*4*pi*h)</td><td>c=pi*(r*cos(-1)*(acos(3*m/(2*d*pi*h*(3*r^2-3*r*h^2)) -1)+pi(2)))^2</td><td>8.046*10^-5</td><td>1400</td><td>1.0255</td><td>0.93</td><td>re=r*cos (pi/2-x)</td><td>mf=m-(u*k*re^2*pi*t)</td><td>mf=m-(u*k*re^2*pi*t)</td></tr><tr><td rowspan="2">Input</td><td>mi</td><td>h</td><td>t</td><td>r(t-1)</td><td>c</td><td>k</td><td>u</td><td>d</td><td>l</td><td>RE(t-1)</td><td>Change in M</td><td>Mfinal(t)</td><td></td></tr><tr><td>9.145</td><td>0.0315</td><td>x(t-1)</td><td>0</td><td></td><td>0.00008046</td><td>1400</td><td>1.0255</td><td>0.93</td><td></td><td></td><td>9.145</td><td></td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.616098</td><td>1</td><td>4.996599986</td><td>2.6489E-30</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.887309</td><td>0.189653</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.616054</td><td>2</td><td>4.944647618</td><td>5.79493981</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.857112</td><td>0.185706</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.61601</td><td>3</td><td>4.893240429</td><td>5.6747479</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.827231</td><td>0.181842</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615965</td><td>4</td><td>4.842372699</td><td>5.55705691</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.797665</td><td>0.178059</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.61592</td><td>5</td><td>4.792038768</td><td>5.44181471</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.768409</td><td>0.174354</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615875</td><td>6</td><td>4.742233035</td><td>5.32897026</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.739459</td><td>0.170727</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615829</td><td>7</td><td>4.692949958</td><td>5.2184736</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.710814</td><td>0.167175</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615782</td><td>8</td><td>4.644184053</td><td>5.11027579</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.682469</td><td>0.163697</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615735</td><td>9</td><td>4.595929896</td><td>5.00432893</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.654421</td><td>0.160292</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615688</td><td>10</td><td>4.548182115</td><td>4.90058611</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.626668</td><td>0.156958</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.61564</td><td>11</td><td>4.500935398</td><td>4.79900139</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.599206</td><td>0.153693</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615592</td><td>12</td><td>4.454184488</td><td>4.69952981</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.572032</td><td>0.150496</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615543</td><td>13</td><td>4.407924184</td><td>4.60212733</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.545143</td><td>0.147366</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615493</td><td>14</td><td>4.362149337</td><td>4.50675084</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.518536</td><td>0.144301</td></tr><tr><td></td><td></td><td>0.0315</td><td></td><td>0.615443</td><td>15</td><td>4.316854854</td><td>4.41335811</td><td>0.00008046</td><td>90</td><td>1.0255</td><td>0.93</td><td>2.492209</td><td>0.1413</td></tr></table>
243
+
244
+ Appendix A: Excel Spreadsheet of Variables and LTM
MCM/2011/2011MCM&ICM/2011MCM&ICM.md ADDED
The diff for this file is too large to render. See raw diff
 
MCM/2011/A/9159/9159.md ADDED
@@ -0,0 +1,451 @@
1
+ # 2011 Mathematical Contest in Modeling (MCM) Summary Sheet
2
+
3
+ (Attach a copy of this page to each copy of your solution paper.)
4
+
5
+ Type a summary of your results on this page. Do not include
6
+
7
+ the name of your school, advisor, or team members on this page.
8
+
9
+ In this paper, we study the shape of a snowboard course and its influential factors from an energy perspective. The main idea is to measure the "vertical air" by the final mechanical energy. Before calculation, several assumptions are made for simplicity.
10
+
11
+ - The snowboarder is treated as a mass point;
12
+ - The halfpipe is regarded as a perfectly rigid body;
13
+ - Sliding friction, collision and air resistance are introduced step by step, while other sources of energy loss are ignored.
14
+
15
+ With these assumptions, we build our basic model, which is simple but sufficient to offer a fundamental idea and reveal the essential interactions among different factors. We carry out a force analysis, set up the kinematical equations and obtain the general form of the final mechanical energy analytically.
16
+
17
+ After that, two specific types of transition curve, the quarter circle and the quarter ellipse, are introduced to calculate the optimal design. Various factors are systematically analyzed in both models. In the quarter-circle model, we obtain analytical solutions and find the best radius $R^*$ of the circle, while in the quarter-ellipse model, we use numerical simulation to get the optimal semi-major axis $a^*$ and semi-minor axis $b^*$ . Then we further develop the 2D quarter-ellipse model into a more realistic and complex case involving the third dimension of the halfpipe. Through numerical computation, we find the most suitable slope angle of the halfpipe.
18
+
19
+ Next, by introducing a sensitivity coefficient into these models, the sensitivities of different variables are analyzed both qualitatively and quantitatively. This helps us compare the significance of these variables in building the best halfpipe.
20
+
21
+ At last, the tailoring and tradeoffs needed to develop a "practical" course are discussed. To further optimize the shape of the halfpipe, we take various requirements into consideration, including construction difficulty, players' safety and their performance. After these adjustments, our result becomes more practical.
22
+
23
+ In conclusion, through systematic and comprehensive analysis of various factors, we find a reasonable shape of the snowboard course, which fits the real data well.
24
+
25
+ # Higher in the Air: Design of Snowboard Course
26
+
27
+ Team # 9159
28
+
29
+ February 15, 2011
30
+
31
+ # Contents
32
+
33
+ 1 Introduction
34
+ 2 Symbols, Hypothesis and Explanations
35
+
36
+ 2.1 Symbols Used in this Paper
37
+ 2.2 Hypothesis
38
+ 2.3 Some Explanations
39
+
40
+ 3 Basic Model
41
+ 4 Application in 2D Case
42
+
43
+ 4.1 Quarter-Circle Type
44
+ 4.2 Quarter Ellipse Type
45
+
46
+ 4.2.1 Snow Friction Only
47
+ 4.2.2 Snow Friction and Collision
48
+ 4.2.3 Snow Friction, Air Friction and Collision
49
+
50
+ 4.3 Sensitivity
51
+
52
+ 5 Application in 3D Case
53
+ 6 Tailors and Tradeoffs
54
+
55
+ 6.1 Safety Analysis
56
+ 6.2 The Construction
57
+ 6.3 The Twist
58
+
59
+ 7 Strengths and Weaknesses
60
+ 7.1 Strengths
61
+ 7.2 Weaknesses
62
+
63
+ 8 Conclusion
64
+ 8 Conclusion 21
65
+
66
+ # List of Figures
67
+
68
+ 1 Snowboard competition[1] 4
69
+ 2 Cross section of the halfpipe, perpendicular to the ground 5
70
+ 3 $R^{*}$ -w curve 11
71
+ 4 $R^{*}$ - $\mu$ curve 11
72
+ 5 $R^{*}$ -hicurve 11
73
+ 6 $R^{*} - x_{0}$ curve 11
74
+ 7 Left: $E_{t} - a$ curve under different $b$ ; Right: $E_{t} - b$ curve under different $a$ . Only the snow friction is considered 12
75
+ 8 Left: $E_{t} - a$ curve under different $b$ ; Right: $E_{t} - b$ curve under different $a$ . The snow friction and collision are considered 13
76
+ 9 Left: $E_{t}$ -a curve under different $b$ Right: $E_{t}$ -b curve under different a. The snow friction, wind friction and collision are considered 14
77
+ 10 $a^* - x_0$ curve 15
78
+ 11 $a^* - h_i$ curve 15
79
+ 12 $a^* - \alpha$ curve 15
80
+ 13 $a^* - h_0$ curve 16
81
+ 14 $a^* - \mu$ curve 16
82
+ 15 $a^* - E_i$ curve 16
83
+ 16 3D halfpipe 16
84
+ 17 Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different a. The snow friction and collision are considered 17
85
+ 18 Left: $E_{t} - a$ curve under different $b$ ; Right: $E_{t} - b$ curve under different $a$ . The snow friction and collision are considered 18
86
+ 19 Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . The snow friction and collision are considered 18
87
+
88
+ # 1 Introduction
89
+
90
+ Snowboarding is an adventurous and exciting sport that has been contested at the Winter Olympic Games since 1998. To date, the events are held in three specialities: parallel giant slalom, snowboard cross and halfpipe.[2] What interests us here is the halfpipe, in which competitors perform tricks while going from one side of a ditch to the other. For both ornamental and safety
91
+
92
+ ![](images/b3ee84455ce00559b2aab31fdd8834ba3f7ea0d4af52cadaf96dd68e688ba090.jpg)
93
+ Fig 1: Snowboard competition[1]
94
+
95
+ purposes, the shape of the halfpipe is of vital importance.
96
+
97
+ In its most basic form, a halfpipe consists of two concave ramps (or quarterpipes), topped by copings and decks, facing each other across a transition, with an extended flat ground (the flat bottom) added between the quarterpipes. For snowboarding, the plane of the transition is oriented downhill at a slight grade to allow riders to use gravity to develop speed and to facilitate drainage of melt.
98
+
99
+ There are three main tasks in this paper as follows:
100
+
101
+ - Determine the shape of a snowboard course which can generate the highest jump above the edge of the halfpipe.
102
+ - Optimize the shape to satisfy other requirements of the tricks.
103
+ - Determine which properties of the halfpipe should be abandoned in practice.
104
+
105
+ We analyze the problem from the viewpoint of energy translation. The competition between the influx of gravitational potential energy and the loss caused by friction and collision determines the sliding process. To study the energy change in this process, we first consider the 2D case. We build a simple quarter-circle model to study which factors are involved and what role they play. Next, the sliding process is further simulated by introducing an elliptic-type track. The energy loss is calculated numerically to obtain an ideal shape for the 2D track.
106
+
107
+ Then we take the third dimension into consideration. We work out the slope angle by balancing the energy gained and lost in the new 3D model, which is developed from the former 2D case.
108
+
109
+ At last, to optimize the shape of the halfpipe, we take various requirements into consideration, including construction difficulty, players' safety and their performance.
110
+
111
+ # 2 Symbols, Hypothesis and Explanations
112
+
113
+ # 2.1 Symbols Used in this Paper
114
+
115
+ We list the quantities we use as follows. Some are geometric parameters of the halfpipe, some are physical quantities in the sliding process. Symbols and corresponding parameters are shown in Tab. 1 and Fig. 2.
116
+
117
+ ![](images/664ac69731def99daaf579b27eac9483db9805daeb881de8852a975acba68838.jpg)
118
+ Fig 2: Cross section of the halfpipe, perpendicular to the ground
119
+
120
+ # 2.2 Hypothesis
121
+
122
+ - We treat the snowboarder as a mass point with mass $m$ , ignoring body twists of snowboarder and other geometrical properties.
123
+ - The mass point moves right on the snowboard course rather than traveling along the trajectory of the original gravity center of a snowboarder above the snow surface.
124
+
125
+ <table><tr><td>Symbol</td><td>Quantity</td></tr><tr><td>m</td><td>Mass of snowboarder and the snowboard</td></tr><tr><td>μ</td><td>Friction coefficient between snow and the board</td></tr><tr><td>N</td><td>Pressure on the snow surface caused by snowboarder</td></tr><tr><td>E</td><td>Total mechanical energy of the snowboarder</td></tr><tr><td>h0</td><td>The height of the halfpipe</td></tr><tr><td>x0</td><td>Horizontal distance between the landing point and the coping</td></tr><tr><td>hi &amp; Ei</td><td>Initial height &amp; energy of the snowboarder</td></tr><tr><td>W &amp; w</td><td>Width and half width of the pipe</td></tr><tr><td>L</td><td>Length of the half pipe</td></tr></table>
126
+
127
+ Tab 1: Symbols used in this paper
128
+
129
+ - Only collision and sliding friction account for the mechanical energy loss, ignoring other possible causes such as air friction.
130
+ - The halfpipe is treated as a perfectly rigid body, so it can be modeled simply by curves.
131
+ - The player falls freely from height $h_i$ with zero initial velocity.
132
+ - We ignore the collision time and treat the collision between snowboarder and halfpipe as a perfectly inelastic collision.
133
+
134
+ # 2.3 Some Explanations
135
+
136
+ All the hypotheses above are designed to simplify the calculation. Qualitative analysis can be made based on these assumptions. Because some of the hypotheses may seem a little confusing at first glance, we give some explanations here.
137
+
138
+ - The mass point is mainly used to simplify calculation. Since our aim is first to determine the shape of a snowboard course that maximizes the production of vertical air, all geometrical properties such as body size, shape, and body twists of snowboarders can be regarded as irrelevant, or at least not decisive. In fact, since the body of a snowboarder is not rigid, deformation leads to an
139
+
140
+ increase in computational complexity and uncertainty in the center-of-gravity path, which obscures the essential aspects of this problem. The mass-point assumption enables us to focus on the essential interaction between course shape and maximum vertical air.
141
+
142
+ - In this problem, sliding friction, inelastic collision, and air friction are the three main factors responsible for the energy loss. In the most basic model, we take only sliding friction and collision into consideration for simplicity. In later sections, we introduce air friction to obtain more practical results.
143
+ - According to the halfpipe standard released by the International Ski Federation, to build a halfpipe one should repeatedly drive over the decks to compact the snow before the pipe is cut, and wet new snow is preferable for construction since it bonds together well. The resulting hard snow makes it reasonable to assume that the halfpipe is a perfectly rigid body.
144
+
145
+ # 3 Basic Model
146
+
147
+ We use curve $\phi(x), 0 \leq x \leq W$ to denote the halfpipe. According to our assumption, the energy decay after landing is due to the friction between the snowboard and the snow surfaces. Therefore, if we assume $\mu$ is a constant, then
148
+
149
+ $$
150
+ \mathrm {d} E = - \mu N \mathrm {d} s = - \mu N \sqrt {1 + (\phi^ {\prime}) ^ {2}} \mathrm {d} x. \tag {1}
151
+ $$
152
+
153
+ $N$ consists of two parts: one balances the component of gravity perpendicular to the surface, the other provides the centripetal force. For the force analysis, we define $\theta$ as the angle between the velocity of the snowboarder and the ground, and $\rho$ as the radius of curvature. Then the kinematical equation in the perpendicular direction is written as
154
+
155
+ $$
156
+ N = m g \cos \theta + \frac {m v ^ {2}}{\rho}, \tag {2}
157
+ $$
158
+
159
+ and $\rho$ can be calculated as
160
+
161
+ $$
162
+ \frac {1}{\rho} = \frac {\phi^ {\prime \prime}}{\left(1 + \left(\phi^ {\prime}\right) ^ {2}\right) ^ {\frac {3}{2}}}. \tag {3}
163
+ $$
164
+
165
+ According to the law of conservation of energy, the reduction in mechanical energy equals the energy loss caused by friction, where the mechanical energy is
166
+
167
+ $$
168
+ E (x) = \frac {1}{2} m v ^ {2} + m g \phi (x), \tag {4}
169
+ $$
170
+
171
+ where we take the ground surface as the zero point of gravitational potential energy. Substituting Eqs. (2), (3), and (4) into Eq. (1), we obtain an ordinary differential equation for $E$,
172
+
173
+ $$
174
+ \frac {\mathrm {d}}{\mathrm {d} x} E + \frac {2 \mu \phi^ {\prime \prime}}{1 + (\phi^ {\prime}) ^ {2}} E = - \mu m g [ 1 - \frac {2 \phi \phi^ {\prime \prime}}{1 + (\phi^ {\prime}) ^ {2}} ]. \tag {5}
175
+ $$
176
+
177
+ To get the initial condition for Eq. (5), note that the collision happens at $(x_0, \phi(x_0))$, where the collision angle is $\frac{\pi}{2} + \theta_0$. The collision is assumed perfectly inelastic, that is to say, the slider loses the velocity component perpendicular to the snow surface. Then we have
178
+
179
+ $$
180
+ v _ {0} = \sqrt {2 g (h - y _ {0})} \sin \theta _ {0}, \tag {6}
181
+ $$
182
+
183
+ and then the initial mechanical energy
184
+
185
+ $$
186
+ E _ {0} [ \phi ] = m g y _ {0} + m g (h - y _ {0}) \frac {\phi^ {\prime} (x _ {0}) ^ {2}}{1 + \phi^ {\prime} (x _ {0}) ^ {2}}. \tag {7}
187
+ $$
188
+
189
+ Solution to Eq. (5) and (7) is written as
190
+
191
+ $$
192
+ E [ \phi , x ] = e ^ {- 2 \mu \int_ {x _ {0}} ^ {x} F [ \phi (s) ] \mathrm {d} s} \left[ - \mu m g \int_ {x _ {0}} ^ {x} (1 - 2 \phi (t) F [ \phi (t) ]) e ^ {2 \mu \int_ {x _ {0}} ^ {t} F [ \phi (s) ] \mathrm {d} s} \mathrm {d} t + E _ {0} [ \phi ] \right], \tag {8}
193
+ $$
194
+
195
+ where functional $F$ is defined as
196
+
197
+ $$
198
+ F [ \phi (x) ] = \frac {\phi^ {\prime \prime} (x)}{1 + \phi^ {\prime} (x) ^ {2}}. \tag {9}
199
+ $$
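Eq. (8) follows from Eq. (5) by the standard integrating-factor method: multiplying Eq. (5) by $e^{2\mu \int_{x_0}^{x} F[\phi(s)] \mathrm{d}s}$ turns its left-hand side into a total derivative,

$$
\frac{\mathrm{d}}{\mathrm{d}x} \left( E \, e^{2\mu \int_{x_0}^{x} F[\phi(s)] \, \mathrm{d}s} \right) = - \mu m g \left( 1 - 2 \phi F[\phi] \right) e^{2\mu \int_{x_0}^{x} F[\phi(s)] \, \mathrm{d}s},
$$

and integrating from $x_0$ to $x$ with $E(x_0) = E_0[\phi]$ recovers Eq. (8).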
200
+
201
+ One part of Eq. (8) can be calculated analytically,
202
+
203
+ $$
204
+ \int_ {x _ {0}} ^ {t} \frac {\phi^ {\prime \prime}}{1 + (\phi^ {\prime}) ^ {2}} \mathrm {d} s = \arctan \left. \phi^ {\prime} \right| _ {t} - \arctan \left. \phi^ {\prime} \right| _ {x _ {0}} = \theta (t) - \theta \left(x _ {0}\right). \tag {10}
205
+ $$
206
+
207
+ Then $E[\phi, W]$ , the mechanical energy at the other end becomes
208
+
209
+ $$
210
+ E [ \phi , W ] = - \mu m g e ^ {- 2 \mu \theta (W)} \int_ {x _ {0}} ^ {W} (1 - 2 \phi (t) F [ \phi (t) ]) e ^ {2 \mu \theta (t)} d t + e ^ {- 2 \mu [ \theta (W) - \theta (x _ {0}) ]} E _ {0} [ \phi ]. \tag {11}
211
+ $$
212
+
213
+ Larger $E[\phi, W]$ means more energy before the next jump, which is what we are pursuing.
214
+
215
+ Eq. (11) provides a fundamental way to analyze the problem. However, $E[\phi, W]$ is a functional of $\phi(x)$, containing its zeroth, first, and second derivatives and their integrals, so a variational method can hardly be applied to it directly. In order to avoid a variational problem in function space, we restrict the form of $\phi(x)$ and reduce the problem to a lower-dimensional space.
216
+
217
+ # 4 Application in 2D Case
218
+
219
+ # 4.1 Quarter-Circle Type
220
+
221
+ Just as a series expansion keeps only its leading terms, we can use a circle to approximate a curve to first order. Thus we first assume the trail consists of a vertical wall, a horizontal path, and a quarter circle. This is simple but provides much information about the interactions among the relevant factors. In this case, no energy is added to the snowboarder, so the player may not reach the other edge of the course. We therefore only study the process on $(x_0, w)$, where $w$ is the horizontal coordinate of the midpoint of the path. Our goal here is to analyze how the related factors, $\mu, h_i, x_0$ and $w$, influence the energy loss.
222
+
223
+ Notice that on the quarter circle, we have
224
+
225
+ $$
226
+ F [ \phi (x) ] = \frac {\phi^ {\prime \prime} (x)}{1 + \phi^ {\prime} (x) ^ {2}} = \frac {1}{R - y}, \tag {12}
227
+ $$
228
+
229
+ where $R$ is the radius. Then according to Eq.(11), we have
230
+
231
+ $$
232
+ \begin{array}{rl} E (w) = & e ^ {- 2 \mu \theta (x _ {0})} E (x _ {0}) - \mu m g (w - R) - m g R \left(1 - e ^ {- 2 \mu \theta (x _ {0})}\right) \\ & - \dfrac {3 \mu m g}{1 + 4 \mu^ {2}} \left[ 2 \mu R - \left(x _ {0} - R + 2 \mu (R - y _ {0})\right) e ^ {- 2 \mu \theta (x _ {0})} \right], \end{array} \tag {13}
233
+ $$
234
+
235
+ where
236
+
237
+ $$
238
+ \theta (x) = - \arcsin \left(1 - \frac {x}{R}\right). \tag {14}
239
+ $$
240
+
241
+ Eq. (13) is a useful result for analyzing the effects of $w$, $x_0$, $h_i$, and $\mu$. We calculate the energy $E(w)$ for different radii to search for the best $R$, which we denote by $R^*$. For most parameters, $E(R)$ has a maximum inside the interval $[x_0, W]$, which is chosen as $R^*$. What we are interested in here is the relationship between
242
+
243
+ $R^{*}$ and $(w, h_{i}, x_{0}, \mu)$. According to the halfpipe standard set by the International Ski Federation [3], $w$ is around $8.5\mathrm{m} \sim 9\mathrm{m}$ for an 18 FT pipe and $9.5\mathrm{m} \sim 10\mathrm{m}$ for a 22 FT pipe. $x_{0}$ and $h_{i}$ are related to the trick and the slider himself. $\mu$ depends on the quality of the snow, the temperature, and the pressure between the board and the snow; usually this value is between 0.08 and 0.11 [4]. Based on Eq. (13), we analyze the influence of the half width $w$, friction coefficient $\mu$, initial height $h_{i}$, and landing position $x_{0}$, respectively.
244
+
245
+ - Half width of the pipe $w$. According to Eq. (13), $w$ does not appear in $\partial E / \partial R$. Therefore, we expect $R^*$ to have nothing to do with $w$. For $h_i = 9.0\mathrm{m}$, $\mu = 0.1$ and $x_0 = 0.5\mathrm{m}$, the result is in Fig. 3: $R^*$ is a constant equal to $3.83\mathrm{m}$.
246
+ - Friction coefficient $\mu$. We calculate $R^{*}$ numerically to see what role $\mu$ plays. The result is in Fig. 4. Our results indicate that for a smaller $\mu$, a larger $R^{*}$ is needed to reduce the energy loss.
247
+ - Initial height $h_i$. The initial height $h_i$ is a parameter we cannot fix beforehand: a good halfpipe should satisfy all kinds of needs. Therefore, the $h_i$-$R^*$ relationship is important in determining the shape. Our result in Fig. 5 shows that $h_i$ and $R^*$ have a roughly linear relationship with gradient 0.16.
248
+ - Landing position $x_0$. The landing point $x_0$ determines the collision energy loss: a lower collision point and a moderate slope mean larger collision energy loss. A larger radius provides a higher collision point but suffers from a steeper collision surface, so $R^*$ is a balance between these two factors. According to our numerical results in Fig. 6, as $x_0$ decreases, $R^*$ decreases ever faster; as $x_0$ tends to 0, $R^*$ also tends to zero. On the other hand, when $x_0$ is big enough, $R^*$ hits the boundary $h_0$, which means there is no vertical section.
249
+
250
+ To sum up, a quarter-circle path gives direct access to the problem. If we assume the energy loss is due to friction and collision, and regard the slider as a mass point, then the total energy loss can be calculated analytically, providing much information about the interactions among the various influential factors.
251
+
252
+ ![](images/ca5bdc0fbd589f4124dd709b0da3147e42a47b334c70043b4ffc63bf9ac76764.jpg)
253
+
254
+ ![](images/7376a679b137f0b972d756f438c5f982688e12c4ae7cade81cf104148962e3c4.jpg)
255
+
256
+ ![](images/7b0dab4b89484cdb77487affbfff993c5461aea78330a75e5e1bbcebd93a5269.jpg)
257
+ Fig 3: $R^{*}$ -w curve
258
+ Fig 5: $R^{*} - h_{i}$ curve
259
+
260
+ ![](images/dad44d2806051d70702a50a5163ebc67e66e45e18b80bf93d81c85d29715209c.jpg)
261
+ Fig 4: $R^{*} - \mu$ curve
262
+ Fig 6: $R^{*}$ - $x_{0}$ curve
263
+
264
+ Within the quarter-circle model, we claim that for $h_i = 9.0\mathrm{m}$, $x_0 = 0.5\mathrm{m}$, $\mu = 0.1$ and $w = 9.0\mathrm{m}$, the best radius is $3.831\mathrm{m}$.
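The search for $R^*$ can also be carried out by integrating the friction-loss ODE numerically along the quarter-circle profile instead of using the closed form of Eq. (13). The sketch below is our own simplified reconstruction (the circle is taken tangent to the vertical wall and the flat bottom, and only the tangential velocity component survives the landing, as in Eq. (7)); the parameter values follow the text.

```python
import math

def energy_at_w(R, w=9.0, h=9.0, x0=0.5, mu=0.1, m=75.0, g=9.8, n=5000):
    """E(w) for a quarter-circle transition of radius R, integrating
    dE = -mu*N ds with N = m g cos(theta) + m v^2 / R (Eqs. 1-2)."""
    beta0 = math.acos(1.0 - x0 / R)             # landing angle parameter
    y0 = R * (1.0 - math.sin(beta0))            # landing height
    v0 = math.sqrt(2.0 * g * (h - y0)) * math.cos(beta0)  # tangential part kept
    E = m * g * y0 + 0.5 * m * v0 ** 2          # initial energy, cf. Eq. (7)
    dbeta = (math.pi / 2.0 - beta0) / n
    for i in range(n):                          # along the arc, ds = R dbeta
        beta = beta0 + (i + 0.5) * dbeta
        y = R * (1.0 - math.sin(beta))
        v2 = max(2.0 * (E - m * g * y) / m, 0.0)
        N = m * g * math.sin(beta) + m * v2 / R     # cos(theta) = sin(beta)
        E -= mu * N * R * dbeta
    return E - mu * m * g * (w - R)             # flat run from x = R to x = w

# scan radii to locate the maximizer R*
radii = [2.0 + 0.05 * k for k in range(140)]    # 2.00 m ... 8.95 m
best_R = max(radii, key=energy_at_w)
```

With the text's parameters this scan exhibits an interior maximum; the exact value of $R^*$ depends on the geometric conventions assumed above.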
265
+
266
+ However, a quarter-circle path is too simple for the real halfpipe. In addition, according to our computation, when $x_0$ is large ($> 0.5\mathrm{m}$), $E$ is approximately constant for $R \in [3, h_0]$. This implies that $R$ no longer affects $E$ much, so other factors may be dominant. Therefore, in order to pursue a better shape, we turn to a more complicated curve.
267
+
268
+ # 4.2 Quarter Ellipse Type
269
+
270
+ We now study the case in which the transition curve of the ramp is a quarter section of an ellipse with semi-major axis $a$ and semi-minor axis $b$. For further calculation, here we specify some parameters defined before. According to the halfpipe size standard released by the International Ski Federation [3], we set $w$ equal
271
+
272
+ to $9\mathrm{m}$ and $h_0$ to $5.4\mathrm{m}$. Based on the pressure on the snow and the sliding velocity, we assume $\mu = 0.1$. $h$ is set to $9\mathrm{m}$, which is common for professional snowboarders. We assume $m = 75\mathrm{kg}$, which gives $6.6\mathrm{kJ}$ of initial mechanical energy. It should be pointed out that these assumptions are not essential for the following analysis.
273
+
274
+ We solve Eq. (5) numerically using the forward Euler method [5]; the step size is chosen to be $10^{-4}\mathrm{m}$. $a^*$ and $b^*$, the best values of $a$ and $b$, are obtained using the bisection method.
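A minimal sketch of such a forward-Euler integration of Eq. (5) along a quarter-ellipse transition; the explicit profile parameterization (ellipse centered at $(a, b)$ plus a flat bottom) is our own assumption, and the step size is coarser than the paper's $10^{-4}\,$m for brevity:

```python
import math

def final_energy(a, b, w=9.0, h=9.0, x0=0.5, mu=0.1, m=75.0, g=9.8, dx=1e-3):
    """Forward-Euler integration of Eq. (5) from x0 to w: quarter-ellipse
    transition with semi-axes a (horizontal) and b (vertical), flat bottom."""
    def profile(x):
        if x >= a:                        # flat bottom: y = y' = y'' = 0
            return 0.0, 0.0, 0.0
        u = 1.0 - x / a                   # ellipse centered at (a, b)
        root = math.sqrt(max(1.0 - u * u, 1e-12))
        y = b * (1.0 - root)
        yp = -(b / a) * u / root
        ypp = (b / a ** 2) / root ** 3
        return y, yp, ypp

    y0, yp0, _ = profile(x0)
    E = m * g * y0 + m * g * (h - y0) * yp0 ** 2 / (1.0 + yp0 ** 2)  # Eq. (7)
    x = x0
    while x < w:
        y, yp, ypp = profile(x)
        F = ypp / (1.0 + yp * yp)                                 # Eq. (9)
        E += (-mu * m * g - 2.0 * mu * F * (E - m * g * y)) * dx  # Eq. (5)
        x += dx
    return E
```

Scanning $a$ with $b$ fixed (or bisecting on the sign change of the finite-difference derivative of $E_t$) then yields $a^*$ and $b^*$.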
275
+
276
+ # 4.2.1 Snow Friction Only
277
+
278
+ Landing is a rapid but complex process, involving deformation of the board and snow as well as temperature changes. For simplicity, we first ignore the energy loss caused by collision to see its effect. In this case, only sliding friction accounts for the energy loss. We fix $b$ to find the best $a$, and fix $a$ to find the best $b$, respectively.
279
+
280
+ ![](images/3519d08e52c8ab52350b5c18752de47708db9639f1f08c8e698edc134eab7e8c.jpg)
281
+ Fig 7: Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . Only the snow friction is considered
282
+
283
+ ![](images/bc1327602398f052538f537010749262070a465d7251b4dac6174cc7df497cd9.jpg)
284
+
285
+ Define the final energy at $x = w$ as $E_{t}$, a gauge of the quality of the course. Fig. 7 shows that $E_{t}$ reaches its maximum as $a$ tends to zero with $b$ fixed, or as $b$ tends to zero with $a$ fixed. Therefore, we conclude that collision energy loss is essential in the whole process: ignoring it drives the optimal trail toward a right angle, which is unreasonable.
286
+
287
+ # 4.2.2 Snow Friction and Collision
288
+
289
+ Now we take the collision into consideration. Here we fix the landing point $x_0 = 0.5\mathrm{m}$. Repeating the previous calculation, we get Fig. 8. It shows that for each $b$, $E_t$ has a peak value in the interval $[x_0, w]$, and the peak shifts to the right as $b$ becomes larger. On the other hand, for each $a$ in Fig. 8, $E_t$ increases monotonically as $b$ gets larger. In this case, our conclusion is that the larger $b$ is, the smaller the energy loss will be, whereas the optimal value of $a$ lies inside the interval $[x_0, w]$ and is determined by the related parameters.
290
+
291
+ ![](images/7c2a8bd3f9964196929ab094b22081b7876f643848c8d7362d2b0083334987c7.jpg)
292
+ Fig 8: Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . The snow friction and collision are considered
293
+
294
+ ![](images/9c862bf00a9a568c75b09b6197d46347de7a23ee1dfe21ee979c82a79f439a6a.jpg)
295
+
296
+ In this way, we vary $a$ and compare the peak values obtained over $b$; a best pair $(a^{*}, b^{*})$ is thus obtained.
297
+
298
+ # 4.2.3 Snow Friction, Air Friction and Collision
299
+
300
+ Furthermore, we add air friction to make our model more practical and to analyze the role wind plays. In this case, we assume the air resistance is proportional to the velocity, since the speed is not high. The damping coefficient $\alpha$ is defined as the resistance force applied on unit area at unit velocity, which in general depends on the velocity of the moving object. In the low-speed case, the variation of $\alpha$ with velocity can be ignored, so we assume a constant $\alpha$ in the following calculation.
301
+
302
+ To study the influence of wind friction, we vary $\alpha$ from $0\mathrm{Ns} / \mathrm{m}^3$ to $1.5\mathrm{Ns} / \mathrm{m}^3$ with other parameters fixed, and assume the windward area $A$ is $0.5\mathrm{m}^2$ .
303
+
304
+ In this case, the differential equation of $E$ becomes
305
+
306
+ $$
307
+ \frac {\mathrm {d}}{\mathrm {d} x} E + \frac {2 \mu \phi^ {\prime \prime}}{1 + (\phi^ {\prime}) ^ {2}} E = - \mu m g \left[ 1 - \frac {2 \phi \phi^ {\prime \prime}}{1 + (\phi^ {\prime}) ^ {2}} \right] - \alpha A v \sqrt {1 + (\phi^ {\prime}) ^ {2}}. \tag {15}
308
+ $$
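In a numerical treatment, the drag term of Eq. (15) simply adds one more contribution to $\mathrm{d}E/\mathrm{d}x$, with $v$ recovered from the mechanical energy via Eq. (4). A sketch of the right-hand side (the default parameter values follow the text; passing the profile derivatives as arguments is our assumption):

```python
import math

def dE_dx(E, y, yp, ypp, m=75.0, g=9.8, mu=0.1, alpha=0.75, A=0.5):
    """Right-hand side of Eq. (15): snow friction plus linear air drag."""
    F = ypp / (1.0 + yp * yp)                            # Eq. (9)
    v = math.sqrt(max(2.0 * (E - m * g * y) / m, 0.0))   # speed from Eq. (4)
    return (-mu * m * g
            - 2.0 * mu * F * (E - m * g * y)             # friction, Eq. (5)
            - alpha * A * v * math.sqrt(1.0 + yp * yp))  # drag along ds
```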
309
+
310
+ We calculate the influence of $\alpha$ on $a^*$ and $b^*$. Our result in Fig. 9 shows that $a^*$ shifts slightly to the right as $\alpha$ increases, while $b^*$ stays on the boundary. From now on, for simplicity, we use the average value of $\alpha$ at low speed, which, according to the Wind Scale Table [6], is about $0.75\mathrm{Ns/m^3}$.
311
+
312
+ ![](images/f6b4654dce31de7476ba302dd98f6ae7d6262473268abc43eb072f91fef9f980.jpg)
313
+ Fig 9: Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . The snow friction, wind friction and collision are considered
314
+
315
+ ![](images/6862b26bdb710bb613320167f8d24213c399af4fedf18a5cd9189800e831f3e7.jpg)
316
+
317
+ In conclusion, for given $(\mu, x_0, h_i, \alpha, w)$, we can provide the best values of $a$ and $b$. In the common case, $b^*$ reaches $h_0$ and $a^*$ lies between $x_0$ and $w$. Wind resistance alters the final energy $E_t$, but it does not affect $a^*$ and $b^*$, i.e. the shape of the snow course, very much. For the parameters $(h_0, h_i, x_0, \mu, \alpha) = (5.4, 9.0, 0.5, 0.1, 0.75)$, we claim that the best ellipse parameters are $(a^*, b^*) = (4.19, 5.4)$.
318
+
319
+ # 4.3 Sensitivity
320
+
321
+ In the following section, we analyze the sensitivity of the shape parameters $a^*$ and $b^*$ with respect to the friction coefficient $\mu$, landing position $x_0$, initial height $h_i$, halfpipe height $h_0$, damping coefficient $\alpha$, and initial energy $E_i$, respectively. In order to compare the influence of these factors, we introduce a non-dimensional sensitivity coefficient $s$. The sensitivity coefficient of a quantity $A$ with respect to $B$ is defined as
322
+
323
+ $$
324
+ s = \left. \frac {\partial A}{\partial B} / \frac {A}{B} \right| _ {B = B _ {0}}, \tag {16}
325
+ $$
326
+
327
+ where $B_0$ is a fixed point and $A$ is a function of $B$ .
328
+
329
+ Because $b^{*}$ always equals the halfpipe height, we only estimate the sensitivity of $a^{*}$ with respect to the six factors mentioned above.
330
+
331
+ - $x_0$ . To our surprise, there is a minimum in the $a^*$ - $x_0$ curve at around $x_0 = 0.45\mathrm{m}$ . This is approximately the value of common landing point. Longer or shorter $x_0$ will both lead to larger $a^*$ . The sensitivity coefficient $s$ is 0.552 at $x_0 = 0.5\mathrm{m}$ . The result is shown in Fig. 10
332
+ - $h_i$ . A higher initial height means larger speed before collision, therefore we expect steeper collision angle or higher collision height, which lead to larger $a^*$ . The result is shown in Fig.11. The sensitivity coefficient $s$ is -2.110 at $h_0 = 9.0\mathrm{m}$ .
333
+ - $\alpha$ . As we talked in the last section, $\alpha$ effects little on $a^*$ . The corresponding $s$ is 0.011 at $\alpha = 0.75\mathrm{Ns} / m^3$ . The result is shown in Fig. 12
334
+ - $h_0$ . When $h_0$ changes, so does $b_0$ . Our result in Fig. 13 shows that $a^*$ is sensitive to $h_0$ , and $s = 2.374$ .
335
+ - $\mu$ . Changes of $\mu$ will affect the energy loss significantly as shown in Fig. 14. For larger $\mu$ , $a^*$ needs to increase to reduce the energy loss caused by friction. $s = 0.993$ at $\mu = 0.1$ .
336
+ - $E_{i}$ . In the case where no rotation involves, the effect of $E_{i}$ is equivalent to $h_{i}$ . The result is shown in Fig. 15, and $s = -2.110$ .
337
+
338
+ ![](images/824cc6b5e343a2402a0599b7b0d460fdc6fa62c42e496d992d02336765bd12fe.jpg)
339
+ Fig 10: $a^* - x_0$ curve
340
+
341
+ ![](images/62163652b7ab4abb59e6c20f9144a177af71c0c25eabafa6b3cd147b5d85b4ef.jpg)
342
+ Fig 11: $a^*$ - $h_i$ curve
343
+
344
+ ![](images/a4a11c3d36dc7ec08c82213504b14488a5e725838f87c6bcc530bcf9847edc35.jpg)
345
+ Fig 12: $a^* - \alpha$ curve
346
+
347
+ ![](images/c8c982003c2a990aff2c3d49f82ebd5b1e123daec069b6ed3a1e55a0fe499ec8.jpg)
348
+ Fig 13: $a^* - h_0$ curve
349
+
350
+ ![](images/8f2975432c9e0f52150c70a6ae7751d93f6c7902016634ca183fdfeb8a3bf060.jpg)
351
+ Fig 14: $a^* - \mu$ curve
352
+
353
+ ![](images/54771c6fcd9f7f40ea96b798455c5270100abefd5cc8c888b36d68e85c56dff0.jpg)
354
+ Fig 15: $a^* - E_i$ curve
355
+
356
+ # 5 Application in 3D Case
357
+
358
+ Now we move to a more realistic and complex case involving the third dimension of the halfpipe. In fact, the halfpipe does not lie horizontally but is oriented downhill with a slope angle $\varphi$. In this way, sliding is not a pure energy-loss process: the rider also gains energy from gravity. The third dimension therefore alters the velocity of the snowboarder, which brings changes to $a^*$ and $b^*$.
359
+
360
+ Fig. 16 is our sketch of a 3-dimensional halfpipe. For a standard 18 FT halfpipe, the length $L$ is about 100-150m, and professional snowboarders perform 6-7 tricks in a run. Therefore, in one trick, the player travels about 20m in the downhill direction, a distance we define as $l$.
361
+
362
+ ![](images/2f928b5eee60febbc1100185f65405394ff762bb7389eaafe445107d9e3abb56.jpg)
363
+ Fig 16: 3D halfpipe
364
+
365
+ We use $(x', y')$ to describe the new course. It is the upper edge curve of the trapezoid formed by intersecting a plane with the halfpipe. Under the following coordinate transformation, the problem reduces to a 2D case.
366
+
367
+ $$
368
+ \left\{ \begin{array}{l} x ^ {\prime} = \sqrt {1 + \frac {l ^ {2}}{W ^ {2}}} x, \\ y ^ {\prime} = y - \frac {x}{W} l \sin \varphi . \end{array} \right.
369
+ $$
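As a concrete sketch, the transformation maps a point $(x, y)$ of the cross-section into the stretched and tilted 2D coordinates; here $W$ is the full pipe width and $l$ the downhill travel per traverse, with the default values taken from the text (treating $l$ and $\varphi$ as fixed inputs is our assumption):

```python
import math

def to_2d(x, y, W=18.0, l=20.0, phi_deg=17.1):
    """Map a point on the tilted 3D course into equivalent 2D coordinates."""
    phi = math.radians(phi_deg)
    xp = math.sqrt(1.0 + (l / W) ** 2) * x   # stretch along the traverse
    yp = y - (x / W) * l * math.sin(phi)     # height drop from the tilt
    return xp, yp
```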
370
+
371
+ By applying Eq. (11), we can work out the corresponding $a^*$ and $b^*$ similarly. $a^*$ and $b^*$ are calculated numerically for different slope angles $\varphi$, and the result is in Fig. 17.
372
+
373
+ ![](images/6d06b328116c6c4547d0318af7a1310f3d96e845b9669fe802c3f8ba705e2c79.jpg)
374
+ Fig 17: Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . The snow friction and collision are considered
375
+
376
+ ![](images/b727a3342b0b22c8663aedf4a0a01c0478ba9f6d2b5fc010ad6281939d471693.jpg)
377
+
378
+ The conclusions from the 2D case still hold in the 3D case: $b^{*}$ reaches $h_0$ and $a^{*}$ lies in $[x_0, w]$.
379
+
380
+ Now we analyze the influence of the slope angle $\varphi$. With other parameters fixed, we calculate $a^*$ and $b^*$ for each $\varphi$.
381
+
382
+ It turns out that a steeper slope requires a smaller $a^*$, as shown in Fig. 18. Meanwhile, $b^*$ always equals $h_0$ regardless of $\varphi$. The sensitivity coefficient of $a^*$ with respect to $\varphi$ is -0.4605 at $\varphi = 17.0^{\circ}$.
383
+
384
+ Now we determine a suitable $\varphi$ for the halfpipe. The calculation above only concerns half of the course, from the hang point to the nadir of the curve. Here we complete the whole curve for one trick to work out the best $\varphi$, denoted $\varphi^{*}$.
385
+
386
+ ![](images/c28638e1120c247b9a33f42438c8ec60087f0260b0f93e71ffc875b0aaa5c1be.jpg)
387
+ Fig 18: Left: $E_{t}$ -a curve under different $b$ ; Right: $E_{t}$ -b curve under different $a$ . The snow friction and collision are considered
388
+
389
+ ![](images/540b702e89a4a28c23147faf4c6f9ebd4dc417ceba0c730a50488d4a9bbfbff6.jpg)
390
+ Fig 19: $h_f$ - $\varphi$ curve
391
+
392
+ We claim that a similar hang height above the deck on each side of the halfpipe is the sign of a good $\varphi$. The reason is as follows. In a round of a snowboard contest, the player performs 6-7 tricks on both sides of the halfpipe. A small $\varphi$ would mean the player never takes air, while too large a $\varphi$ would lead to higher and higher hang heights, which is dangerous for the player. Therefore, we look for the $\varphi^{*}$ that yields the same hang height above the deck on both sides. The initial hang height above the ground $h_i$ is still assumed to be $9.0\mathrm{m}$; since the halfpipe is symmetric, $\varphi^{*}$ should make the final height above the ground $h_f$ equal to $9.0\mathrm{m}$. We calculate the $h_f$-$\varphi$ relationship in Fig. 19, fit the data using cubic spline interpolation, and finally find the intersection of the $h_f$-$\varphi$ curve with the line $h_f = 9.0\mathrm{m}$. Thus $\varphi^{*}$ is obtained: $17.1^{\circ}$.
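The intersection $h_f(\varphi) = 9.0\,$m can be located from a handful of simulated samples by interpolation and bisection. The sketch below uses hypothetical sample values and piecewise-linear interpolation for brevity (the paper fits a cubic spline to the real simulation output):

```python
# hypothetical (phi in degrees, h_f in metres) samples around the crossing;
# real values would come from the 3D simulation described above
samples = [(15.0, 10.1), (16.0, 9.6), (17.0, 9.05), (18.0, 8.5), (19.0, 7.9)]

def h_f(phi):
    """Piecewise-linear interpolant of the sampled h_f(phi) curve."""
    for (p0, v0), (p1, v1) in zip(samples, samples[1:]):
        if p0 <= phi <= p1:
            return v0 + (v1 - v0) * (phi - p0) / (p1 - p0)
    raise ValueError("phi outside sampled range")

# bisection for h_f(phi*) = 9.0; h_f is decreasing on this range
lo, hi = 15.0, 19.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h_f(mid) > 9.0 else (lo, mid)
phi_star = 0.5 * (lo + hi)
```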
393
+
394
+ The corresponding $(a^{*},b^{*})$ for $\varphi^{*} = 17.1^{\circ}$ is $(4.42\mathrm{m},5.4\mathrm{m})$.
395
+
396
+ # 6 Tailors and Tradeoffs
397
+
398
+ The analysis above is purely theoretical, based on some ideal assumptions. In a real game, the player is not a mass point without rotation, and the goal is not simply to reach the highest point. Practical situations are much more complicated, and more factors should be taken into consideration. So we tailor the halfpipe to balance various requirements.
399
+
400
+ # 6.1 Safety Analysis
401
+
402
+ No one wants to get hurt in games, so safety requirements should always come first. Take the design of the edge angle as an example. In the theoretical halfpipe, the slope angle near the edges, denoted $\theta_{e}$, should be close to $90^{\circ}$. However, such a large $\theta_{e}$ means less holding force from the halfpipe on the player, which increases the possibility of the player flying off the track too early. At the same time, when the player falls from the air onto the surface near the halfpipe's edge, the shock is proportional to $\cos \theta_{e}$, so a small $\theta_{e}$ may increase the shock, hurting players or making them lose their balance. In conclusion, $\theta_{e}$ should be neither too large nor too small. Usually it is set from $83^{\circ}$ to $88^{\circ}$ [3].
403
+
404
+ The slope angle of the halfpipe, $\varphi$, is another factor related to safety. A larger $\varphi$ means higher jumps but also greater danger. Therefore, when adjusting $\varphi$ in our design, we only slightly increase it from the theoretical value $17.1^{\circ}$ to about $17.5^{\circ}$. This is also the actual slope angle used in regular games nowadays.
405
+
406
+ Besides, to minimize the energy loss caused by friction, the flat bottom should be abandoned (in other words, $a$ should equal $w$). However, we keep this part to give the player more time to prepare for the next jump.
407
+
408
+ # 6.2 The Construction
409
+
410
+ The elliptical shape is not a major challenge in actual construction. Building the precipitous ramp may increase the difficulty, since a sharp cliff cannot hold snow firmly on its surface. Therefore, decreasing $\theta_{e}$ from nearly $90^{\circ}$ to $85^{\circ}$ also meets construction requirements.
411
+
412
+ # 6.3 The Twist
413
+
414
+ In regulation games, players are required to perform more twists in the air. The angular velocity for twists is obtained by twisting the waist and pushing off the ramp. To help players perform more twists, we should give them more time in the air and offer a safer side wall to push off from. So on one hand, we could increase $\varphi$ to speed the player up within the safe range; on the other hand, as mentioned above, the edge angle should be a little smaller than $90^{\circ}$.
415
+
416
+ # 7 Strengths and Weaknesses
417
+
418
+ # 7.1 Strengths
419
+
420
+ - We do not construct a complex model with all the variables together. Instead, we build a simple basic model first and then introduce corrections step by step. Therefore, it is easier to analyze the effect of different factors separately.
421
+ - We have taken many kinds of different factors into consideration and made all the analyses systematically and comprehensively.
422
+ - The halfpipe size and the slope angle obtained by our model fit the real data well. That is to say, our model is successful in practical application.
423
+ - Many figures illustrate the relations between different variables visually. This is more accessible and easier to analyze than complicated function expressions.
424
+ - We obtain the optimal size for elliptical half pipe, which is more general than circle curve and still easy and quick to construct in reality.
425
+ - We analyze the sensitivities of the results to different variables not only qualitatively, but also quantitatively, by introducing the sensitivity coefficients.
426
+
427
+ # 7.2 Weaknesses
428
+
429
+ - We ignore the influence of the snowboarder's movements on the height of vertical air, which may reduce the computational accuracy. For example, twists and jumps may affect the mechanical energy, and standing or crouching postures may affect the velocity.
430
+ - For simplicity, the direction of velocity is considered unchanged throughout the movement, while in fact a skilled snowboarder may change his moving direction to achieve complex tricks, so the moving path may not lie in one plane.
431
+ - We use ellipse as our final approximation, which may not be the best curve to guarantee the highest vertical air.
432
+
433
+ # 8 Conclusion
434
+
435
+ In this paper, we study the shape of a snowboard course and its influential factors from an energy perspective. The main idea is to measure the "vertical air" by the final energy. After making several assumptions, we build our basic model, which is simple but sufficient to offer a fundamental picture and reveal the essential interactions among different factors. We carry out the force analysis, set up the kinematical equations, and obtain the general form of the final mechanical energy analytically.
436
+
437
+ After that, two specific types of transition curve are introduced to calculate the optimal design. In the quarter-circle model, we obtain analytical solutions and find the best radius $R^*$ of the circle, while in the quarter-ellipse model, we use numerical simulation to get the optimal semi-major axis $a^*$ and semi-minor axis $b^*$. We then develop the 2D quarter-ellipse model into a more realistic and complex case involving the third dimension of the halfpipe. Through numerical computation, we find the most suitable slope angle of the halfpipe to be $17.1^\circ$. Next, the sensitivities to different variables are analyzed both qualitatively and quantitatively, which helps us compare the significance of these variables in building the best halfpipe.
438
+
439
+ At last, the tailors and tradeoffs to develop a "practical" course are discussed. After taking construction difficulty, players' safety and their performance into consideration, our result becomes more practical.
440
+
441
+ In conclusion, through systematic and comprehensive analysis of various factors, we find a reasonable shape for the snowboard course, with $a = 4.42\mathrm{m}$, $b = 5.4\mathrm{m}$ and $\varphi = 17.5^{\circ}$. Our results fit the real data well.
442
+
443
+ # References
444
+
445
+ [1] http://www.stuff.co.nz/southland-times/sport/2705175/cardronas-halfpipe-opens-for-season.
446
+ [2] http://en.wikipedia.org/wiki/half-pipe.
447
+ [3] International Ski Federation. Snowboard resort information sheet.
448
+
449
+ [4] Zhiming Bai and Hailiang Yang. Study on several factors affecting the coefficient of friction between snow and ski. Research and Exploration in Laboratory, 25(11):1360-1362, 2006.
450
+ [5] Rainer Kress. Numerical analysis. Springer Press, 1998.
451
+ [6] http://baike.weather.com.cn.
MCM/2011/B/10496/10496.md ADDED
@@ -0,0 +1,552 @@
1
+ # Summary
2
+
3
+ Our task is to construct an efficient network of radio repeaters that can accommodate 1,000 simultaneous users on a spectrum from 145 to $148\mathrm{MHz}$ in a circular, flat area of radius 40 miles. In the United States, this frequency range is used by licensed amateur "ham" radio operators. Because the United States government requires ham radio operators to undergo a certification process, we assume that we can implement protocols that the users will follow when using the network. Our goal is to minimize the number of repeaters used by the network while also providing a reasonable amount of service to users.
4
+
5
+ We approach this problem by trying to construct a network that places more repeaters in regions with the highest population density. We utilize privacy lines in a systematic way to allow parallel communication and minimize interference across our network. To form the basis of our network, we use a clustering algorithm to group points in areas of dense population and then place repeaters at the centroids of each region. If the resulting graph is disconnected, this preliminary network is connected using a minimum spanning tree algorithm.
6
+
7
+ In order to test our network, we generated sample population data using a clumped distribution model. The model produces data with a distribution similar to actual density data taken from a flat region in the Southeastern United States. We measure the coverage of our network as the percentage of users within range of the network's repeaters. We calculate the transmission capacity of the network through repeated simulations of users attempting to send messages to other users in the network. Sensitivity analysis shows this metric is accurate to within $1.5\%$ error. Our model significantly increases transmission capacity, successfully carrying up to $98\%$ of attempted transmissions over the network. This compares to a maximum of $32\%$ for a naive approach of evenly distributing repeaters throughout the area.
8
+
9
+ We provide color-gradient plots for different population distributions and plots of our generated networks for these distributions in our paper. Our algorithm works very well for most clumped populations. However, it is limited in the extent to which we can cluster the data. In addition to this information, we provide a method for constructing subnetworks around nodes with high load, a strategy that can improve the ability of the network to accommodate larger numbers of users.
10
+
11
+ # Clustering on a Network
12
+
13
+ # MCM Contest Question B
14
+
15
+ Team # 10496
16
+
17
+ February 14, 2011
18
+
19
+ # Contents
+
+ 1 Introduction
+ 1.1 Background
+ 1.2 The Problem
+ 2 Assumptions
+ 3 Metric
+ 3.1 Measuring Transmission Capacity
+ 3.2 Generating Requests
+ 3.3 Measuring Overall Quality
+ 4 Approach
+ 4.1 Modeling Population Density
+ 4.2 Linking Strategies
+ 4.2.1 Two Different Strategies
+ 4.2.2 Bi-directional and Extended Chains
+ 4.3 Naive Approach
+ 4.4 Implementing Privacy Lines to Reduce Interference
+ 4.5 K-Means Clustering
+ 4.5.1 Connecting Clusters
+ 4.5.2 Assigning Channels
+ 5 Results
+ 5.1 Procedure
+ 5.2 Number of Repeaters
+ 5.3 Coverage
+ 5.4 Transmission Capacity
+ 6 Discussion
+ 6.1 Effects of 10,000 Users
+ 6.2 Effects of Mountainous Terrain
+ 7 Conclusion
+ 8 Future Work
+ 8.1 Limitations of our model
+ 8.2 Ways to improve the model
+ A Appendix
67
+
68
+ # 1 Introduction
69
+
70
+ # 1.1 Background
71
+
72
+ The goal of this paper is to present an extendible, flexible algorithm for the optimal placement of a repeater network intended to serve users along the 145-148MHz range. In the United States, this particular band of frequencies corresponds to those used by amateur "ham" radio users [6]. Because of how radio repeaters work, interference is a significant, persistent problem. With more than 700,000 licensed amateur radio operators in the United States, there is a real need to further develop strategies to optimize the number of repeaters needed to service an area as well as methods for managing interference [3]. This need is especially acute in extremely flat areas, where the repeaters' limited maximum range requires chains of repeaters to be used.
73
+
74
+ # Technological Background
75
+
76
+ Radio waves in the 145-148MHz range are considered very high frequency (VHF). Very high frequency waves are often used for short-range radio communication. For one thing, they are generally not reflected much by the ionosphere, so communication via VHF waves is largely limited to slightly more than line-of-sight communication. The line-of-sight property is so important that in a flat area, you can make a reasonable approximation of a transmitter's broadcast radius simply based on its height. A rough approximation of the line-of-sight horizon (in miles) for an antenna of height $h$ (in feet) is:
77
+
78
+ $$
+ \mathrm{distance} = \sqrt{1.5 \cdot h}
+ $$
81
+
82
+ [4]
83
+
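The approximation above is simple enough to sketch directly (Python is used here and throughout purely for illustration; the function name is ours):

```python
import math

def horizon_miles(antenna_height_ft: float) -> float:
    """Approximate line-of-sight radio horizon in miles for an antenna
    of the given height in feet, using distance = sqrt(1.5 * h)."""
    return math.sqrt(1.5 * antenna_height_ft)

# A 40-foot antenna sees roughly sqrt(1.5 * 40) = sqrt(60), about 7.7 miles.
```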
84
+ While the limited range of VHF transmissions provides one avenue for avoiding interference with other transmissions, it also limits the broadcasting range of an individual user. On flat ground, the broadcast radius of an average handheld amateur radio is somewhere in the neighborhood of 3-5 miles [2]. For amateur radio enthusiasts who want to be able to communicate with other people like them, this is an extremely limited range.
85
+
86
+ To address this issue, amateur radio users can tap into any of hundreds of radio "repeaters" distributed across the country. What a repeater does is clear from its name: it repeats the signals it receives, often with additional power or from a higher vantage point. The type of repeater available for our use is called a "duplex" repeater. The repeater is tuned to pick up a particular frequency (in the 145-148MHz range), and then outputs the same signal at a frequency offset by $600\mathrm{kHz}$ . Because repeaters have a much larger range than the average user due to their increased height, they allow an amateur radio operator to broadcast to a much wider audience.
89
+
90
+ The ability to broadcast across a much wider area is a positive one for amateur radio operators, but it does present certain challenges. Most importantly, offering users a greater broadcast range greatly amplifies the likelihood of interference. One tool that has been used to address the problem is the "continuous tone-coded squelch system" (CTCSS) technology. The idea is clever and simple: each repeater is associated with a particular subaudible tone. A given repeater remains in a deactivated state until it hears a signal that is of the proper frequency and contains its subaudible tone. As long as the repeater hears the signal with the subaudible tone, it will remain active and will rebroadcast any signals it receives at its input frequency.
91
+
92
+ # 1.2 The Problem
93
+
94
+ We have been tasked with determining the minimum number of repeaters needed to accommodate a particular number of users - initially, 1000 - spread across a flat, circular area of radius 40 miles. In part because we are dealing with the relatively narrow range of 145-148 MHz, we have 54 distinct privacy lines at our disposal to help mitigate potential interference problems.
95
+
96
+ The problem statement mentions that this network must be able to accommodate a certain number of "simultaneous users". This is a vague phrase that is open to a number of interpretations. One way to interpret it is to say that there are 1,000 people making demands on the radio network at any given time. For a network of repeaters used for amateur radio, this is completely unrealistic from an interference standpoint. As a consequence, for the purpose of this problem, we defined "1,000 simultaneous users" to mean "1,000 people who might make demands on the system", where the average number of requests per user is a parameter of our model.
97
+
98
+ # 2 Assumptions
99
+
100
+ Over the course of our analysis, we make a number of assumptions both to simplify the problem and to more clearly define it. We believe that the majority of these assumptions are reasonable and based on solid real-world justification; we will note when we deviate from this practice.
103
+
104
+ First, over the course of this paper we assume that both repeaters and individual users broadcast in a radially symmetric manner. That is, their broadcast range can be represented by a circle on a two-dimensional map. Given the fact that transmissions in the VHF range are largely dependent on having a line-of-sight connection, this makes sense for a flat area - transmission range is going to be limited mostly by the curvature of the earth, something that is roughly constant in each direction. In addition, we will make the assumption that if the broadcast radii of two repeaters touch, then they are able to communicate with one another. This is a reasonable assumption as well - the broadcast radius measures how far a source can broadcast to a point on the ground. Provided that power is not an issue and the radio waves don't fade too much with distance, the distance a 40-foot-high antenna will be able to broadcast to another 40-foot-high tower will be roughly twice its broadcast range [9].
105
+
106
+ Second, we assume that the radio waves are traveling instantaneously. Because radio waves travel at near the speed of light, this is not a huge distortion, especially at scales of eighty miles or less [8]. While a wave that needs to travel through several repeaters to get to its destination will suffer a noticeable delay, it will not be more than a second or two. In practice, this requires users to build in a buffer between receiving and transmitting signals. For the purpose of analyzing interference between two 30-second or minute-long signals, this buffer is rather small. As a consequence, we will use a set interval and ignore any requisite buffer when running any network load simulations.
107
+
108
+ Third, any repeater we use can have a maximum range of about 9 miles. This corresponds to an antenna height of about 40 feet [9]. Across a flat area that does not have any significant rises upon which to place an antenna, this is a reasonable maximum antenna height. An "average" user — presumably in possession of a radio at least as powerful as a handheld amateur radio — is assumed to have a radial broadcast distance of roughly 3 miles. For the purposes of this paper, we assumed that a handheld radio could transmit a maximum of 5 miles to another handheld and assumed that a handheld radio could only transmit to a repeater if that repeater could reach the handheld radio. To account for power constraints on these mobile devices that could cause degradation of waves over large distances [4], we rounded down to 3 miles when considering the maximum transmission distance from a handheld device.
109
+
110
+ Fourth, the very real threat of interference as well as amateur radio licensing requirements have led to very widely-adopted broadcasting codes of conduct [10]. Before broadcasting on a particular frequency or using a nearby repeater, users are required to listen on all privacy lines for several minutes to see if it is in use. If it is not, then the user will make his or her broadcast. Therefore, during analysis, we assumed that users are both capable and polite. That is, they do not make requests when a local tower is in use.
113
+
114
+ Fifth, while this rules out the possibility of interference at a local level, it does not rule out the possibility of interference across a long chain of connected repeaters. Ideally, this type of interference would count as a "failed attempt to communicate". However, to simplify and accelerate the model-development process, we made a temporary assumption that, due to time constraints, ended up becoming permanent. Instead of counting the interfering signals as "failed" requests, we simply make the network unable to process two conflicting requests at the same time. So while this type of interference counts against a network, it isn't penalized as much as it should be.
115
+
116
+ Sixth, it makes sense to think about a minimum spacing such that two frequencies are clearly distinguishable from one another. In our case, we simply went with the information that was given to us. Presumably, there is a good reason that the repeaters we are given output a signal that is offset $600\mathrm{kHz}$ from the signal they receive. For the purposes of this paper, we assume that it is because $600\mathrm{kHz}$ is a good minimum discrepancy.
117
+
118
+ Seventh, we assume that each user places a roughly equal strain on the network. While this would certainly not hold true in the real-world, over a large body of users things will average out and this will tend to be a reasonable approximation.
119
+
120
+ Eighth, we assume that the location we are dealing with is in the United States. This assumption is what tells us that we are in fact dealing with constructing a repeater network for use by amateur radio operators.
121
+
122
+ Finally, we assume that the distribution of calls being made is strongly biased toward local connections. This is a strong assumption, but we feel it is also a very reasonable one. In the real world, extended chains of repeaters have very difficult issues managing interference — unless the signal is passing solely through a user's local repeater, there's no way to tell when it will be possible to make a call to a distant location. As a consequence, linked chains of repeaters are most frequently used for public service announcements during emergencies or in areas where there is a very limited amount of demand CITE. Simply looking at the numbers, it is extremely unlikely that it is possible to accommodate a large number of users unless each makes a minimal demand on the network as a whole. In this light, a heavy bias toward calls that use a small number of repeaters is reasonable.
125
+
126
+ # 3 Metric
127
+
128
+ # 3.1 Measuring Transmission Capacity
129
+
130
+ In order to get some sense for whether or not a given solution was valid, we needed to develop some way to measure a given solution's efficacy. More importantly, for a given network we needed to be able to return a yes or no answer for whether or not that network "accommodates" its users. In order to develop a reasonable metric, we first asked "what, fundamentally, does an amateur radio operator want a repeater network to do?" At a basic level, the answer is very simple: an amateur radio user is going to want to broadcast something. The user will be satisfied if he or she is able to do this in a reasonable, timely manner, with a minimum number of interference-related issues.
131
+
132
+ With this definition of accommodation in mind, there is a very straightforward way to define our metric. One network is better at accommodating its users than another if it can successfully process more transmission attempts than another. Similarly, we can designate a network as satisfactory if it can handle a minimum number of transmission attempts over a given time period. Expressing this as a ratio, we have:
133
+
134
+ $$
+ Q = \frac{\text{transmissions processed}}{\text{transmissions attempted}}
+ $$
137
+
138
+ Note that requests made by or to a user that is not covered by the network are automatically counted as "failed" attempts at communication. The pseudocode in Algorithm 1 describes how we calculated $Q$ .
139
+
140
+ # Algorithm 1 Measure Network Quality
+
+ Require: UserPositions, UserRepeaterAssignment, UserTransmissionNumber, SimulationLength, NetworkAdjacencyMatrix, UserRanges
+
+ Ensure: UnfulfilledTransmissions equals the number of desired user transmissions that could not be completed during the simulation.
+
+ Run GenerateUserTransmissions() with inputs UserRepeaterAssignments, UserTransmissionNumber and SimulationLength to get Transmissions, which represents where each user will transmit to and when they will try to do so.
+ Set TransmissionQueue to be an empty array.
+ Set UnfulfilledTransmissions to zero.
+ for all timesteps in SimulationLength do
+     for all Transmissions in Transmissions do
+         If this transmission is scheduled for this timestep, add it to TransmissionQueue
+     end for
+     Set CurrentTransmissions to be an empty array.
+     for all Transmissions in TransmissionQueue in order do
+         if this transmission is being made by or to a user who is not covered by the network then
+             Increment UnfulfilledTransmissions
+             Remove this transmission from TransmissionQueue
+         else if this transmission will not interfere with any transmissions in CurrentTransmissions then
+             Add this transmission to CurrentTransmissions and remove it from TransmissionQueue.
+         end if
+     end for
+ end for
+ Add the number of elements remaining in TransmissionQueue to UnfulfilledTransmissions.
185
+
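The simulation above can be sketched in a simplified form. This is not the authors' code; it is our reading of Algorithm 1, where each transmission records its start timestep, whether both endpoints are covered, and the set of repeaters it would occupy (two transmissions interfere when those sets overlap). It returns the quality ratio $Q$ directly rather than the raw unfulfilled count:

```python
def measure_quality(transmissions, sim_length):
    """Simplified sketch of Algorithm 1. Each transmission is a dict with
    'start' (timestep), 'covered' (are both endpoints reachable?), and
    'repeaters' (set of repeater ids the call would occupy)."""
    queue, unfulfilled, served = [], 0, 0
    for t in range(sim_length):
        queue += [x for x in transmissions if x["start"] == t]
        active = []          # calls accepted during this timestep
        remaining = []
        for x in queue:
            if not x["covered"]:
                unfulfilled += 1      # out-of-coverage counts as a failure
            elif all(x["repeaters"].isdisjoint(a["repeaters"]) for a in active):
                active.append(x)      # no interference: the call goes through
                served += 1
            else:
                remaining.append(x)   # blocked: retry next timestep
        queue = remaining
    unfulfilled += len(queue)         # still waiting when the simulation ends
    total = served + unfulfilled
    return served / total if total else 1.0
```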
186
+ Though the metric itself is pretty straightforward, it is less clear exactly what proportion of successful communications should be required in order to consider a repeater network successful. As a result, our model takes the required $Q$ as a parameter. This allows someone who has a better idea of how accommodating the network needs to be to set the threshold wherever they desire.
187
+
188
+ # 3.2 Generating Requests
189
+
190
+ In order to generate requests for using the network, we need to assign a starting location and an ending location for each request. Typically, people are more likely to make requests closer to their current location, since they will have more interaction with people locally than over a long distance. To model this distribution of requests by a given person, we use a Poisson distribution to calculate the probability of a call going a distance $l$ between two towers,
191
+
192
+ $$
+ \binom{n}{l} \frac{\lambda^{l}}{l!} \left(1 - \frac{\lambda}{n}\right)^{n - l}.
+ $$
195
+
196
+ Here, $n$ corresponds to the maximum number of towers to which one can send a request. We measure distance in towers rather than miles because people whose towers are closer will generally also have shorter Euclidean distances, giving us the desired effect. In addition, amateur radio users might consider the amount of the network they are using for a call, which would also result in a decay as $l$ increases. We choose $\lambda = 1$ to provide the desired decay of requests as $l$ increases.
197
+
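Setting the garbling of the formula aside, one consistent reading of the model is a Poisson($\lambda$) decay in hop count $l$, truncated and renormalized at the maximum chain length $n$. A sketch under that assumption (function names are ours):

```python
import math
import random

def call_length_pmf(n, lam=1.0):
    """Probability of a call spanning l = 0..n repeater hops: a Poisson(lam)
    pmf truncated at n and renormalized (our reading of the paper's model)."""
    weights = [lam ** l * math.exp(-lam) / math.factorial(l) for l in range(n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_call_length(n, lam=1.0, rng=random):
    """Draw a call length by inverting the cumulative distribution."""
    r, acc = rng.random(), 0.0
    for l, p in enumerate(call_length_pmf(n, lam)):
        acc += p
        if r <= acc:
            return l
    return n  # guard against floating-point round-off
```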
198
+ # 3.3 Measuring Overall Quality
199
+
200
+ There is more that goes into determining the overall quality of a network than whether or not it is able to handle a certain percentage of user requests. For example, there are a huge number of distinct networks (infinite if we don't specify a grid of coordinates) that will all satisfy the minimum accommodation requirements. We need to have some method to distinguish between two "passing" networks. Fortunately, the problem statement makes it very clear what the primary criterion should be, saying: "determine the minimum number of repeaters necessary to accommodate 1,000 simultaneous users". To address this, we examined the solutions generated by our algorithm for a particular number of maximum repeaters used. Using this approach, it is easy to find a lower bound for the number of repeaters needed to adequately accommodate present users with a network generated by our algorithm. Other factors that might be worth considering when comparing the overall quality of networks include factors such as coverage percentage and how they perform with different levels of demand.
201
+
202
+ # 4 Approach
203
+
204
+ # 4.1 Modeling Population Density
205
+
206
+ Our preliminary model of the population distribution within our region was to simply place 1000 people inside the region uniformly at random. However, humans are social animals, so assigning each person a random location independently of the others is a poor model of what actual population distributions look like. It is more common to model a human population with a clumped population distribution, in which people aggregate around certain areas, as commonly happens in the formation of communities. Population density data for different communities suggests that different regions will have different population sizes [5].
207
+
208
+ We modeled the different types of population distributions that can occur by choosing a set number of clumps within our region. We used a variety of different numbers of clumps to account for the different distributions that can occur in a given region. The mean of each clump was selected by generating a random point inside the circle. For a given clump, we modeled the distribution of the population around the center using a multivariate Gaussian distribution. Since the $x$ and $y$ coordinates of a person's location are independent of each other, we used the multivariate density:
209
+
210
+ $$
211
+ f (x, y) = \frac {1}{2 \pi \sigma_ {x} \sigma_ {y}} e ^ {\left[ \frac {(x - \mu_ {x}) ^ {2}}{\sigma_ {x} ^ {2}} + \frac {(y - \mu_ {y}) ^ {2}}{\sigma_ {y} ^ {2}} \right]}.
212
+ $$
213
+
214
+ We used values for $\sigma_{x} = \sigma_{y} = 20$ , which seemed to provide a reasonable range of models for the data. This was determined by looking at actual data for population density in flat areas of the United States and comparing our density distributions to the actual distribution [1].
215
+
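The clump-generation procedure described above can be sketched as follows (a minimal version, assuming clump centers are drawn uniformly over the disk and each person is attached to a uniformly chosen clump):

```python
import random

def clumped_population(n_people, n_clumps, radius=40.0, sigma=20.0, rng=random):
    """Generate a clumped population: sample clump centers uniformly in a
    disk of the given radius (miles), then scatter people around randomly
    chosen clumps with independent Gaussian x/y offsets (sigma = 20)."""
    centers = []
    while len(centers) < n_clumps:
        x, y = rng.uniform(-radius, radius), rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:   # rejection-sample the disk
            centers.append((x, y))
    people = []
    for _ in range(n_people):
        cx, cy = rng.choice(centers)
        people.append((rng.gauss(cx, sigma), rng.gauss(cy, sigma)))
    return people
```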
216
+ # 4.2 Linking Strategies
217
+
218
+ # 4.2.1 Two Different Strategies
219
+
220
+ Before jumping into our overall approach, it makes sense to talk a little bit about the methods at our disposal for creating a chain of repeaters allowing for two-way communication. In order for one repeater to communicate with another, its transmission frequency must be the same as the
221
+
222
+ ![](images/9b6356b0c576686869359d771f0637bb9cf3c63fe0d8c691a2cebd14d846b3f5.jpg)
223
+ (a) Random Population Distribution
224
+
225
+ ![](images/eca66854e308b585c6385e5645b1536536972f25addc22655db8b4ff44404cf2.jpg)
226
+ (b) Clumped Population Distribution with 5 Clumps
227
+
228
+ ![](images/e103d080e0ae365bbd873abe2687a6a07c44e421bccb40380cf5a87ee527c322.jpg)
229
+ (c) Clumped Population Distribution with 10 Clumps (d) Clumped Population Distribution with 20 Clumps
230
+
231
+ ![](images/b70a3223562d44d7d3d28b3ef3226c0d4d8fed8b7683b6f19169593274865bec.jpg)
232
+
233
+ ![](images/a2fcc845f2c6e58928f6e7cafbcec4245c28ce14afdc21859f06030d67e14467.jpg)
234
+ (e) Clumped Population Distribution with 40 Clumps
235
+
236
+ ![](images/fcf086b2c110072a555931552fe9e201109c54ab52f31e9628d28dcefbbbe1e6.jpg)
237
+ (f) Actual Population Distribution
238
+ Figure 1: Pictures of the different population distributions we generated and a comparison to the actual population distribution taken from the Southeastern United States [1]
239
+
240
+ other's receiver frequency. The most basic way to create two-way capability is to, for example, give repeater A a receiving frequency of $145\mathrm{MHz}$ and a transmission frequency of $145.6\mathrm{MHz}$ and have repeater A' be the inverse of that (145.6MHz receiving, 145.0MHz transmission). A deeper analysis shows an immediate, serious flaw with this approach. When a signal is being transmitted from one to the other, it will be (essentially) immediately transmitted back to the first, creating a feedback loop of transmission. As a repeater is actively receiving whenever it is transmitting, there will be some amount of feedback even if we attempt to implement privacy lines in some clever way.
241
+
242
+ The other approach to creating a chain of repeaters is to "stagger" the receiving and transmission frequencies such that the first repeater in a chain transmits at the frequency the second repeater receives on, and so on. From our assumed minimum spacing of $600\mathrm{kHz}$ , we have 5 distinct pairs of frequencies to work with. Let us assign names to the types of repeaters as follows:
243
+
244
+ - A - receiving frequency $= 145.0\mathrm{MHz}$ , transmitting frequency $= 145.6\mathrm{MHz}$
+ - B - receiving frequency $= 145.6\mathrm{MHz}$ , transmitting frequency $= 146.2\mathrm{MHz}$
+ - C - receiving frequency $= 146.2\mathrm{MHz}$ , transmitting frequency $= 146.8\mathrm{MHz}$
+ - D - receiving frequency $= 146.8\mathrm{MHz}$ , transmitting frequency $= 147.4\mathrm{MHz}$
+ - E - receiving frequency $= 147.4\mathrm{MHz}$ , transmitting frequency $= 148.0\mathrm{MHz}$
249
+
250
+ Using these sets of frequencies, we see that a chain like A-B-C-D-E is a viable method for providing more long-range communication. Indeed, this method of chaining is the one we adopt in our approach to the problem.
251
+
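The staggered-frequency scheme above is easy to encode and check mechanically. A small sketch (frequencies held in kHz as integers to avoid floating-point comparison issues; the helper is ours):

```python
# The five staggered repeater types; frequencies in kHz, 600 kHz offset.
# REPEATERS[name] = (receive_kHz, transmit_kHz)
REPEATERS = {name: (145000 + 600 * i, 145600 + 600 * i)
             for i, name in enumerate("ABCDE")}

def valid_chain(chain):
    """A chain is viable when each repeater transmits on exactly the
    frequency the next repeater in the chain receives."""
    return all(REPEATERS[a][1] == REPEATERS[b][0]
               for a, b in zip(chain, chain[1:]))
```

With this encoding, the full A-B-C-D-E chain checks out, while any out-of-order pairing (say A directly to C) fails the receive/transmit match.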
252
+ # 4.2.2 Bi-directional and Extended Chains
253
+
254
+ One issue yet to be addressed by the chaining method described above is how one can transmit in the opposite direction. An effective way of dealing with this issue - and the one we've adopted for the rest of this report - is using the "inverses" of the repeaters enumerated above to transmit in the opposite direction. If we pair an $A'$ repeater with every $B, B'$ with $C, C'$ with $D$ and $D'$ with $E$ , then we can get two-way transmission along this line. Without using privacy lines in a clever way, we still run into the feedback issue we ran into above - we'll describe how we plan to get around this below.
257
+
258
+ In addition to bi-directionality, it's not immediately clear from the above description how to create a chain of repeaters longer than 5. First, given the size of area involved, the range of the repeaters being used, and how each repeater is assigned a type, it's unlikely there will be very many longer chains. However, in the interest of covering extreme cases and allowing this algorithm to be generalized to larger regions, we provide a method for creating longer chains. We could create a very weak-broadcasting $\mathrm{E}'$ tower at the edge of the range of the end of an A-B-C-D-E chain (weak enough that its transmission back to the E repeater is not an issue), pairing it with both a D and a $\mathrm{D}'$ repeater at the same location. The D and $\mathrm{D}'$ repeaters would be set to different privacy lines so they wouldn't be activated simultaneously - if they are activated simultaneously, we can simply shut them both down or have the repeaters time out after a fixed amount of time. After this point, the chain could continue to be extended through linking in a $\mathrm{C} / \mathrm{C}'$ node, then a $\mathrm{B} / \mathrm{B}'$ node, and so on.
259
+
260
+ # 4.3 Naive Approach
261
+
262
+ An extremely simplified way to view this problem is as follows: let us provide a way for anybody to broadcast anywhere by simply blanketing the entire area under consideration with a network of evenly-spaced, connected repeaters (and then removing those repeaters that don't cover any people). There are a number of issues with this approach. First, it's far from efficient - depending on the population distribution, it's very likely that at least some of the given area is at most sparsely populated, and unnecessary to cover with repeaters. Second, if it is implemented by alternating "inverse" repeaters (repeaters who can speak to each other), it's likely there would be significant issues in practice. Without using some kind of privacy lines, there would be no way to prevent some kind of feedback loop from occurring. Using a staggered repeater approach without running into the feedback issue would allow for only one-way transmission, something that would be unacceptable for most amateur radio use.
263
+
264
+ ![](images/c98aa704b55063c47a0e2c84cf27ed355f04479303e8a7edd333ad9ae513bfdb.jpg)
265
+ Figure 2: The network generated by the naive approach described above
266
+
267
+ Finally, even if there were no issue with feedback, a single, blanketing network would allow for very little usage. The user would have two options: either he or she could talk locally without using a repeater or he or she could broadcast to the entire area. As a consequence, only a single user could use the network at any given time. For a large number of users, this would be untenable.
268
+
269
+ # 4.4 Implementing Privacy Lines to Reduce Interference
270
+
271
+ As mentioned earlier in the document, using privacy lines can help mitigate interference. However, they need to be used carefully in order to be of much use. Two repeaters with differing privacy lines that are within range of one another can still interfere with one another if both are activated and broadcasting at the same time. Indeed, with the node layout above, there is really no "direction" of talk, and broadcasts take multiple paths to get to the same location. Because of these complexities, there's really no way to assign privacy lines to organize communications in a controlled, rational manner.
272
+
273
+ One common method of organization for repeater networks is to assign one repeater to be the central "hub" of the entire network CITE. Assigning a hub affords us a number of benefits. For one thing, it lets us assign a direction to each broadcast - each transmission can either be "toward" or "away from" the hub. In addition, it lets us create subnetworks across which users can talk without monopolizing the whole network, something that has very important benefits. Along with our assumption about the distribution of calls, these subnetworks let us localize demand - a call between two people along a certain subnetwork can be entirely confined to that subnetwork, leaving the rest of the network free for others to use.
276
+
277
+ Being able to categorize a signal as moving in a particular "direction" does no good if we cannot actually control how the signal propagates through the network. Below is an image illustrating the method we use to achieve directional control:
278
+
279
+ ![](images/5a64541f117e10bf86d2c299e573828b47cac40504252da167a8c65b74ef51f8.jpg)
280
+ Figure 3: A typical network generated by our algorithm. The colors correspond to the particular type of repeaters present at that location. Dark blue $= \mathrm{A}$ , light blue $= \mathrm{B} / \mathrm{A}'$ , yellow $= \mathrm{C} / \mathrm{B}'$ and maroon $= \mathrm{D} / \mathrm{C}'$ .
281
+
282
+ As can be seen above, most repeaters have two privacy lines associated with them. This is a destination label rather than the particular privacy lines accepted there. The above system works as follows. First, the red numbers represent the privacy lines associated with incoming calls destined for other sub-branches of the network. For example, if repeater 6/13 wants to send a message to repeater 8/15, it would broadcast a signal on privacy line 15. The signal from 6/13 will be picked up by both 5/11 and 4/12 and then rebroadcast with privacy line 15 (under our assumption of instantaneous transmission, this is OK; however, in practical applications, this is a potential issue, and is addressed in our "Limitations" section). When it reaches the hub, the hub hears the signal and rebroadcasts it with the outgoing privacy line associated with repeater 15 (in this case, 8). The node 7/14 is the only $\mathrm{B / A^{\prime}}$ node to recognize that privacy line, and so it is the only one to pick it up and rebroadcast it (again, with privacy line 8). This is heard by the 8/15 node, which in turn broadcasts a signal with privacy line 54, indicating that the transmission is not to be passed any further.
285
+
286
+ That example was a little bit long, so let us review the core idea. The location of each pair of repeaters is associated with two privacy lines, one ingoing and one outgoing. Outgoing repeaters (A-B-C-D-E) accept the privacy line associated with their current location as well as the outgoing privacy lines associated with locations that they are "on the way" to, from the perspective of the hub. Ingoing repeaters also accept the ingoing privacy lines associated with repeaters that they are "on the way" to. This includes locations on the way to the hub on their subnetwork as well as those repeaters on other subnetworks. Effectively, users are free to communicate within their subnetworks without having to worry about affecting broadcasts in other subnetworks. The net result is a reduction of interference and an increase in the network's overall capacity.
287
+
288
+ # 4.5 K-Means Clustering
289
+
290
+ One of the weaknesses of the naive algorithm is its failure to adapt to the population distribution that it is presented with. Heuristically speaking, the best way to cover as many users as possible with as few repeaters as possible is to center repeaters in areas with high population density. This approach is especially effective in realistic population distributions since users are likely to be clumped together in regions of high density.
291
+
292
+ For this reason, clustering is an important part of our repeater network generation algorithm (which is outlined in the Main Network Generation Algorithm, included below). We use the $k$ -means clustering algorithm, which seeks to group $n$ data points into $k$ clusters (where $k$ must be specified) so that each data point is assigned to the cluster with mean closest
293
+
294
+ ![](images/869bd9b32667581bb86b4fb4f57fb8d94445cafed9db62ef18bb1417e3aeafa3.jpg)
295
+ Figure 4: k-means clustering on a population
296
+
297
+ to its value [7]; we use it to assign each user to a cluster. Conceptually, $k$ -means clustering assigns data points of similar value to the same cluster. In our case, this means that users situated close together are grouped together. We then place a repeater at the mean location (or centroid) of each cluster. In general, this gives us a list of repeater locations in regions of high density. In realistic population distributions, these repeater locations are likely to encompass population centers.
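To make the clustering step concrete, here is a minimal Python sketch (our illustrative code, not the implementation used for the paper; Lloyd's algorithm with random initialization is an assumption here, as are the function and variable names):

```python
import numpy as np

def place_repeaters(user_positions, k, iters=50, seed=0):
    """Sketch of the clustering step: run Lloyd's k-means on the user
    coordinates and return the k centroids as repeater locations."""
    rng = np.random.default_rng(seed)
    # Initialize each centroid at a randomly chosen user position.
    centroids = user_positions[rng.choice(len(user_positions), size=k, replace=False)]
    for _ in range(iters):
        # Assign every user to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(user_positions[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean position of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = user_positions[labels == j].mean(axis=0)
    return centroids, labels
```

Because the initialization is random, different seeds can yield different networks for the same population - the non-determinism we return to in Section 8.1.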
298
+
299
+ # Algorithm 2 Main Network Generation Algorithm
300
+
301
Require: UserPositions, MaximumRepeaterRange, UserRanges, NumberOfClusters
302
+
303
+ Ensure: RepeaterPositions contains a list of all of the repeater positions, Channels contains their channel assignments.
304
+
305
+ Run Cluster() algorithm with inputs UserPositions and NumberOfClusters to cluster the data, obtaining the positions of the centroids of the clusters.
306
+
307
+ Set RepeaterPositions equal to the positions of the centroids.
308
+
309
+ Run GetRepeaterRanges() with inputs UserRanges, RepeaterPositions, and MaximumRegionRange to obtain RepeaterRanges
310
+
311
+ Run ConnectComponents() with inputs RepeaterPositions, RepeaterRanges and MaximumRepeaterRange to obtain new values for those variables. This connects any disconnected repeaters.
312
+
313
+ Obtain RepeaterAdjacencies, the adjacency matrix of the network by determining which repeaters are within range of each other.
314
+
315
+ Run AssignChannels() with input RepeaterAdjacencies to obtain Channels, which contains the channel assignment for each repeater.
316
+
317
+ Note: The Cluster() algorithm referred to above is simply the $k$ -means clustering method described earlier in this section.
318
+
319
+ # 4.5.1 Connecting Clusters
320
+
321
+ By using a clustering process, we can obtain a specified number of positions at which to place repeaters to extend coverage to users. While some of these repeaters might be within range of each other, it is unlikely that they form a network with a single component. It is far more likely, especially when working with the clumped population models discussed above, that there exist at least two disconnected components in our repeater network. As mentioned previously, we would like all users within range of a repeater to be able to communicate with any other user that is within range of a repeater. Thus, we must somehow connect the disconnected components of our network. Since our goal is to minimize the number of repeaters used, we would like to do so using as few repeaters as possible.
322
+
323
+ Let $d_{i,j}$ be the minimum distance between any node in component $i$ and any node in component $j$ . Consider the components of our network to be the nodes of a graph where the weight of the edge between component $i$ and component $j$ is $d_{i,j}$ . In conceptual terms, the most efficient way to span this graph is by constructing a minimum spanning tree. Similarly, the shortest way to connect the components in our network is by constructing a minimum spanning tree using repeaters. This will ensure that all of our network components are connected while using as few repeaters to do so as possible.
324
+
325
+ We used Algorithm 3 (detailed in pseudocode below) to accomplish this goal.
326
+
327
+ # Algorithm 3 Connect Components
328
+
329
+ Require: RepeaterPositions, RegionRanges, MaxRegionRange
330
+
331
+ Ensure: All of the repeaters in RepeaterPositions are part of the same component of the repeater graph. Repeaters are connected if they are within range of each other.
332
+
333
+ while NumberOfComponents $>1$ do
334
+
335
+ Let minDist be the minimum distance between two repeaters in different components. Let startRepeaterPosition and endRepeaterPosition be the positions of these repeaters.
336
+
337
+ Let numRepeatersRequired be the number of repeaters required to span minDist if their ranges are MaxRegionRange.
338
+
339
+ Add a repeater minDist/numRepeatersRequired miles away from startRepeaterPosition in the direction of endRepeaterPosition. Note that this repeater will be part of the same network component as startRepeater.
340
+
341
+ # end while
342
+
343
+ Note: This algorithm essentially constructs a minimum spanning tree of a graph containing the disconnected repeater components in our network as nodes. We then use that minimum spanning tree as part of our network to ensure continuity.
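As a concrete rendering of Algorithm 3 (our own illustrative Python with assumed names, not the paper's actual code), repeaters can be treated as points in the plane, with two repeaters adjacent when they lie within `max_range` of each other:

```python
import math
from itertools import combinations

def components(positions, max_range):
    """Partition repeaters into connected components via union-find;
    two repeaters are linked when within max_range of each other."""
    parent = list(range(len(positions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(positions)), 2):
        if math.dist(positions[i], positions[j]) <= max_range:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def connect_components(positions, max_range):
    """Bridge the closest pair of components with evenly spaced
    repeaters, repeating until one component remains (Algorithm 3)."""
    positions = [tuple(p) for p in positions]
    while len(comps := components(positions, max_range)) > 1:
        # Closest pair of repeaters lying in two different components.
        _, i, j = min((math.dist(positions[i], positions[j]), i, j)
                      for a, b in combinations(comps, 2) for i in a for j in b)
        (x0, y0), (x1, y1) = positions[i], positions[j]
        gap = math.dist(positions[i], positions[j])
        n = math.ceil(gap / max_range) - 1   # repeaters needed to span the gap
        for k in range(1, n + 1):            # place them at even intervals
            t = k / (n + 1)
            positions.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return positions
```

Since the two closest repeaters in different components are more than `max_range` apart by definition, at least one bridging repeater is added per pass, so the loop always terminates.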
344
+
345
+ # 4.5.2 Assigning Channels
346
+
347
+ Once the network is connected, channels must be assigned to implement the interference-reducing structure described in Section 4.4.
348
+
349
+ First, we must choose a repeater to serve as the hub of the network. We prefer a hub which has the least average geodesic distance to other nodes in the network (also known as the greatest closeness centrality) for a number of reasons. First, this reduces the likelihood that we will need to include an extended chain (see section 4.2) in our network, which requires more nodes than a bi-directional chain. Also, choosing a hub with the least average geodesic distance minimizes the number of nodes that will be tied up by a message passing through the hub.
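A minimal sketch of this hub selection (our own illustrative Python, computing hop-count closeness by breadth-first search over an assumed repeater adjacency list):

```python
from collections import deque

def choose_hub(adjacency):
    """Return the repeater with the smallest total geodesic (hop)
    distance to all others, i.e. the greatest closeness centrality.
    `adjacency` maps each repeater index to a list of its neighbours."""
    def total_distance(src):
        # Breadth-first search yields hop distances from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return sum(dist.values())
    # The network is connected by this point, so the totals are comparable.
    return min(adjacency, key=total_distance)
```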
350
+
351
+ Once we have chosen a hub, we assign channels of increasing number stepping outward from the hub as described in Algorithm 4 below. Note that repeater frequencies are assigned according to the table below:
352
+
353
<table><tr><td>Channel Number</td><td>Frequencies</td></tr><tr><td>1</td><td>A</td></tr><tr><td>2</td><td>B/A&#x27;</td></tr><tr><td>3</td><td>C/B&#x27;</td></tr><tr><td>4</td><td>D/C&#x27;</td></tr><tr><td>5</td><td>E/D&#x27;</td></tr></table>
354
+
355
+ # Algorithm 4 Assign Channels
356
+
357
+ Require: RepeaterAdjacencies, the adjacency matrix for our repeater network.
358
+
359
+ Ensure: Channels contains a list of the channel assignments of each repeater such that each repeater is of the proper channel to communicate with its neighbor closest to the hub of the network.
360
+
361
+ Find the repeater with the highest closeness centrality in the network. This is our hub repeater. Assign it channel 1.
362
+
363
+ Let CurrentChannel $= 1$
364
+
365
+ Let ConnectedRepeaters contain all of the repeaters connected to the hub.
366
+
367
+ while At least one repeater does not have a channel do
368
+
369
+ Let NextConnectedRepeaters be an empty array
370
+
371
+ for all Repeaters in ConnectedRepeaters do
372
+
373
+ Assign the repeater to channel CurrentChannel $+1$
374
+
375
+ Add all of the repeaters connected to the repeater that do not have channels to NextConnectedRepeaters
376
+
377
+ end for
378
+
379
+ Increment CurrentChannel
380
+
381
+ Let ConnectedRepeaters equal NextConnectedRepeaters
382
+
383
+ end while
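Algorithm 4 is a breadth-first layering outward from the hub; a compact sketch (our illustrative Python under assumed names; note it does not enforce the five-channel cap from the frequency table above):

```python
from collections import deque

def assign_channels(adjacency, hub):
    """Breadth-first channel assignment (Algorithm 4): the hub gets
    channel 1, and every other repeater gets one channel more than
    its neighbour closest to the hub."""
    channels = {hub: 1}
    frontier = deque([hub])
    while frontier:
        u = frontier.popleft()
        for v in adjacency[u]:
            if v not in channels:
                channels[v] = channels[u] + 1
                frontier.append(v)
    return channels
```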
384
+
385
+ # 5 Results
386
+
387
+ # 5.1 Procedure
388
+
389
+ We evaluate our model by attempting to measure the overall quality of the network. The first metric we measure is the coverage of the network, assessed by counting the number of people not covered by the network. In addition, we generate results of our algorithm for different cluster counts, which affects the number of repeaters used in the network. The last metric we use is the transmission capacity of our network, which is very
390
+
391
+ <table><tr><td rowspan="2"># of clusters</td><td colspan="5">Population Distribution</td></tr><tr><td>Random</td><td>5 clumps</td><td>10 clumps</td><td>20 clumps</td><td>40 clumps</td></tr><tr><td>5</td><td>17.00 ± 0.00</td><td>12.60 ± 4.98</td><td>16.60 ± 0.89</td><td>17.00 ± 0.00</td><td>17.00 ± 0.00</td></tr><tr><td>10</td><td>36.20 ± 1.10</td><td>24.60 ± 5.90</td><td>27.80 ± 3.63</td><td>26.60 ± 2.61</td><td>31.00 ± 4.24</td></tr><tr><td>15</td><td>43.40 ± 2.19</td><td>37.00 ± 5.10</td><td>39.00 ± 1.41</td><td>37.00 ± 2.83</td><td>40.60 ± 2.97</td></tr><tr><td>20</td><td>55.00 ± 2.45</td><td>49.00 ± 6.48</td><td>50.60 ± 2.97</td><td>50.60 ± 2.97</td><td>51.00 ± 3.16</td></tr><tr><td>25</td><td>74.60 ± 3.85</td><td>61.80 ± 4.82</td><td>62.20 ± 1.79</td><td>64.20 ± 3.35</td><td>65.80 ± 3.35</td></tr><tr><td>Naive</td><td>43.20 ± 0.84</td><td>25.20 ± 1.92</td><td>33.00 ± 2.45</td><td>33.60 ± 2.61</td><td>40.00 ± 1.58</td></tr></table>
392
+
393
+ Table 1: Mean number of repeaters that were used in constructing the network
394
+
395
+ important for the network to be usable. We measure this for both our algorithm and the naive algorithm to provide a comparison between the two. Sensitivity analysis is performed for each metric by determining its variability within a given population distribution.
396
+
397
+ We generate clumped population distributions containing different numbers of clumps (5, 10, 20, and 40) to test the network. A random population distribution is used as well to see how our network deals with a dispersed population. We test the algorithm on this variety of distributions to account for the variability of population distributions across different locations. For each of these 5 types of population distributions, we generate a data set of 5 test populations, allowing us to measure the variability of our algorithm within a given population distribution.
398
+
399
+ # 5.2 Number of Repeaters
400
+
401
+ The number of repeaters used in the naive algorithm is simply determined by the number that it takes to completely cover the region. In our algorithm, the number of repeaters initially placed is based on the number of clusters used in the k-means algorithm. More repeaters can also be added to connect disconnected regions of the graph. In general, though, we see an increase in the number of repeaters placed as the number of initial clusters increases.
402
+
403
+ # 5.3 Coverage
404
+
405
+ The naive algorithm is guaranteed to have $100\%$ coverage by construction: it covers all of the people regardless of how many repeaters
406
+
407
+ <table><tr><td rowspan="2"># of clusters</td><td colspan="5">Population Distribution</td></tr><tr><td>Random</td><td>5 clumps</td><td>10 clumps</td><td>20 clumps</td><td>40 clumps</td></tr><tr><td>5</td><td>382.20 ± 25.11</td><td>18.40 ± 23.38</td><td>84.60 ± 56.18</td><td>183.00 ± 52.65</td><td>294.40 ± 51.93</td></tr><tr><td>10</td><td>88.80 ± 24.05</td><td>5.40 ± 1.14</td><td>8.80 ± 1.92</td><td>46.60 ± 11.50</td><td>50.60 ± 28.48</td></tr><tr><td>15</td><td>18.40 ± 8.17</td><td>5.80 ± 1.92</td><td>16.40 ± 20.49</td><td>17.60 ± 5.18</td><td>21.80 ± 4.09</td></tr><tr><td>20</td><td>10.20 ± 2.17</td><td>5.80 ± 2.17</td><td>7.80 ± 3.49</td><td>10.20 ± 2.28</td><td>12.20 ± 1.48</td></tr><tr><td>25</td><td>7.80 ± 1.64</td><td>6.00 ± 1.58</td><td>6.60 ± 0.89</td><td>10.80 ± 4.09</td><td>9.60 ± 2.61</td></tr><tr><td>Naive</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td><td>0.00 ± 0.00</td></tr></table>
408
+ Table 2: Mean number of people who were not covered by the network for different population distributions
409
+
410
+ that it must place. Our k-means clustering algorithm shows a decrease in the population left uncovered as the number of clusters increases. This corresponds to more repeaters being placed in our preliminary construction of the network, and hence to a larger region being covered by our network.
411
+
412
+ # 5.4 Transmission Capacity
413
+
414
+ The transmission capacity is measured through a simulation of potential requests made across the network. This metric can vary considerably from request to request, so the simulation is run 10 times to obtain an average value. We perform sensitivity analysis by measuring the standard deviation of the results across the 10 simulations. This is done for each population distribution to check whether the metric remains effective across all distributions.
415
416
+
417
+ <table><tr><td rowspan="2"></td><td colspan="5">Population Distribution</td></tr><tr><td>Random</td><td>5 clumps</td><td>10 clumps</td><td>20 clumps</td><td>40 clumps</td></tr><tr><td>K-means</td><td>0.0033</td><td>0.0144</td><td>0.0104</td><td>0.0104</td><td>0.0052</td></tr><tr><td>Naive</td><td>0.0065</td><td>0.0076</td><td>0.0083</td><td>0.0076</td><td>0.0075</td></tr></table>
418
+
419
+ Table 3: Standard deviation of the transmission capacity metric for different population distributions
420
+
421
+ The standard deviation of the transmission capacity was $\approx 1\%$ for most of the population distributions. For the naive algorithm, the standard deviation was approximately uniform across distributions, illustrating that the metric
422
+
423
+ <table><tr><td rowspan="2"># of clusters</td><td colspan="5">Population Distribution</td></tr><tr><td>Random</td><td>5 clumps</td><td>10 clumps</td><td>20 clumps</td><td>40 clumps</td></tr><tr><td>5</td><td>0.621 ± 0.022</td><td>0.715 ± 0.102</td><td>0.803 ± 0.047</td><td>0.780 ± 0.050</td><td>0.687 ± 0.033</td></tr><tr><td>10</td><td>0.906 ± 0.029</td><td>0.850 ± 0.046</td><td>0.951 ± 0.030</td><td>0.921 ± 0.019</td><td>0.943 ± 0.029</td></tr><tr><td>15</td><td>0.977 ± 0.010</td><td>0.830 ± 0.074</td><td>0.938 ± 0.058</td><td>0.967 ± 0.010</td><td>0.974 ± 0.004</td></tr><tr><td>20</td><td>0.984 ± 0.004</td><td>0.828 ± 0.073</td><td>0.950 ± 0.034</td><td>0.967 ± 0.014</td><td>0.981 ± 0.003</td></tr><tr><td>25</td><td>0.985 ± 0.003</td><td>0.814 ± 0.042</td><td>0.900 ± 0.037</td><td>0.954 ± 0.011</td><td>0.980 ± 0.004</td></tr><tr><td>Naive</td><td>0.221 ± 0.002</td><td>0.313 ± 0.016</td><td>0.274 ± 0.008</td><td>0.252 ± 0.010</td><td>0.236 ± 0.002</td></tr></table>
424
+
425
+ Table 4: Percentage of transmission requests that were processed by the clustering network and naive network for different population distributions
426
+
427
+ was valid across different population distributions. For our model using the k-means algorithm, the standard deviation was somewhat larger for the 5-clump set of populations. However, even this had a standard deviation of $< 1.5\%$ , which shows that our metric for transmission capacity was consistent.
428
+
429
+ The results for the transmission capacity show a significant improvement in our algorithm's ability to process requests over the network in comparison to the naive approach. In addition, we notice that as we increase the number of initial clusters, the transmission capacity of our network increases. This creates a trade-off between the number of repeaters and the transmission capacity, and allows a balance to be chosen between the two based on the purpose of the network.
430
+
431
+ # 6 Discussion
432
+
433
+ # 6.1 Effects of 10,000 Users
434
+
435
+ In addition to developing a method for determining the minimum number of repeaters necessary to accommodate 1,000 simultaneous users, we were also asked how our solution would change if there were instead 10,000 users.
436
+
437
+ Our algorithm can easily be run to generate networks to serve 10,000 users. However, the quality of service these networks provide is uninspiring. In general, our algorithm's networks were only able to successfully handle about half of the 10,000 transmissions attempted over 200 units of
438
+
439
+ time for any number of repeaters between 15 and 20. While this is significantly better than the naive algorithm which only handles around 700 of 10,000 requests, it is obviously not ideal.
440
+
441
+ Increasing the number of users by an order of magnitude increases network traffic by an order of magnitude as well, at least according to our assumptions. As a result, each repeater is roughly ten times as likely to be tied up in each timestep. This leads to many transmissions being blocked, especially transmissions through the hub of the network.
442
+
443
+ We have developed a number of approaches to dealing with this problem. Our modeling of user transmissions indicates that approximately one third of user communications are between users that are only one repeater apart (i.e. within range of the same repeater). These sorts of communications tie up that repeater, blocking transmissions through that part of the network. If such communications could be routed in a way that did not block the repeater, it follows that the network's capacity would increase significantly.
444
+
445
+ One strategy that we have considered adding to our algorithm is the ability to recursively add subnetworks to our network (using the same process as our main algorithm) at nodes where user density, and therefore demand, is high. These subnetworks are composed of repeaters with frequencies incompatible with the frequencies of the adjacent main node, eliminating the possibility of interference (see Figure 5 below). Unfortunately, time constraints have prevented us from finishing this feature or conducting an analysis of its effects.
446
+
447
+ Even though the idea has not been fully developed, we illustrate how a subnetwork would look in the figure below. The addition of the 3 repeaters occurs near the central hub, which, in this example, we assume is an area of high population density. By adding this subnetwork, we are assuming that the benefit of the increased transmission capacity outweighs the cost of adding the extra repeaters. A nice feature of this structure is that it would allow cost parameters to be placed on the repeaters and on transmission capacity, to determine in which situations placing an extra repeater would be beneficial.
448
+
449
+ ![](images/a57650d88873eede89d29b7c49975399bbcd1a3ff2176c05aab770cdb720c3d3.jpg)
450
+ (a) No subnetwork
451
+
452
+ ![](images/e920ab6b04aabb9c07f77111a6bba623d64f74fb97965ef274466545d2edb56e.jpg)
453
+ (b) With subnetwork
454
+ Figure 5: A typical network generated by our algorithm with a localized subnetwork included
455
+
456
+ One of the weaknesses of our network design is its dependence on a central hub, which in many cases serves as a chokepoint limiting the flow of cross-network communications. If more private lines were available for
457
+
458
+ use, we could use them to add multiple hubs to our network, similar to the way IP addresses are used to route traffic to different hubs on the Internet.
459
+
460
+ # 6.2 Effects of Mountainous Terrain
461
+
462
+ Currently our algorithm assumes a flat terrain devoid of features that might impact the range of repeaters or mobile devices. However, we feel that our algorithm could be adapted fairly readily to deal with disruptions in line-of-sight propagation caused by mountainous areas.
463
+
464
+ The main complication introduced by mountainous areas is non-symmetrical maximum repeater ranges that vary with position. This would mainly affect the way in which we use clustering to place our initial repeaters and our method for connecting the disconnected components of our network. The other parts of our algorithm, specifically the use of private lines and the assignment of channels, depend only on the adjacencies of repeaters in our network and would not be affected.
465
+
466
+ While more experimentation is needed, we feel that the use of clustering would still serve well as an initial method for determining where to place repeaters. However, we would then attempt to optimize the position of the repeaters to encompass more users through an optimization technique such as simulated annealing.
467
+
468
+ A similar approach could be taken to connect disconnected network components. The straight-line minimum spanning tree approach could be used as a starting point for the placement of additional repeaters, with optimization then applied to try to maximize the reduction in the distance between disconnected components.
469
+
470
+ Obviously, this additional optimization would come with significant runtime penalties.
471
+
472
+ We feel that mountainous terrain could even decrease the number of repeaters needed, since repeaters could be placed on elevated terrain to achieve increased range.
473
+
474
+ # 7 Conclusion
475
+
476
+ Overall, the proposed algorithm tends to do a pretty good job of producing a network that passes the "eyeball test" from a heuristic viewpoint. That is, our algorithm proposes a solution that mimics the overall shape of the population distribution, covers the vast majority of potential users and uses few obviously wasteful repeater nodes.
477
+
478
+ While the algorithm performs well in many situations, we run into problems when the number of clusters used to generate our initial repeaters is too large. Our network is heavily dependent upon using privacy lines to control the flow of signal traffic. Because there are a limited number of privacy lines available to us, this puts a hard cap - repeaters in at most about 25 distinct locations - on the number of repeaters that can be used. As a result, there is also an effective cap on the number of initial clusters that we can use to generate our initial repeater locations. Once we get past about 20, we no longer have enough privacy lines to implement the signal-control scheme outlined in section 4.4.
479
+
480
+ Though the cap on the number of initial clusters reduces our options when it comes to designing a network, it also greatly diminishes our problem space. Because of the limited number of options, we could very easily search across all possible numbers of initial clusters (in increasing order) to find the minimum value that gives us a network satisfying our transmission capacity requirements for a given population distribution. This approach is not as direct as searching across the number of available nodes, but it should give us the best possible solution generated by our approach for a particular population distribution.
481
+
482
+ When our algorithm does receive a good value for the number of clusters in the population, the data shows that the network it creates does an excellent job of handling traffic under the assumptions that we have made. For example, for the case of 15 initial clusters above, the algorithm created a network that covers over $97\%$ of the population and is able to transmit $\approx 95\%$ of the requests on average. This performance corresponds to a network that typically contains only about 40 repeaters. Overall, when used properly, our algorithm can produce very efficient, effective networks. We feel that it could serve as a useful tool in VHF radio network planning.
483
+
484
+ # 8 Future Work
485
+
486
+ # 8.1 Limitations of our model
487
+
488
+ Though we feel our algorithm is quite strong in general, it does have a number of limitations. For one thing, $k$ -means clustering is a non-deterministic algorithm. It simply finds a "good" way to cluster the data and runs with it. Because of this, for a given population distribution, our algorithm doesn't return the same result each time. This isn't the most reassuring when we're supposed to be finding the "optimal" solution.
489
+
490
+ In addition to the concern above, the method of using privacy lines to control where a signal gets rebroadcast contains a potential complication in certain circumstances (recall Figure 3 above). In real-world applications, repeaters take a bit of time to process a signal before re-broadcasting it. When both the 5/11 and 4/12 towers pick up the signal from 6/13, there is a very real chance that the two repeaters will process the signal at slightly different speeds. If this happens, the hub would receive two identical signals, slightly offset, which would be less than ideal. Using the same brand of repeater throughout the network might help mitigate the impact of this issue. In addition, with enough private lines (though admittedly, you'd need a lot), this effect can be completely addressed.
491
+
492
+ The limited number of private lines imposes a couple of very important limitations on our model. First of all, it introduces a bound on the number of repeaters that can be used in the overall network. Because each pair of repeaters needs two distinct private lines, and private lines must be used to specify both local communication and communication across the entire network, the overall network can have at most 25 distinct repeater locations. In addition, we have few tools for limiting the demand on key repeaters such as the hub. As mentioned above, we use privacy lines the way the Internet uses IP addresses. With a larger number of them at our disposal, we could create a larger (if needed), more flexible network that could route much of its flow around the hub.
493
+
494
+ The algorithm as currently implemented can really only be used for optimizing with respect to the number of repeaters. Someone who is considering this problem may wish to, for example, enforce a steep penalty for not including a user in the network. Presently, the model doesn't allow for this. More generally, the model doesn't allow the user to weigh the cost of including an extra repeater versus the other benefits that extra repeater would bring.
495
+
496
+ Another area that the model fails to address is any kind of variation in atmospheric conditions or weather that could reduce the broadcast range of our repeaters. This is a much larger concern for chains of repeaters, because if the range of any is sufficiently reduced it could break the chain. Though it is not explicitly addressed, the risk of this concern can be mitigated by building a buffer into the maximum range of our broadcast towers (for example, assuming a 7-8 mile broadcast radius when we actually have a 9-mile radius).
497
+
498
+ Finally, the code currently doesn't allow the user to exactly specify the number of repeaters used for a given network—rather, this is roughly controlled by the number of clusters given as input to the $k$ -means clustering algorithm. The ability to specify this exactly could be added relatively easily via a post-processing step in which the repeaters whose removal would least reduce network quality are removed until the desired number of repeaters is reached.
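Such a post-processing step might look like the following sketch (hypothetical Python we have not implemented; `quality` stands in for whatever scoring function - e.g. simulated transmission capacity - the user supplies):

```python
def prune_repeaters(repeaters, target_count, quality):
    """Greedily drop the repeater whose removal hurts the
    (user-supplied) quality score least, until target_count remain."""
    repeaters = list(repeaters)
    while len(repeaters) > target_count:
        # Evaluate every single-repeater removal; keep the best result.
        repeaters = max((repeaters[:i] + repeaters[i + 1:]
                         for i in range(len(repeaters))), key=quality)
    return repeaters
```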
501
+
502
+ # 8.2 Ways to improve the model
503
+
504
+ In addition to the issues raised in our "Discussion" section, there were several other considerations that we did not have time to fully address. First, we could extend our model to take into account the "blind interference" phenomenon discussed in the assumptions section. Second, in the real world, repeaters do not immediately retransmit radio messages; ideally we would take this nontrivial switching time into account in a more rigorous fashion. Third, we assumed throughout that users make equal demands on the system, something that simply isn't true in the real world. Including the option for a user to have higher demand than others - something that could be simulated in our model by placing "multiple users" at the location of a heavy user - would increase the value and accuracy of the model. Finally, it would be nice if our heavily-stressed hub repeater were located in a less populous area to isolate it from local demand.
505
+
506
+ # A Appendix
507
+
508
+ ![](images/6807cf8a91bd680d092c819dd33a7591578b9c6fb61744034bc24069314ca7c0.jpg)
509
+ (a) Random Population Distribution
510
+
511
+ ![](images/ccb160b0257fe71d303964e0816f574ccc4a1ee1fa135603f3d4c79664b5c515.jpg)
512
+ (b) Random Population Distribution
513
+
514
+ ![](images/7f700eb6755ac55c8427cdc30267ea6d31c3d8da02aec9a4b111355363ac2059.jpg)
515
+ (c) Clumped Distribution - 5 Clumps
516
+
517
+ ![](images/8d71dcc83e1fb9162e5a191128fd9d54349db5fe56c41a5464202943f7641d7f.jpg)
518
+ (d) Clumped Distribution - 5 Clumps
519
+ Figure 6: Examples of the network that our algorithm constructs on various population distributions
520
+
521
+ ![](images/a2706ae3caddd4e2b538a4220d6cea9fe2e1a070c75c92ad644d2826e21c1e8b.jpg)
522
+ (a) Clumped Distribution - 10 Clumps
523
+
524
+ ![](images/5ef8620d362987aca3c47f93a0bbcf9606c0ad61cd8a5813fa6b94ee8c3c3c65.jpg)
525
+ (b) Clumped Distribution - 10 Clumps
526
+
527
+ ![](images/17a0dfad10d5c864190165f8a6415c8c54d17588031337043871c6c80265db9d.jpg)
528
+ (c) Clumped Distribution - 20 Clumps
529
+
530
+ ![](images/047216cd72d7edf2e74ecb0f1f2ff650ca9222c2f0b086b39a70d27c540afdf1.jpg)
531
+ (d) Clumped Distribution - 20 Clumps
532
+ Figure 7: More examples of the network that our algorithm constructs on various population distributions
533
+
534
+ ![](images/283b76e8efd1b15e3c55c244b9ef271bd622521844948a227b141dfc8adb2ccf.jpg)
535
+ (a) Clumped Distribution - 40 Clumps
536
+
537
+ ![](images/e72159453f107231762896b7972629f0778511c77a051b2c97914f5ce8b281a4.jpg)
538
+ (b) Clumped Distribution - 40 Clumps
539
+ Figure 8: A few examples of the network that our algorithm constructs on various population distributions
540
+
541
+ # References
542
+
543
+ [1] Socioeconomic data and applications center. "http://sedac.ciesin.columbia.edu/gpw/". Viewed on 2011 February 13.
544
+ [2] VHF radios: why you need one. "www.boatus.com/boattech/vhf.htm", 2007. Viewed on 2011 February 12.
545
+ [3] Information for the amateur radio enthusiast. "www.hamdata.com/fccinfo.html", 2011. Viewed on 2011 February 11.
546
+ [4] The Pacific Crest Company. VHF/UHF Range Calculations. "http://www.mertind.com/argentina/Soporte%20technico/Instrumentos/PDL/PDL%20UHF-VHF%20RANGE%20CALCULATIONS.pdf". Viewed on 2011 February 11.
547
+ [5] Otis Dudley Duncan. The measurement of population distribution. Population Studies, 11(1):pp. 27-45, 1957.
548
+ [6] The American Radio Relay League. US amateur radio frequency allocations. "www.arrl.org/frequency-allocations". Viewed on 2011 February 11.
549
+ [7] David MacKay. Information theory, inference and learning algorithms, chapter 20. "http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/284.292.pdf", 2003. Viewed on 2011 February 11.
550
+ [8] The National Radio Astronomy Observatory. How radio waves are produced. "http://www.nrao.edu/index.php/learn/radioastronomy/radiowaves", 2002. Viewed on 2011 February 11.
551
+ [9] John Owen. VHF/UHF TV Line of Sight Calculator. "http://www.naval.com/sight/index.htm", 2003. Viewed on 2011 February 11.
552
+ [10] The International Amateur Radio Union. Ethics and operating procedures. "http://www.hamradio secrets.com/support-files/ham-radio-operator-guidelines.pdf", 2009. Viewed on 2011 February 11.
MCM/2011/B/11759/11759.md ADDED
@@ -0,0 +1,967 @@
1
+ # Abstract
2
+
3
+ We propose and evaluate two models to determine the minimum number of very high frequency (VHF) repeaters necessary to accommodate a given geographic distribution of users. By utilizing cluster analysis, each model uniquely designs a network of open and "continuous tone-coded squelch system" (CTCSS) repeaters to simultaneously accommodate the desired user load. In addition, the models are mindful of connectivity issues and seek to establish the best connectivity for the set of users. Through the comparison of these two models, we seek to establish the minimum number of repeaters required.
4
+
5
+ In the "Bender" Snaking Model, a network is established by creating a "snake" or chain of open repeaters across the area. The model determines the most effective placement for each open repeater and is mindful to maintain channel availability cy placing CTCSS repeaters when necessary.
6
+
7
+ In the Branching Model, a backbone network is established between the two most populous areas and branch networks are subsequently added to the existing network. After the branched network has been completed, CTCSS lines are placed to both mitigate channel saturation and establish dedicated long-distance lines. This model seeks to create the best connectivity for the users in the area with the minimum number of repeaters used.
8
+
9
+ We test our models on two likely area distributions: a city/suburban-like user distribution and a rural-like user distribution. We compare the results and propose the minimum number of repeaters necessary for each scenario. By comparing the two models, we are able to decide whether that number is realistic and what benefits a different network design may offer users.
10
+
11
+ Finally, we stress-test our models with 10,000 users in the same area and discuss the defects of line-of-sight propagation caused by mountains in the area.
12
+
13
+ # Optimizing VHF Repeater Coordination Using Cluster Analysis
14
+
15
+ MCM Competition Problem B
16
+
17
+ Control Number: 11759
18
+
19
+ # Contents
20
+
21
+ 1 Problem Restatement 4
22
+ 2Assumptions and Justifications 4
23
+ 3 Available Technology 5
24
+
25
+ 3.1 Repeaters 5
26
+ 3.2 Continuous Tone-Coded Squelch System 6
27
+
28
+ 4 The "Bender" Snake Model 6
29
+
30
+ 4.1 Description 7
31
+ 4.2 Mathematical Interpretation 7
32
+
33
+ 5 The Branching Model 11
34
+
35
+ 5.1 Description 11
36
+ 5.2 Mathematical Interpretation 12
37
+
38
+ 6 Model Comparison Summary 14
39
+
40
+ 7 Case Studies 15
41
+
42
+ 7.1 City Distribution 15
43
+ 7.2 Rural Distribution 19
44
+
45
+ 8 10,000 Simultaneous Users 22
46
+ 9 Mountainous Terrain 23
47
+ 10 Sensitivity to Parameters 24
48
+ 11 Strengths & Weaknesses 25
49
+ 12 Conclusion 26
50
+ 13 Appendix A: Full-Page Plots 29
51
+ 14 Appendix B: Source Code 36
52
+
53
+ # 1 Problem Restatement
54
+
55
+ Without the aid of repeaters, VHF radio would only permit low-power users to communicate when direct line-of-sight could be established between the transmitter and receiver. Repeaters alleviate this restriction by amplifying and rebroadcasting these signals in order to make them available to a larger geographical area. By accounting for mutual repeater interference due to geographical proximity and utilizing a "continuous tone-coded squelch system" (CTCSS) or "private line" (PL), we seek to find the minimal number of repeaters required to support 1,000 simultaneous users. The users inhabit a flat, circular area of radius 40 miles and are permitted to broadcast between 145 and 148 MHz. The repeater's transmit frequency is $600~\mathrm{kHz}$ above or below the received frequency, and 54 different CTCSS tones are available. We then examine how our model adapts to accommodate 10,000 users and consider the potential defects of line-of-sight propagation in the presence of mountainous areas.
56
+
57
+ # 2 Assumptions and Justifications
58
+
59
+ - Geometry is Euclidean. Since the area is given to be flat, we assume that Euclidean geometry may be used.
60
+ - The system is closed. We assume that all signals originate from within our system. We also assume that there will be no outside interference in the system.
61
+ - Antennas are isotropic. Because the effective transmission area of each user and each antenna is relatively small compared to the whole area, we assume that antennas operate isotropically. This is a fundamental assumption made in modern network design [4].
62
+ - Each user is a "low-power" user. Typical VHF radio transmitters are effective across small towns without repeater support [8]. Since the area being considered is over 5000 square miles, the area of a small town is negligible and we can assume that users will not be able to communicate with one another without repeater support, i.e. each user is a low-power user.
63
+ - Each user "plays nice." In order to avoid users purposefully or accidentally drowning out the signals of others, we require that all users have the same equipment, e.g. all users broadcast at the same intensity and all users have the same range limitations. As a result, the requirements to connect a single user will be static. Specifically, each users will have to be within some fixed maximum distance from a repeater to connect to the network.
64
+ - There are more than 1,000 (or 10,000) potential users. It would be unrealistic for any company or group of individuals to place an appreciable number of repeaters in
65
+
66
+ order to accommodate one individual. We assume that the number of potential users in the area exceeds the number that must be simultaneously accommodated.
67
+
68
+ - The geographic distribution of users is known. The geographical distribution of users must be a known constraint before repeater requirements can be determined. The demand for network connectivity will not exist unless there is a preexisting community present.
69
+ - Users and repeaters are distinct entities. In reality, most VHF radio repeaters are maintained by individual users or a localized group of users. These repeaters are openly available and are typically not used as mobile stations [9]. Thus we assume that users are not broadcasting from repeaters and that they may be treated as two distinct entities.
70
+ - VHF signals are not affected by physical entities in the area. The physical presence of users and antennas will not interfere with the propagation of waves in the VHF spectrum. However, land features such as mountains will affect propagation [11].
71
+
72
+ # 3 Available Technology
73
+
74
+ The problem statement makes two different pieces of technology available: repeaters and continuous tone-coded squelch systems. We will now outline these technologies.
75
+
76
+ # 3.1 Repeaters
77
+
78
+ Repeaters are stationary devices that pick up weak signals (i.e. signals from users), amplify them, and retransmit them on a different frequency. This allows users to circumvent the line-of-sight limitation of VHF and broadcast their signal greater distances. To avoid interference with the incoming (weak) signal, the repeater rebroadcasts the new (strong) signal $600\mathrm{kHz}$ above or below the received signal. To avoid repeaters interfering with one another the Metropolitan Coordination Association states that repeaters must be at least 10 miles apart [1]. Overlapping repeater "zones" will allow signals to pass from one repeater to another, allowing signals to travel significant distances.
79
+
80
+ Note that the range of a repeater is directly correlated to its height. The line-of-sight calculation to determine the effective distance is given by distance in miles $= \sqrt{1.5 A_f}$ , where $A_{f}$ is the height of the antenna in feet [11].
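+ Under this rule, the 150 ft towers used later in our case studies give $\sqrt{1.5 \times 150} = 15$ miles of range, matching our choice of $d_h = 15$ mi. A minimal sketch (the function name is ours):

```python
import math

def line_of_sight_miles(antenna_height_ft: float) -> float:
    """Effective line-of-sight range in miles for an antenna of the
    given height in feet, using d = sqrt(1.5 * A_f)."""
    return math.sqrt(1.5 * antenna_height_ft)

# The 150 ft towers from our case studies reach 15 miles.
print(line_of_sight_miles(150.0))  # -> 15.0
```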
81
+
82
+ # 3.2 Continuous Tone-Coded Squelch System
83
+
84
+ Continuous Tone-Coded Squelch Systems (CTCSS), often called Private Lines (PL), further mitigate interference by associating a subaudible tone with signals being received/transmitted by the repeater. In order to communicate through a private line repeater, users must also broadcast this tone. This allows users in a densely populated area to communicate on the same channel with minimal interference. Private line repeaters are not necessarily closed [2], as these CTCSS tones are often published. Since it is our intention to increase the number of available channels, CTCSS tones will be common knowledge to all users.
85
+
86
+ # 4 The "Bender" Snake Model
87
+
88
+ The "Bender<sup>1</sup>" Snake Model seeks to maximize the number of connected users by efficiently creating a snake-like chain of open repeaters across the area.
89
+
90
+ ![](images/36eb439f36825f7d7d3657fb770a05b431df2661ab6b2f356dbc995376ecbb10.jpg)
91
+ Figure 1 describes this model without reclustering.
92
+ Figure 1: Flow Chart for "Bender" Snake Model
93
+
94
+ # 4.1 Description
95
+
96
+ Using $k$ -means clustering[10], we establish clusters of users to determine the optimal initial open repeater placement. Optimal placement is determined by the greatest number of users we can cover when placing a single open repeater. This may also be referred to as "scoring" a cluster where a higher score correlates to coverage for more users. A second open repeater is optimally placed along the perimeter of the newly established network area. This process is repeated iteratively by placing a new open repeater along the perimeter of the most recently established open repeater. The placement of the second open repeater is always in the direction of a cluster point, i.e. other high density locations. This ensures that a network is established between the most users using the least number of repeaters. The model is mindful of ensuring that enough channels are available to users by placing CTCSS repeaters accordingly.
97
+
98
+ By design, network growth tends toward establishing connectivity near and between cluster points. As more users around each cluster are accommodated, the score associated with their respective cluster should reflect that change. To account for this, cluster scores may be reevaluated to encourage intelligent networking in the model. This is known as reclustering.
99
+
100
+ # 4.2 Mathematical Interpretation
101
+
102
+ The model is designed to be highly versatile and supports a number of different parameters that may be changed based on a given situation. They are:
103
+
104
+ <table><tr><td>Parameter</td><td>Description</td></tr><tr><td>n</td><td>Number of users within 40 miles</td></tr><tr><td>k</td><td>Number of k-means cluster points</td></tr><tr><td>ds</td><td>Maximum distance for user-to-repeater communication</td></tr><tr><td>hr</td><td>Height of repeater towers</td></tr><tr><td>dh</td><td>Repeater output distance</td></tr><tr><td>Δf</td><td>Frequency separation / channel width</td></tr></table>
105
+
106
+ Before we begin, we must define a few additional terms. Let $N_{c}$ be the number of people with network connectivity (i.e. within range of an open repeater) and let $N_{f}$ be the number of people with access to an available frequency range. Let $O$ denote the number of available channels. Initially, we will have $N_{c} = 0$ , $N_{f} = 0$ and $O = 0$ .
107
+
108
+ So let $n$ be the number of users within the 40 mile radius. For user $i$ with $1 \leq i \leq n$ , let $(x_{i}, y_{i})$ denote the user's location in Cartesian coordinates. We arrange these coordinates into an $n \times 2$ matrix $M$ where $M_{i,1} = x_i$ and $M_{i,2} = y_i$ .
109
+
110
+ For each user, determine the number of users within the separation distance $d_{s}$ . Recall that we are using Euclidean geometry, so when considering the $j$ th user, if
111
+
112
+ $$
113
+ d_{s} \geq \sqrt{(M_{j,1} - M_{k,1})^{2} + (M_{j,2} - M_{k,2})^{2}}
114
+ $$
115
+
116
+ then the $k$ th user is within range of the $j$ th user. We denote the user with the greatest number of additional users in range as the $p$ th user.
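+ This initial-placement scan can be sketched directly from the definition above (a brute-force $O(n^2)$ pass; names are ours):

```python
import math

def best_initial_site(users, d_s):
    """Return the index p of the user with the greatest number of
    other users within the separation distance d_s."""
    def in_range(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1]) <= d_s
    counts = [sum(1 for j, v in enumerate(users) if j != i and in_range(u, v))
              for i, u in enumerate(users)]
    return max(range(len(users)), key=counts.__getitem__)

# Three users bunched together and one isolated user: the first
# repeater goes on one of the bunched users.
users = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (100.0, 100.0)]
print(best_initial_site(users, d_s=10.0))  # -> 0
```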
117
+
118
+ We place an open repeater at the location of the $p$ th user and set $R_{1} = (M_{p,1}, M_{p,2})$ . The allowable frequency range is 3 MHz so there will be $3 / \Delta f$ channels available. The action of placing the first repeater makes this many channels available. Now $O = 3 / \Delta f$ .
119
+
120
+ Let $N_{1}$ be the number of people who are within range of our first repeater. We must update our $N_{c}$ and $N_{f}$ values accordingly. Thus
121
+
122
+ $$
123
+ N_{c} = N_{1} \quad \text{and} \quad N_{f} = \min(N_{c}, O)
124
+ $$
125
+
126
+ Now we calculate the metric by which this model determines Private Line (CTCSS) repeater placement. Let
127
+
128
+ $$
129
+ D = \max(N_{c} - O, 0)
130
+ $$
131
+
132
+ be the deficit of available channels (i.e. the number of users who do not have access to a repeater channel). We update our matrix $M$ by removing the users who are within range of the repeater. Now $M$ is a $(n - N_c) \times 2$ matrix. From here, if $D > 0$ we add CTCSS repeaters to mitigate this deficit and if $D = 0$ , we continue to place open repeaters and expand the network.
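+ For concreteness, with the 3 MHz band and the $\Delta f = 0.025$ MHz channel width used in our case studies, each line contributes $3 / 0.025 = 120$ channels. The bookkeeping amounts to:

```python
def channels(delta_f_mhz, band_mhz=3.0):
    """Channels provided by one line in a band of width band_mhz."""
    return round(band_mhz / delta_f_mhz)

def deficit(n_connected, n_channels):
    """D = max(N_c - O, 0): users in range of a repeater but left
    without an available channel."""
    return max(n_connected - n_channels, 0)

O = channels(0.025)         # 120 channels per line
print(O, deficit(150, O))   # -> 120 30
```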
133
+
134
+ If $D > 0$ , we will add a Private (CTCSS) Line. We calculate the optimal angle to place the CTCSS line a distance of $d_p$ from our first repeater. We then determine the location to place the CTCSS repeater.
135
+
136
+ The explicit CTCSS algorithm is given below.
137
+
138
+ Data: Previously placed open repeater location, $R_{i-1}$
139
+
140
+ Result: The optimal location to place a new CTCSS Line $R_{i} = (x_{i},y_{i})$
141
+
142
+ Let $P$ be a partition of [0, 360] (in our case studies $|P| = 360$ );
143
+
144
+ for $\theta \in P$ do
145
+
146
+ $$
147
+ R _ {\theta} = \left(x _ {\theta}, y _ {\theta}\right) = R _ {i - 1} + \left(d _ {p} \cos (\theta), d _ {p} \sin (\theta)\right);
148
+ $$
149
+
150
+ for $j \in [1, n - N_c]$ do
151
+
152
+ Let $x_{j} = M_{j,1}$ and $y_{j} = M_{j,2}$
153
+
154
+ if $\sqrt{(x_{\theta} - x_j)^2 + (y_{\theta} - y_j)^2}\leq d_s$ then
155
+
156
+ $u_{\theta} = u_{\theta} + 1;$
157
+
158
+ end
159
+
160
+ end
161
+
162
+ end
163
+
164
+ Let $\theta_c$ be the $\theta$ for which $u_{\theta}$ is maximum. Then $R_i = R_{i-1} + (d_p \cos(\theta_c), d_p \sin(\theta_c))$ is the optimal location for the new repeater.
165
+
166
+ # Algorithm 1: Finding CTCSS Repeater Location
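+ A compact rendering of Algorithm 1, sweeping a 360-point partition of the circle as in our case studies (function and variable names are ours):

```python
import math

def place_ctcss(prev, users, d_p, d_s, steps=360):
    """Try candidate locations a distance d_p from the previous
    repeater at each angle theta and keep the one covering the most
    uncovered users."""
    best_loc, best_count = None, -1
    for k in range(steps):
        theta = 2.0 * math.pi * k / steps
        cand = (prev[0] + d_p * math.cos(theta),
                prev[1] + d_p * math.sin(theta))
        count = sum(1 for (x, y) in users
                    if math.hypot(cand[0] - x, cand[1] - y) <= d_s)
        if count > best_count:
            best_loc, best_count = cand, count
    return best_loc, best_count

# Uncovered users clustered due east of the previous repeater:
# the sweep picks the theta = 0 candidate and covers all three.
users = [(14.0, 0.5), (15.0, -0.5), (16.0, 0.0)]
loc, covered = place_ctcss((0.0, 0.0), users, d_p=15.0, d_s=5.0)
print(round(loc[0], 3), round(loc[1], 3), covered)  # -> 15.0 0.0 3
```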
167
+
168
+ The addition of a CTCSS line corresponds to an additional $3 / \Delta f$ channels. Therefore our new value for $O$ is $O = O' + 3 / \Delta f$ where $O'$ is the previous value of $O$ . Now we recalculate $N_{c}$ , $N_{f}$ , and our deficit $D$ using our updated matrix $M$ (which does not include points already covered by repeaters).
169
+
170
+ If $D = 0$ then all users who desire access have access. To increase the number of supported users, we will add an open repeater to expand our network. We use "k-means" cluster analysis to determine the most densely populated areas. After the cluster points are determined, we collect their locations and let $\{c_1, \dots, c_k\}$ be that collection of coordinates. The model snakes around the map by adding repeaters to include more users in the network.
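+ Any standard $k$-means routine serves for this step; a plain Lloyd's-algorithm sketch in pure Python (a stand-in for whichever library implementation is used):

```python
import math
import random

def k_means(points, k, iters=50, seed=0):
    """Lloyd's algorithm: alternately assign 2-D points to their
    nearest center and move each center to its group's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Two well-separated "towns": the centers land near (0, 0) and (30, 30).
pts = [(0.1, 0.0), (-0.1, 0.2), (0.0, -0.2),
       (30.0, 30.1), (29.9, 29.8), (30.1, 30.0)]
centers = sorted(k_means(pts, 2))
print([(round(x, 2), round(y, 2)) for x, y in centers])
```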
171
+
172
+ The explicit open repeater placement algorithm is given below.
173
+
174
+ Data: Previously placed open repeater location, $R_{i-1}$
175
+
176
+ Result: The optimal location to place a new open repeater $R_{i} = (x_{i},y_{i})$
177
+
178
+ for $t\in [1,k]$ do
179
+
180
+ Let $\theta_{t}$ be the angle from the current repeater $R_{i - 1}$ to $c_{t}$
181
+
182
+ $$
183
+ R _ {t} = \left(x _ {t}, y _ {t}\right) = R _ {i - 1} + \left(d _ {h} \cos \left(\theta_ {t}\right), d _ {h} \sin \left(\theta_ {t}\right)\right);
184
+ $$
185
+
186
+ for $j\in [1,n - N_c]$ do
187
+
188
+ Let $x_{j} = M_{j,1}$ and $y_{j} = M_{j,2}$ ;
189
+
190
+ if $\sqrt{(x_t - x_j)^2 + (y_t - y_j)^2} \leq d_s$ then
191
+
192
+ $u_{t} = u_{t} + 1$
193
+
194
+ end
195
+
196
+ end
197
+
198
+ end
199
+
200
+ Let $t_c$ be the $t$ for which $u_t$ is maximum. Then $R_i = R_{i-1} + (d_h \cos(\theta_{t_c}), d_h \sin(\theta_{t_c}))$ is the optimal location for the new repeater.
201
+
202
+ # Algorithm 2: Finding Open Repeater Location
203
+
204
+ The action of adding an open line did not add any new channels. We have, however, added new users to the network and must recalculate $N_{c}$ , $N_{f}$ , and our deficit $D$ using our updated matrix $M$ (which does not include points already covered by repeaters). If $D > 0$ , then we apply Algorithm 1 again and if $D = 0$ , we apply Algorithm 2.
205
+
206
+ We repeat this process until $N_f \geq 1000$ . This would mean that there are at least 1,000 simultaneously supported users on our network.
207
+
208
+ # 5 The Branching Model
209
+
210
+ The aptly named Branching Model creates a backbone network of open repeaters that supports a number of branch connections. All open repeater branches are designed along the shortest distance possible. This model provides us with a point of comparison for the first model.
211
+
212
+ ![](images/03f62102bfffa183446b0b63ec29719b007b0f8d9540d3941386692aceec8b25.jpg)
213
+ Figure 2 describes the creation of the backbone and branches.
214
+ Figure 2: Flow Chart for Branching Model
215
+
216
+ # 5.1 Description
217
+
218
+ Akin to the first model, the Branching Model uses $k$ -means cluster analysis and scoring to determine the optimal placement of repeaters. The difference, however, lies in the process by which this model creates the network. The model creates a backbone network of open repeaters between the two highest scoring clusters. After the backbone has been established, the model reclusters and rescores the remaining users and creates a branch of open repeaters between the existing network and the highest scoring cluster. This ensures that the fewest repeaters are used in branching the network out to high-density locations. After the entire network has been established, the model places CTCSS repeaters to ensure channel availability is not a concern in user-dense locations, so that all users may be supported simultaneously.
221
+
222
+ This model requires reclustering after every iteration.
223
+
224
+ # 5.2 Mathematical Interpretation
225
+
226
+ The Branching Model supports even more customization than the first model. The parameters relevant to this model are listed below.
227
+
228
+ <table><tr><td>Parameter</td><td>Description</td></tr><tr><td>n</td><td>Number of users within 40 miles</td></tr><tr><td>k</td><td>Number of k-means cluster points</td></tr><tr><td>ds</td><td>Maximum distance for user-to-repeater communication</td></tr><tr><td>hr</td><td>Height of repeater towers</td></tr><tr><td>dh</td><td>Repeater output distance</td></tr><tr><td>Δf</td><td>Frequency separation/ channel width</td></tr><tr><td>ln</td><td>Number of Long Distance Lines</td></tr><tr><td>lc</td><td>Number of Locations in Long Distance Connections</td></tr></table>
229
+
230
+ We will review a few definitions for consistency. Let $N_{c}$ be the number of people with network connectivity (i.e. within range of a repeater) and $N_{f}$ be the number of people with access to an available frequency range. The number of channels available will be denoted again by $O$ . Initially, $N_{c} = 0$ , $N_{f} = 0$ and $O = 0$ .
231
+
232
+ Let $n$ be the number of users within a 40 mile radius. With $k$ -means clustering, we identify and score clusters of users. Letting $\{c_1, \dots, c_k\}$ be the locations of the $k$ cluster points, the model creates a backbone of repeaters between the two highest scoring cluster locations.
233
+
234
+ For each cluster point $c_{i}$ , we calculate how many users are within $d_{s}$ of $c_{i}$ , i.e. how many users will benefit from the placement of a repeater there. We set $R_{1} = c_{i}$ for whichever $i$ has the most people within range. Now calculate $N_{c}$ as before. The algorithm continues until the desired number of users have been added (in our case this is 1000 users). This only means that users are within range of a repeater. This does not ensure that all users have available channels. This will be resolved after the branched network has been established.
235
+
236
+ The explicit open repeater branching algorithm is given below.
237
+
238
+ Data: First open repeater location, $R_{1} = (x,y)$
239
+
240
+ Result: The locations of the repeaters
241
+
242
+ for $i\in [1,k]$ do
243
+
244
+ Let $(x_{i},y_{i}) = c_{i}$
245
+
246
+ $$
247
+ \phi (i) = \sqrt {(x - x _ {i}) ^ {2} + (y - y _ {i}) ^ {2}};
248
+ $$
249
+
250
+ end
251
+
252
+ Then $T = (x_{i},y_{i})$ such that $\phi (i) = \max (\phi ([1,k]))$
253
+
254
+ Let $\theta$ be the angle from $R_{1}$ to $T$ ;
255
+
256
+ Let $s$ be the distance between $R_{1}$ and $T$ ;
257
+
258
+ Set $j = 2$
259
+
260
+ while $s > d_h$ do
261
+
262
+ $$
263
+ R _ {j} = R _ {j - 1} + \left(d _ {h} \cos (\theta), d _ {h} \sin (\theta)\right);
264
+ $$
265
+
266
+ Recalculate $s$
267
+
268
+ $$
269
+ j = j + 1;
270
+ $$
271
+
272
+ end
273
+
274
+ while $N_{c} < 1000$ do
275
+
276
+ Rerun " $k$ -means" cluster analysis to obtain new $\{c_1,\dots ,c_k\}$
277
+
278
+ $T = c_{i}$ such that $c_{i}$ has the most users in range;
279
+
280
+ Choose $i$ such that $R_{i}$ is the existing repeater closest to $T$ ;
281
+
282
+ Let $s$ be the distance between $R_{i}$ and $T$ ;
283
+
284
+ Let $\theta$ be the angle from $R_{i}$ to $T$ ;
285
+
286
+ while $s > d_h$ do
287
+
288
+ $$
289
+ R _ {j} = R _ {j - 1} + \left(d _ {h} \cos (\theta), d _ {h} \sin (\theta)\right);
290
+ $$
291
+
292
+ Recalculate $s$
293
+
294
+ $$
295
+ j = j + 1;
296
+ $$
297
+
298
+ end
299
+
300
+ end
301
+
302
+ # Algorithm 3: Branching Method Open Repeater Placement
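+ The backbone-laying loop in Algorithm 3 (the `while s > d_h` step) walks from one repeater toward the target in steps of $d_h$ until the target is in range; a sketch with our case-study spacing:

```python
import math

def lay_chain(start, target, d_h):
    """Place repeaters every d_h miles along the straight line from
    start toward target, stopping once target is within d_h."""
    chain = [start]
    theta = math.atan2(target[1] - start[1], target[0] - start[0])
    while math.hypot(target[0] - chain[-1][0],
                     target[1] - chain[-1][1]) > d_h:
        last = chain[-1]
        chain.append((last[0] + d_h * math.cos(theta),
                      last[1] + d_h * math.sin(theta)))
    return chain

# A backbone from the origin to a cluster 40 miles east with
# d_h = 15 mi places repeaters at x = 0, 15, and 30.
chain = lay_chain((0.0, 0.0), (40.0, 0.0), 15.0)
print([(round(x, 3), round(y, 3)) for x, y in chain])
```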
303
+
304
+ With the open network established, we must resolve the issue of channel availability. This is easily accomplished by placement of CTCSS repeaters in high user-density areas. In our opinion, the method in which this model establishes long-distance CTCSS lines is outstanding. By specifying a different value for $l_{n}$ , one can change the number of CTCSS lines connecting the most highly populated areas. While this potentially increases the number of repeaters, it offers greater connectivity between regions. This is accomplished by first running $k$ -means cluster analysis on the user data and choosing the $l_{c}$ clusters with the most users in range. Starting from the cluster with the most users, we create a chain of repeaters with a particular CTCSS channel, a distance of $d_{h}$ apart, until the next closest repeater is in range (similar to the second half of Algorithm 3). This creates a long-distance connection on a specific CTCSS channel. This allows long-distance users to communicate with more densely populated areas (e.g. rural or suburban users communicating with an urban user) without wasting an open frequency in the dense location.
305
+
306
+ Once the specified number of long distance connections have been made, any deficits in channel demand are mitigated by placing local CTCSS repeaters in highly populated areas (again by $k$ -means clustering).
307
+
308
+ # 6 Model Comparison Summary
309
+
310
+ The fundamental similarities and differences between the models are:
311
+
312
+ # Similarities
313
+
314
+ - $k$ -means clustering is prevalent in both models. Both models make use of $k$ -means cluster analysis to identify large groups of users. By ranking these clusters in terms of potential connected users, both models attempt to provide the most efficient connectivity scheme possible.
315
+ - Variable-strength repeaters may be employed in both models. By accounting for variable broadcasting strength, isolated users may be accommodated without fear of channel interference or an inordinate use of additional repeaters in both models.
316
+ - The change of frequency from a repeater $(\pm 600\mathrm{kHz})$ is resolved last in both models. After the repeaters are configured, the "up" and "down" repeater broadcast assignments may be made in both models to fit the specific configuration of the network.
317
+
318
+ # Differences
319
+
320
+ - The models generate the network differently. The "Bender" Snake Model places open repeaters along a continuous trajectory as determined by cluster points. In contrast, The Branching Model creates a single backbone and allows for growth in any direction toward a cluster point.
321
+ - Reclustering is required for the Branching Model. In order for branching to occur, the optimum target location must be determined after every iteration. The other model may utilize reclustering but does not require it.
322
+ - The method in which private lines are introduced differs between the models. The "Bender" Snake Model places private line repeaters as is necessary whereas the Branching Model places private line repeaters after the entire network has been established.
323
+
324
+ # 7 Case Studies
325
+
326
+ We developed two population distributions to test our models on. The two cases are: a city-like user distribution and a rural distribution with small towns of users. When we discuss "parity," we refer to the $\pm 600~\mathrm{kHz}$ difference in the receiving and broadcast frequencies of the repeaters. The graphs that show these assignments represent each open line repeater as a node, each labeled with "+" or "-" accordingly. We assign parity to each repeater such that no repeater is connected to the rest of the network solely through another repeater with identical parity. This prevents a signal being stepped up or stepped down repeatedly until it falls outside the available frequency range and cannot be received.
327
+
328
+ The parameters for these case studies were set to values we deemed reasonable based on our research from our referenced sources.
329
+
330
+ # 7.1 City Distribution
331
+
332
+ This distribution represents a city (located at the center of the area) with surrounding suburbs/neighborhoods.
333
+
334
+ ![](images/f18eb801da2eaa242122142ff8c18cbf41789f8327578583de7f6dde053f9d13.jpg)
335
+ Figure 3: Surface Plot of Users
336
+
337
+ # Snaking Model
338
+
339
+ <table><tr><td>Parameter</td><td>Description</td></tr><tr><td>n = 1400</td><td>Number of users within 40 mi</td></tr><tr><td>k = 5</td><td>Number of k-means cluster points</td></tr><tr><td>ds = 10 mi</td><td>Maximum distance for user-to-repeater communication</td></tr><tr><td>hr = 150 ft</td><td>Height of repeater towers to be placed</td></tr><tr><td>dh = 15 mi</td><td>Repeater output distance</td></tr><tr><td>Δf = .025 MHz</td><td>Frequency separation</td></tr><tr><td>ln = 3</td><td>Number of Long Distance Lines</td></tr><tr><td>lc = 5</td><td>Number of Locations in Long Distance Connections</td></tr></table>
340
+
341
+ The model places the first open repeater slightly north of the city to cover most of the users located there. This creates a large channel deficit and the model places two CTCSS repeaters on this iteration to compensate. Next, a repeater is placed to the northwest and the simulation proceeds to spiral counterclockwise around the city, placing CTCSS repeaters as necessary. The model also places two CTCSS repeaters on the fourth iteration. There are 9 open line repeaters and 8 CTCSS repeaters for a total of 17 repeaters.
342
+
343
+ ![](images/905eda808ac0e34e51ef38f7937cb2da8915b7ff89ce64e1fc9ea6b5e66e4ee1.jpg)
344
+ Figure 4: Repeater Placement (Snaking Model)
345
+
346
+ The model creates a closed loop of 9 open line repeaters. Since this number is odd, the parity does not work bi-directionally around the loop. There will be some signal leakage when the two step-up repeaters communicate directly, but the signal will travel in the opposite direction around the entire network and be received.
347
+
348
+ ![](images/b75b72a00b3b31fdf841fedfd7444e740af0e67b95575cba7427329128425cf5.jpg)
349
+ Figure 5: Repeater Parity (Snaking Model)
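+ The parity rule described above amounts to 2-coloring the repeater graph: nodes are repeaters, edges are direct links, and a breadth-first search alternates "+" and "-". An odd cycle, like this 9-repeater loop, cannot be 2-colored, which is exactly where the leakage arises. A sketch (this graph framing is our own):

```python
from collections import deque

def assign_parity(adj):
    """2-color a repeater graph {node: [neighbors]} with '+'/'-'
    by BFS.  ok is False if some link joins two repeaters of equal
    parity, which is unavoidable on an odd cycle."""
    parity, ok = {}, True
    for root in adj:
        if root in parity:
            continue
        parity[root] = '+'
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parity:
                    parity[v] = '-' if parity[u] == '+' else '+'
                    queue.append(v)
                elif parity[v] == parity[u]:
                    ok = False  # same-parity link: expect leakage here
    return parity, ok

# An even loop of 4 repeaters 2-colors cleanly; an odd loop of 3 cannot.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(assign_parity(square)[1], assign_parity(triangle)[1])  # -> True False
```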
350
+
351
+ # Branching Model
+
+ The model places the first open repeater in the city and creates three main branches to cover the surrounding suburbia. This web structure is highlighted with the black lines. The branching structure of open line repeaters is designed to cover the surface area efficiently rather than simply rushing linearly from one population center to the next. This structure is created first, and CTCSS lines are placed afterwards.
352
+
353
+ ![](images/f776271e89a3f10937fdb5935e80953888bf445f3e7ff8595f9c64383db9208a.jpg)
354
+ Figure 6: Repeater Placement (Branching Model)
355
+
356
+ The process of CTCSS repeater placement is designed to create higher connectivity than the Snaking model. In the Snaking model, CTCSS repeaters are used to provide more channels locally, but when the network is loaded to capacity not all users will be able to talk long-distance. In the Branching model, we assign one squelch tone as long distance, here denoted by 1 (blue circles). Each of these points has 3 CTCSS repeaters, denoted as types 1, 2, and 3. Tones 4-8 provide local lines in a manner similar to Snaking. This model creates 8 open repeaters and 17 CTCSS repeaters for a total of 26. This number is substantially higher than the "minimum number" of 17 that the Snaking model produced. These extra lines are necessitated by the long-distance backbone. The aim of this model is to provide the best connectivity with the fewest repeaters.
+
+ ![](images/53d4029ad99dbeb6c0763587c66dca44ec85587d3c51663f405146e698840ff8.jpg)
+ Figure 7: CTCSS Line Placement (Branching Model)
362
+
363
+ The parity assignment is quite simple here.
364
+
365
+ ![](images/e884043e9a00d901829e2b5c695eea432d6dc94098fae7990f966c888c0746c5.jpg)
366
+ Figure 8: Repeater Parity (Branching Model)
367
+
368
+ # 7.2 Rural Distribution
369
+
370
+ This distribution represents a rural area with eight small towns of concentrated population with approximately 100 people spread randomly throughout the area. The total user population is 1400. We attempt to provide connectivity to 1000 people using both the Snaking and Branching models.
371
+
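For concreteness, a synthetic data set of this shape can be generated as follows. This is a Python sketch, not the paper's actual data: the per-town count (150), scatter count (200), and town spread (1.5 mi) are illustrative assumptions chosen only so the total comes to 1400 users over the ±40 mile square the Appendix B code clips to.

```python
import random

def rural_population(n_towns=8, town_pop=150, scatter_pop=200,
                     half_width=40.0, seed=1):
    """Synthetic stand-in for the rural data set: tight Gaussian
    clusters for the towns plus a uniform scatter over the square
    [-half_width, half_width]^2.  All counts are illustrative."""
    rng = random.Random(seed)
    users = []
    for _ in range(n_towns):
        # Pick a town center, then scatter its residents around it.
        cx = rng.uniform(-half_width, half_width)
        cy = rng.uniform(-half_width, half_width)
        users.extend((rng.gauss(cx, 1.5), rng.gauss(cy, 1.5))
                     for _ in range(town_pop))
    # Remaining users are spread uniformly over the whole area.
    users.extend((rng.uniform(-half_width, half_width),
                  rng.uniform(-half_width, half_width))
                 for _ in range(scatter_pop))
    return users

len(rural_population())   # 8 * 150 + 200 = 1400 users
```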
372
+ ![](images/ad92643b38a6a17ed4d843dd752734273a33d9eeaed962b30f7b62c22bb8ed83.jpg)
373
+ Figure 9: Surface Plot of Users
374
+
375
+ Snaking Model The model starts near the northwestern-most town. We note that it places the open-line repeater so that it covers all of the targeted town as well as approximately half of the population in the second town. This coverage is optimal; however, a connectivity deficit is created, since more than 120 people lie within range and channel availability becomes an issue. The model then places a CTCSS repeater near the open repeater to provide more channels locally. This is not sufficient to cover the deficit, so the model places an additional CTCSS repeater on the same spot, this one with a different squelch tone. This resolves the deficit, so the model resumes placing open-line repeaters. The next placement is essentially due south of the previous one, as it gravitates towards the two southern clusters that represent the two towns in the area. The placement of this new repeater does not create a channel deficit, so there is no need for another CTCSS repeater. The model proceeds south again on the next iteration, then snakes around towards the southeastern town, and finally turns north, placing CTCSS repeaters as needed along the way. The open-line repeater near the middle is placed last. Note that the 6th iteration (the southeastern-most group) also places two CTCSS repeaters on the same spot, for a total of 8 CTCSS repeaters and 10 open repeaters. The model uses 18 total repeaters to create a network that accommodates 1080 users.
376
+
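The deficit bookkeeping driving this narrative can be sketched as follows. It mirrors the `connected = min(cover, openchannels)` and `deficit = max(cover - connected, 0)` lines of the Appendix B code, with 120 channels per repeater (3 MHz of usable spectrum at 25 kHz separation); the 220-user first town is a hypothetical figure for illustration.

```python
def update_connectivity(cover, open_channels):
    """Mirror of the connected/deficit bookkeeping in Appendix B:
    users in range are only connected while channels remain."""
    connected = min(cover, open_channels)
    deficit = max(cover - connected, 0)
    return connected, deficit

PER_REPEATER = 120   # 3 MHz of usable spectrum / 25 kHz separation

# Hypothetical first placement: 220 users in range of one open repeater.
update_connectivity(220, PER_REPEATER)       # -> (120, 100): deficit, so add a CTCSS repeater
update_connectivity(220, 2 * PER_REPEATER)   # -> (220, 0): deficit resolved
```

Whenever the returned deficit is positive, the model stays put and stacks another CTCSS repeater before resuming open-line placement.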
377
+ ![](images/99a168ff3c47f385bfa7abc421f11a3125b931881bcc72179b2f49571848ce78.jpg)
378
+ Figure 10: Repeater Placement (Snaking Model)
379
+
380
+ Repeater parity is assigned after the simulation is complete. This is generally a simple process for the Snaking model. However, note the southeastern node group, where two step-up repeaters are connected. While this will result in some signal leakage outside of the available spectrum, the path that involves the step-down node allows these two step-up nodes to communicate without signal loss.
381
+
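The paper does not spell out its parity-assignment algorithm, but the situation above has a natural graph formulation: assigning step-up/step-down roles is a 2-coloring of the repeater connection graph, and it fails exactly when that graph contains an odd cycle, which is what forces two adjacent step-up repeaters in the southeastern group. A Python sketch under that interpretation:

```python
from collections import deque

def assign_parity(adjacency):
    """Try to 2-color the repeater connection graph by BFS.  Returns a
    node -> 0/1 map, or None when an odd cycle forces two same-parity
    repeaters to be adjacent (some signal leakage is then unavoidable)."""
    parity = {}
    for start in adjacency:
        if start in parity:
            continue
        parity[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in parity:
                    parity[v] = 1 - parity[u]   # neighbor gets opposite role
                    queue.append(v)
                elif parity[v] == parity[u]:
                    return None                 # odd cycle: no clean parity
    return parity

chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # 2-colors cleanly
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}     # does not
```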
382
+ ![](images/e58416a4857fc0cacf4abee9bcb19183492bfb212bfd930fd1a01d9cbf7c08ff.jpg)
383
+ Figure 11: Repeater Parity (Snaking Model)
384
+
385
+ Branching Method The branching structure is highlighted with the black lines. The four-long node line that connects the two biggest towns is the main spine, and all other node structures branch off of this. The branching structure of open line repeaters is designed to efficiently cover the surface area rather than simply rushing from one population center to the next linearly. This structure is created first, and CTCSS lines are placed afterwards.
386
+
387
+ ![](images/f16f702fee21b9f590d8ec18fb69e7da1c1ed203e9aa25f87155a8a387c7f08d.jpg)
388
+ Figure 12: Repeater Placement (Branching Model)
389
+
390
+ ![](images/b53804cc1d4fe3116a006ba5b9d061f09ebb8b41880f79c33be9ca44304f59b5.jpg)
391
+ Figure 13: CTCSS Line Placement (Branching Model)
392
+
393
+ The process of CTCSS repeater placement is designed to create higher connectivity than the Snaking model. In the Snaking model, CTCSS repeaters are used to provide more channels locally, but when the network is loaded to capacity not all users will be able to talk long-distance. In the Branching model, we assign one squelch tone to long-distance traffic, here denoted by 1 (blue circles). Each of these points has 3 CTCSS repeaters, one each of tones 1, 2, and 3. Tones 4-8 provide local lines in a manner similar to the Snaking method. This model creates 8 open repeaters and 17 CTCSS repeaters for a total of 25. This number is substantially higher than the "minimum number" of 18 that the Snaking model produces. These extra lines are necessitated by the building of the long-distance backbone. The aim of this model is to provide better connectivity, not an absolute minimum number of repeaters.
394
+
395
+ ![](images/78024a75a0ee2ed94488cd4402491fc3a068e1e3700812d7bc825b52539ff517.jpg)
396
+ Figure 14: Repeater Parity (Branching Model)
397
+
398
+ Parity assignment for this model is fairly trivial.
399
+
400
+ # 8 10,000 Simultaneous Users
401
+
402
+ Our model is highly adaptable to varying situations and stresses. As such, when considering repeater placement for a network capable of simultaneously supporting 10,000 users, we simply run our models against a data set with a little over 10,000 users (we use 12,000). The models work exactly as they did in the 1,000-user case. An important point to note, however, is that the frequency separation must be lowered to accommodate more users: we choose a separation of $10\mathrm{kHz}$ for 10,000 users, whereas we chose 25 kHz for 1,000 users.
403
+
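The channel count per repeater follows directly from dividing the usable bandwidth by the separation, as in the `openchannels = 3/df` line of the Appendix B code (3 MHz of usable spectrum is that code's assumption):

```python
USABLE_BANDWIDTH_MHZ = 3.0   # mirrors "openchannels = 3/df" in Appendix B

def channels_per_repeater(spacing_mhz):
    """Simultaneous voice channels available through one repeater
    at a given frequency separation (in MHz)."""
    return round(USABLE_BANDWIDTH_MHZ / spacing_mhz)

channels_per_repeater(0.025)   # 120 channels at 25 kHz (the 1,000-user case)
channels_per_repeater(0.010)   # 300 channels at 10 kHz (the 10,000-user case)
```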
404
+ The minimum number of repeaters necessary to simultaneously support 10,000 users is 19 open repeaters and 33 CTCSS repeaters, each running on a different CTCSS tone, for a total of 52 repeaters. We conclude that even with 10,000 users, an efficient network can be established within the constraints of the problem.
405
+
406
+ ![](images/0a6a1310569102f14ba1fa5eea311aa8aae63143cd4a281c3f2670ffd7d18c7a.jpg)
407
+ Figure 15: Results for 10,000 Simultaneous Users
408
+
409
+ # 9 Mountainous Terrain
410
+
411
+ While VHF radio signals are blocked by large land structures, line-of-sight propagation also permits an increased effective range when the height of the antenna is increased. As a result, mountains can be used to our advantage by placing repeaters on top of them rather than around them. This is not a completely trivial fix, however: the effective distance is proportional to the square root of the antenna's height, so added height yields diminishing returns.
412
+
413
+ In the case where there is one large topographical peak (i.e. one large mountain peak), the obvious solution is to place one strong repeater on the mountain peak to provide the most coverage. However, now consider the case where this mountain does not have one discernible, well-defined peak and the area containing these peaks is relatively large and widespread (i.e. a mountain range). These numerous peaks may block the signal from a single repeater on the mountain and may not be accessible from all surrounding areas. The strength of the signal may vary with the angle due to an uneven distribution of peaks and valleys in the mountain range. In this situation, we would use a multiple-repeater network configured around the base and valleys that naturally occur in the mountain range. This would circumvent the mountain range and allow for connectivity. However, it also eliminates the line-of-sight advantage that the mountain could provide. There would most likely be ways to leverage the height advantage the mountains provide on a case-by-case basis.
414
+
415
+ # 10 Sensitivity to Parameters
416
+
417
+ Number of Cluster Points Since our model so heavily relies upon $k$ -means cluster analysis, it is natural to wonder how the number of cluster points affects model performance. By running our model with variable initial cluster points ( $k = 5, 10, 20$ ) we are able to gauge whether this has a significant impact on performance. We chose to use the rural population distribution since it provided a more interesting analysis.
418
+
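As a reference for how the clustering step works, here is a minimal pure-Python version of Lloyd's algorithm. It is a toy stand-in for the MATLAB `kmeans()` call used in Appendix B, seeded deterministically from the first $k$ points rather than at random:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's-algorithm k-means over 2-D points.  For simplicity
    (and determinism) the first k points seed the centers."""
    centers = [points[i] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: each center moves to its cluster's mean.
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two four-point "towns": the centers converge to the town centroids.
towns = [(0, 0), (30, 30), (1, 0), (0, 1), (1, 1), (31, 30), (30, 31), (31, 31)]
kmeans(towns, 2)   # -> [(0.5, 0.5), (30.5, 30.5)]
```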
419
+ ![](images/817209b94b5c52ba3b80c347bf10b64cd228a2e33a73ae0022d7cb98088822a3.jpg)
420
+ (a) $k = 10$
421
+
422
+ ![](images/088c982f84c1a517899e407478285fd535f4601034e3523cbc4dbbb9fdd10153.jpg)
423
+ (b) $k = 20$
424
+ Figure 16: "Bender" Snaking Model Cluster Point Sensitivity Rural Case
425
+
426
+ When $k = 5$ or $k = 10$ , the model determines that nine open line repeaters and eight CTCSS lines are necessary. When $k = 20$ , the model requires eight open lines and eight CTCSS lines be available. We conclude that our model does not seem to be very sensitive to changes in the number of initial cluster points.
427
+
428
+ Separation Distance When determining optimal placement for repeaters, our model places each repeater a distance of $d_h$ apart so that repeater connectivity and communication is guaranteed. The value we use for $d_h$ will have a large influence on the performance of the model. In our case studies, we set $d_h = 15$ . This is the distance over which a repeater on a 150 ft tower can transmit its signal. For $d_h = 10$ and $d_h = 20$ , the necessary tower heights are 66 ft 8 in and 266 ft 8 in, respectively.
429
+
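These tower heights follow from inverting the line-of-sight relation implied by the 15 mile / 150 ft pairing, $d_h = \sqrt{1.5h}$ with $d_h$ in miles and $h$ in feet; a quick check in Python:

```python
def tower_height_ft(d_h):
    """Tower height (ft) needed for a line-of-sight range of d_h miles,
    inverting the d = sqrt(1.5 h) relation implied by the paper's
    15 mile <-> 150 ft pairing."""
    return d_h ** 2 / 1.5

[round(tower_height_ft(d), 1) for d in (10, 15, 20)]   # -> [66.7, 150.0, 266.7]
```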
430
+ <table><tr><td>d_h (miles)</td><td>Tower Height (ft)</td><td>Open Repeaters</td></tr><tr><td>10</td><td>66&#x27; 8&quot;</td><td>10</td></tr><tr><td>15</td><td>150&#x27;</td><td>8</td></tr><tr><td>20</td><td>266&#x27; 8&quot;</td><td>7</td></tr></table>
431
+
432
+ Clearly, tower height (and hence separation distance) has a significant impact on model performance. However, while increasing the tower height does decrease the minimum number
433
+
434
+ ![](images/9c85ba49489e8b452d58e674ab6482a100419c7843066b471e658eaf31c8f715.jpg)
435
+ (a) $d = 10$
436
+
437
+ ![](images/dfe4496ce286525c4ecfe0371f452fc84e9e38f9ee96d93a4522efc84f87a643.jpg)
438
+ (b) $d = 20$
439
+ Figure 17: Branching Model Separation Distance Sensitivity Rural Case
440
+
441
+ of repeaters necessary, it will increase the cost of each repeater. This is a choice the user will have to make.
442
+
443
+ Initial Number of People Our algorithms run until the desired number of people are connected to the network and there are enough channels available to those people. Changing the starting population of users drastically impacts model performance. This is not surprising as a higher population would allow our model to capture the desired number of people faster.
444
+
445
+ <table><tr><td>Initial Population</td><td>Open</td><td>CTCSS</td></tr><tr><td>1400</td><td>10</td><td>8</td></tr><tr><td>3000</td><td>3</td><td>8</td></tr><tr><td>10000</td><td>1</td><td>8</td></tr></table>
446
+
447
+ Therefore, the higher the initial number of potential users, the fewer repeaters are necessary to sustain 1000 simultaneous users.
448
+
449
+ # 11 Strengths & Weaknesses
450
+
451
+ # Strengths
452
+
453
+ - Versatility of the models. Both models are highly versatile and can account for many changing parameters. We were very impressed that our model accommodates 10,000 users under the established requirements.
454
+
455
+ - Smart clustering. The fact that reclustering can be implemented (or is required) in the models creates a smarter algorithm that targets the highest-priority areas at each moment. This allows the "best" possible decision to be made at any given iteration.
456
+ - Efficient use of CTCSS Lines. Both models, even with 10,000 users, do not exhaust the available private lines. These unused lines could accommodate more traffic if desired.
457
+
458
+ # Weaknesses
459
+
460
+ - Large reliance on $k$ -means clustering. Other clustering methods exist, and our choice to use $k$ -means exclusively limits our model. Running the models with different clustering methods would show whether there were more efficient ways to network and support the users.
461
+ - No use of Quality Threshold. Quality Threshold (QT) is a clustering method for which a distance threshold, rather than the number of clusters, is set. Implementing this method of clustering could have improved efficiency.
462
+ - Difficulty with populations close to target. We found that if only 1000 users were present, the algorithm would circle around itself trying to hunt down the last remaining user. We addressed this concern by assuming that there were more users than we desired to connect. We felt this was a realistic liberty to take, as the problem did not state that there were precisely 1000 potential users in the area.
463
+
464
+ # 12 Conclusion
465
+
466
+ The absolute fewest number of repeaters required to support 1,000 users is 17. This number was produced by the "Bender" Snaking Model with the city distribution of users. We felt this number was reasonable for the given area and user population. The Branching Model yielded a network of 26 repeaters but established connectivity for a significantly larger area.
467
+
468
+ When considering the rural distribution, the "Bender" Snaking Model required 18 repeaters while the Branching Model required 25. The difference in required repeaters (7) was comparable to the difference for the city distribution (9).
469
+
470
+ By comparing the two models, we were able to make a few fundamental conclusions:
471
+
472
+ - Better connectivity requires more repeaters. We can't argue with the minimum number of repeaters reported by our model, but we did note that better connectivity
473
+
474
+ (which potentially correlates with better service) for the users required more repeaters. This seems true to life, as more robust networks of any kind can support a greater load.
475
+
476
+ - 54 CTCSS lines are not necessary. We never exhausted our pool of CTCSS lines. Even with the 10,000 user load, 12 CTCSS tones were still available to use.
477
+ - CTCSS lines have multiple applications. While CTCSS lines are primarily used to reduce interference problems in densely populated areas, they also may be used to establish dedicated long-distance communication lines.
478
+
479
+ # References
480
+
481
+ [1] Metropolitan Coordination Association, Inc. Coordination Guidelines, February 2011. http://www.metrorcor.net/coordination-guidelines.htm.
482
+ [2] Metropolitan Coordination Association, Inc. CTCSS (PL) Tones Frequencies, February 2011. http://www.metrorcor.net/ctcss.htm.
483
+ [3] Institute for Telecommunication Sciences. High Frequency Propagation Models, February 2011. http://elbert.its.bldrdoc.gov/hf.html.
484
+ [4] Adrian W. Graham, Nicholas C. Kirkman, and Peter M. Paul. Mobile Radio Network Design in the VHF and UHF Bands. John Wiley & Sons, Ltd, West Sussex, England, 2007.
485
+ [5] Leif J. Harcke, Kenneth S. Dueker, and David B. Leeson. Frequency Coordination in the Amateur Radio Emergency Service. ECJ, 1(1):31-36, 2004.
486
+ [6] Barry McLarnon. Vhf/Uhf/Microwave Radio Propagation: A Primer for Digital Experimenters. A workshop given at the 1997 TAPR/ARRL Digital Communications Conference, February 2011. http://www.tapr.org/ve3jf.dcc97.html.
487
+ [7] The American Radio Relay League. Band Plan, February 2011. http://www.arrl.org/band-plan-1#2m.
488
+ [8] Wikipedia. 2-Meter Band, February 2011. http://en.wikipedia.org/wiki/2-meter_band.
489
+ [9] Wikipedia. Amateur Radio Repeater, February 2011. http://en.wikipedia.org/wiki/Amateur_radio_repeater.
490
+ [11] Wikipedia. Very High Frequency, February 2011. http://en.wikipedia.org/wiki/Very_high_frequency.
491
+ [11] Wikipedia. Very High Frequency, February 2011. http://en.wikipedia.org/wiki/Very_high_freqrequency.
492
+
493
+ # 13 Appendix A: Full-Page Plots
494
+
495
+ ![](images/a8a9ed38fb7ef021b81183a86b2dd8740411f34d07b8ca762a878ff72989c5af.jpg)
496
+ "Bender" Snaking Method Repeater Placement
497
+ Figure 18: Repeater Placement (Snaking Model)
498
+
499
+ ![](images/10c5bbf97cc5e189a5f224966ba805a4b54e221d8440c2108cfe9266f8def356.jpg)
500
+ Figure 19: Repeater Placement (Branching Model)
501
+
502
+ ![](images/db5f09d79a539493964d3afa326238ae2020ebcf79b6e08048ca2492035bd654.jpg)
503
+ Figure 20: CTCSS Line Placement (Branching Model)
504
+
505
+ ![](images/8e5a291be6e5b534a7247fd80ab5bc14f43ecc706886b46e3c70e2f91862643b.jpg)
506
+ Figure 21: Repeater Placement (Snaking Model)
507
+
508
+ ![](images/f323b362e31da74d3200307fee89a850d4a73a72800e3637ec7f8bf65052155c.jpg)
509
+ Figure 22: Repeater Placement (Branching Model)
510
+
511
+ ![](images/48b5468994f7f3c4470f3ffd060aa6d1691198841f7347e852b0eee35c2efb87.jpg)
512
+ Figure 23: CTCSS Line Placement (Branching Model)
513
+
514
+ ![](images/c51d37f388f9ad217f1a4d6cd4994b57630f9703237ae254b152a30b67c1c2c6.jpg)
515
+ Figure 24: Results for 10,000 Simultaneous Users
516
+
517
+ # 14 Appendix B: Source Code
518
+
519
+ # "Bender" Snaking Model:
520
+
521
+ 1 load 'C:\Users\****\Desktop\MCM 2011\townsdata3.txt';
522
+ 2 sepdist = 10; %separation distance
523
+ 3 plsepdist =1; %Private line separation distance
524
+ 4 xcor = townsdata3(:,2);
525
+ 5 ycor = townsdata3(:,3);
526
+ 6 totalnum = zeros(2000,2);
527
+ 7 connected = 0;
528
+ 8 n=zeros(2000,5);
529
+ 9 inrange = zeros(360);
530
+ 10 ply = zeros(360,1);
531
+ 11 plx = zeros(360,1);
532
+ 12 ind = 0;
533
+ 13 openchannels=0;
534
+ 14 df = .025;
535
+ 15 cover = 0;
536
+ 16 it = 1;
537
+ 17plit =0;
538
+ 18 rep=zeros(50,2);
539
+ 19 population = length(xcor);
540
+ 20 iteration=0;
541
+ 21 concat=horzcat(xcor,ycor);
542
+ 22 notcovered=zeros(population,2);
543
+ 23 numclus=10;
544
+ 24 [clusid,clusters]=kmeans(concat,numclus);
545
+ 25 clusscr=zeros(numclus);
546
+ 26
547
+ 27
548
+ 28 for i = 1:population
549
+ 29 for n = 1:numclus
550
+ 30 if clusid(i) == n
551
+ 31 clusscr(n) = clusscr(n) + 1;
552
+ 32 end
553
+ 33 end
554
+ 34 end
555
+ 35
556
+ 36 %Plot Users and clusters
557
+ 37 plot(concat(:,1),concat(:,2),'.','MarkerSize',1);
558
+ 38 hold on
559
+ 39 plot (clusters(:,1),clusters(:,2),'kx',...
560
+ 40 'MarkerSize',20,'LineWidth',2);
561
+ 41 hold on
562
+ 42 plot (clusters(:,1),clusters(:,2),'ko',...
563
+ 43 'MarkerSize',20,'LineWidth',2);
564
+ 44 hold off
565
+ 45
566
+ 46
567
+ 47
568
+ 48
569
+
570
+ Find initial repeater location
571
+ ```matlab
572
+ for i=1:5,
573
+ for j=1:population,
574
+ n(i,j) = sqrt(((clusters(i,1) - xcor(j))^2) + ((clusters(i,2) - ycor(j))^2));
575
+ if n(i,j) <= sepdist
576
+ totalnum(i,1) = totalnum(i,1) + 1;
577
+ end
578
+ end
579
+ end
580
+ [C,I] = max(totalnum);
581
+ index = I(1);
582
+ rep(1,1) = clusters(index,1);
583
+ rep(1,2) = clusters(index,2);
584
+ openchannels = 3/df;
585
+ for i=1:population,
586
+ if sqrt(((rep(1,1) - xcor(i))^2) + ((rep(1,2) - ycor(i))^2)) > sepdist
587
+ ind = ind + 1;
588
+ notcovered(ind,1) = xcor(i);
589
+ notcovered(ind,2) = ycor(i);
590
+ end
591
+ end
592
+ cover = totalnum(index,1);
593
+ population = population-cover;
594
+ connected = min(totalnum(index,1), openchannels);
595
+ deficit = max(totalnum(index,1) - connected,0);
596
+ disp('the first repeater will be placed at'), disp('x='), disp(rep(1,1)), disp('y='), disp(rep(1,2)), disp('number connected'), disp(connected);
597
+ disp('deficit'), disp(deficit);
598
+ disp('cover'), disp(cover);
599
+ disp('open channels'), disp(openchannels);
600
+ disp('population'), disp(population);
601
+ dlmwrite('C:\Users\*****\Desktop\MCM 2011\winner.txt', I, '\t');
602
+ dlmwrite('C:\Users\*****\Desktop\MCM 2011\distances.txt', n, '\t');
603
+ dlmwrite('C:\Users\*****\Desktop\MCM 2011\proximity.txt', totalnum, '\t')
604
+ while connected < 1000,
605
+ iteration = iteration + 1;
606
+ inrange = zeros(1000);
607
+ onrange = zeros(1000);
608
+ if deficit > 0
609
+ plit = plit + 1;
610
+ for theta = 1:360,
611
+ plx(theta) = rep(it,1) + plsepdist * cos((3.14159*theta)/180);
612
+ ply(theta) = rep(it,2) + plsepdist * sin((3.14159*theta)/180);
613
+ if ((plx(theta) >= -40) && (plx(theta) <= 40))
614
+ if ((ply(theta) >= -40) && (ply(theta) <= 40))
615
+ for i=1:population,
616
+ dist(i,1) = sqrt((plx(theta)-notcovered(i,2*iteration-1))^2+(ply(theta)-notcovered(i,2*iteration))^2);
617
+ ```
618
+
619
+ if dist(i,1) <= sepdist
620
+ inrange(theta) = inrange(theta) + 1;
621
+ end
622
+ end
623
+ end
624
+ end
625
+ end
626
+ end
627
+ [C,I] = max(inrange);
628
+ angle = I(1);
629
+ repl(plit,1) = rep(it,1)+plsepdist*cos((3.14159*angle)/180);
630
+ repl(plit,2) = rep(it,2)+plsepdist*sin((3.14159*angle)/180);
631
+ added=0;
632
+ openchannels = openchannels + (3/df);
633
+ %%REMOVE ENTRIES WHICH HAVE BEEN USED
634
+ ind=0;
635
+ for i=1:population,
636
+ if sqrt((rep(it,1)-notcovered(i,2*iteration-1))^2+(rep(it,2)-notcovered(i,2*iteration))^2) > sepdist
637
+ ind = ind + 1;
638
+ notcovered(ind,2*iteration+1)=notcovered(i,2*iteration-1);
639
+ notcovered(ind,2*iteration+2)=notcovered(i,2*iteration);
640
+ else
641
+ added=added+1;
642
+ end
643
+ end
644
+ cover = cover + added;
645
+ population = population - added;
646
+ connected = min(cover,openchannels);
647
+ deficit = max(cover - connected,0);
648
+ disp('the PL repeater will be placed at'),disp('x='),disp(repl(plit,1)),disp('y='),disp(repl(plit,2));
649
+ disp('added'),disp(added);
650
+ disp('cover'),disp(cover);
651
+ disp('people connected:'),disp(connected);
652
+ disp('deficit'),disp(deficit);
653
+ disp('open channels'),disp(openchannels);
654
+ disp('population'),disp(population);
655
+ end
656
+ if deficit == 0
657
+ it = it+1;
658
+ inrange=zeros(5);
659
+ onrange=zeros(5);
660
+ d = zeros(5);
661
+ for clusterno=1:5,
662
+ d(clusterno)=sqrt((rep(it-1,1)-clusters(clusterno,1))^2+(rep(it-1,2)-clusten
663
+ if ((rep(it-1,1) $\geq$ clusters(clusterno,1)) && (rep(it-1,2) $\leq$ clusters(clusterno,2)
664
+ ang(clusterno)= acos((clusters(clusterno,1)-rep(it-1,1))/d(clusterno));
665
+ else if ((rep(it-1,1)>clusters(clusterno,1)) && (rep(it-1,2) $\geq$ clusters(clustern
666
+ ang(clusterno)=2\*pi-\acos((clusters(clusterno,1)-rep(it-1,1))/d(clustern
667
+ else if ((rep(it-1,1) $\leq$ clusters(clusterno,1)) && (rep(it-1,2) $\geq$ clusters(clustern
668
+
669
+ ang(clusterno) $= 2$ *pi-acos((clusters(clusterno,1)-rep(it-1,1))/d(cluster) else ang(clusterno)=acos((clusters(clusterno,1)-rep(1,1))/d(clusterno)); end if ((rep(it-1,1) $\geq$ clusters(clusterno,1)) && (rep(it-1,2) $\leq$ clusters(clusterno,2) ang(clusterno)=pi-asin((clusters(clusterno,2)-rep(it-1,2))/d(clusterno); elseif ((rep(it-1,1) $\geq$ clusters(clusterno,1)) && (rep(it-1,2) $\geq$ clusters(clusterno,2)/d(clusterno); elseif ((rep(it-1,1) $\leq$ clusters(clusterno,1)) && (rep(it-1,2) $\geq$ clusters(clusterno,2)/d(clusterno); else ang(clusterno)=asin((clusters(clusterno,2)-rep(it-1,2))/d(clusterno)) end $\%$ } x(Clusterno)=rep(it-1,1)+sepdist\*1.5\*cos(ang(Clusterno)); y(Clusterno)=rep(it-1,2)+sepdist\*1.5\*sin(ang(Clusterno)); if ((x(Clusterno) $\geq -40$ ) && (x(Clusterno) $\leq 40$ ) && (y(Clusterno) $\geq -40$ ) && (y(Clustern) for i=1:population, if sqrt((x(Clusterno)-notcovered(i,2\*iteration-1))^2+(y(Cluster: inrange(clusterno)=inrange(clusterno)+1; end end for i=1:population, if sqrt((clusters(clusterno,1)-notcovered(i,2\*iteration-1))^2+ onrange(clusterno)=onrange(clusterno)+1; end end else inrange(Clusterno)=-100; end end [C,I] $=$ max(inrange); angleid $= I(1)$ ; rep(it,1)=rep(it-1,1)+sepdist\*1.5\*cos(ang(angleid)); rep(it,2)=rep(it-1,2)+sepdist\*1.5\*sin(ang(angleid)); if ((rep(it,1) $\geq -40$ ) && (rep(it,1) $\leq 40$ ) && (rep(it,2) $\geq -40$ ) && (rep(it,2) $\leq 40$ ) ind=0; added=0; for i=1:population, if sqrt((rep(it,1)-notcovered(i,2\*iteration-1))^2)+(rep(it,2)-notcovered(ind = ind +1; notcovered(ind,2\*iteration+1)=notcovered(i,2\*iteration-1); notcovered(ind,2\*iteration+2)=notcovered(i,2\*iteration);
670
+
671
+ else added $\equiv$ added $+1$ end end else theid $= 1$ while ((rep(it,1) $\leq -40$ )||(rep(it,1)>40)||rep(it,2)<-40)||rep(it,2)>40)) rep(it,1)=rep(it-1,1)+sepdist\*1.5\*cos(ang(theid)); rep(it,2)=rep(it-1,2)+sepdist\*1.5\*sin(ang(theid)); theid $=$ theid $+1$ end ind $= 0$ . added $= 0$ . for i $= 1$ :population, if sqrt((rep(it,1)-notcovered(i,2\*iteration-1))^2)+(rep(it,2)-notcovered ind $=$ ind $+1$ notcovered(ind,2\*iteration+1)=notcovered(i,2\*iteration-1); notcovered(ind,2\*iteration+2)=notcovered(i,2\*iteration); else added $=$ added $+1$ end end end cover $=$ cover $+$ added; population $=$ population - added; connected $=$ min(cover,openchannels); deficit $=$ max (cover - connected,0); disp('the open repeater will be placed at'),disp('x $\coloneqq$ ),disp(rep(it,1)),disp('y disp('added'),disp(added); disp('cover'),disp.cover); disp('open channels'),disp(openchannels); disp('people connected:'),disp(connected); disp('deficit'),disp(deficit); disp('population'),disp(population); end end disp('number of open repeaters'),disp(it); disp('number of PL repeaters'),disp(plit); xplot $\equiv$ zeros(it,1); yplot $\equiv$ zeros(it,1); for i $= 1$ :it xplot(i,1)=rep(i,1); yplot(i,1)=rep(i,2);
672
+ end; xplotpl = zeros(plit,1); yplotpl = zeros(plit,1); for i = 1:plit,
673
+
674
+ xplotpl(i,1) = repl(i,1); yplotpl(i,1) = repl(i,2); end
675
+
676
+ ```matlab
677
+ plot(xcor, ycor, 's', 'MarkerSize', 4, 'MarkerEdgeColor', [1 1 1]);
678
+ xlabel('Miles');
679
+ ylabel('Miles');
680
+ title('Snaking Method Repeater Placement');
681
+ hold on
682
+ ```
683
+
684
+ ```matlab
685
+ plot(xplot,yplot,'o','MarkerFaceColor','r','MarkerEdgeColor','r','MarkerSize',10,'LineWidth hold on plot(xplotpl,yplotpl,'o','MarkerFaceColor','g','MarkerEdgeColor','g','MarkerSize',10,'Line) hold on
686
+ ```
687
+
688
+ ```matlab
689
+ plot(clusters(:,1),clusters(:,2),'v',... 'MarkerSize',10,'LineWidth',2,'MarkerEdgeColor','k'); hold on
690
+ ```
691
+
692
+ ```matlab
693
+ for i=1:it,
694
+ h=rep(i,1); k=rep(i,2); r=15; N=256;
695
+ t=(0:N)*2*pi/N;
696
+ plot(r*cos(t)+h,r*sin(t)+k,'Color','r','LineStyle','--');
697
+ hold on
698
+ end
699
+ ```
700
+
701
+ ```matlab
702
+ for i=1:it,
703
+ h=rep(i,1); k=rep(i,2); r=10; N=256;
704
+ t=(0:N)*2*pi/N;
705
+ plot(r*cos(t)+h, r*sin(t)+k, 'Color', 'c', 'LineStyle', ':');
706
+ hold on
707
+ end
708
+ ```
709
+
710
+ ```matlab
711
+ for i=1:plit,
712
+ h=repl(i,1); k=repl(i,2); r=15; N=256;
713
+ t=(0:N) * 2 * pi/N;
714
+ plot(r * cos(t) + h, r * sin(t) + k, 'Color', 'g', 'LineStyle', '-');
715
+ hold on
716
+ end
717
+ ```
718
+
719
+ ```matlab
720
+ axis([-40 40 -40 40]); axis('square')
721
+ ```
722
+
723
+ 319 hold off
724
+
725
+ # Branching Model:
726
+
727
+ 1 load 'C:\Users\***Desktop\MCM 2011\newtestdata2.txt';
728
+ 2 newx = newtestdata2(:,2); %x cor of data
729
+ 3 newy = newtestdata2(:,3); %y cor of data
730
+ 4 notcovered = horzcat(newx,newy);
731
+ 5 surfsens=2; %distance from point to include in surfscr
732
+ 6 sepdist = 10; %separation distance of repeaters
733
+ 7 scr=zeros(length(newx),1); %how many points are within surfsens
734
+ 8 rep=zeros(40,2); %repeater locations
735
+ 9 numclus=5; %number of clusters
736
+ 10 clusscr=zeros(numclus,3); %how many points are within sepdist of cluster point
737
+ 11 iteration=0;
738
+ 12 clf;
739
+ 13 numuncov=length(newx);
740
+ 14 it=0;
741
+ 15 connected = 0;
742
+ 16 newrepang = 0;
743
+ 17 numlongdist=10;
744
+ 18 actlongdist=5;
745
+ 19 plsepdist=15;
746
+ 20 numberoflongdistancelines=3;
747
+ 21 df = .025;
748
+ 22 perline = 3/df;
749
+ 23 openchannels=perline;
750
+ 24 for i=1:length(newx),
751
+ 25 for j=1:length(newx)
752
+ 26 if sqrt((newx(i)-newx(j))^2+(newy(i)-newy(j))^2) <= surfsens
753
+ 27 scr(i)=scr(i)+1;
754
+ 28 end
755
+ 29 end
756
+ 30 end
757
+ 31
758
+ 32
759
+ 33 concat = horzcat(newx,newy,scr); %%(xcor,ycor,zcor)
760
+ 34 [~,clusters]=kmeans(concat,numclus);
761
+ 35 disp('initial uncovered'),disp(numuncov);
762
+ 36
763
+ 37
764
+ 38 %Plot Users and clusters
765
+ 39 plot (concat(:,1),concat(:,2),'.');
766
+ 40 hold on
767
+ 41 plot (clusters(:,1),clusters(:,2),'kx',...
768
+ 42 'MarkerSize',20,'LineWidth',2);
769
+ 43 hold on
770
+ 44 plot (clusters(:,1),clusters(:,2),'ko',...
771
+ 45 'MarkerSize',20,'LineWidth',2);
772
+
773
+ hold off
774
+
775
+ elseif ((rep(1,1) >= targetloc(1)) && (rep(1,2) >= targetloc(2))) newrepang = 2*pi-acos((targetloc(1)-rep(1,1))/newrepdist);
776
+ elseif ((rep(1,1) <= targetloc(1)) && (rep(1,2) >= targetloc(2))) newrepang = 2*pi-acos((targetloc(1)-rep(1,1))/newrepdist);
777
+ else newrepang = acos((targetloc(1)-rep(1,1))/newrepdist);
778
+ end
779
+ repno = 1;
780
+ iteration = 0;
781
+ lastplaced = 1;
782
+ %add repeater to make connection
783
+ while connected<1000,
784
+ iteration = iteration+1;
785
+ while newrepdist>sepdist, it $\equiv$ it+1; repno $\equiv$ repno+1; rep(repno,1) $\equiv$ rep(lastplaced,1)+sepdist\*cos(newrepang); rep(repno,2)=rep(lastplaced,2)+sepdist\*sin(newrepang); lastplaced $\equiv$ repno; %set the last placed to the current repeater newrepdist $\equiv$ sqrt((targetloc(1)-rep(repno,1))^2+(targetloc(2))-rep(repno,2)
786
+ %Count covered and remove from dataset added $= 0$ . i $= 0$ . for j=1: numuncov, if sqrt((rep(repno,1)-notcovered(j,2\*it+1))^2+(rep(repno,2)-notcovered added $\equiv$ added+1; else i $= \mathrm{i} + 1$ notcovered(i,2\*it+3)=notcovered(j,2\*it+1); notcovered(i,2\*it+4)=notcovered(j,2\*it+2); end
787
+ end
788
+ coveragestat $= 0$ . for j=1:length(newx), if sqrt((rep(repno,1)-newx(j))^2+(rep(repno,2)-newy(j))^2) $\leq$ sepdist coveragestat $\equiv$ coveragestat+1; end
789
+ end
790
+ reproverage (repno) $\equiv$ coveragestat; numuncov $\equiv$ numuncov-added; connected = connected + added; disp('added'),disp(added); disp('adding another');
791
+ end
792
+ disp('iteration'),disp(iteration);
793
+
794
+ %New targetloc
795
+ concat $=$ zeros(numuncov,2);
796
+ for $\mathrm{i} = 1$ :numuncov; concat(i,1) $=$ notcovered(i,2\*it+3); concat(i,2) $=$ notcovered(i,2\*it+4);
797
+ end
798
+ [clusid,clusters] $\equiv$ kmeans(concat,numclus);
799
+ clusscr=zeros(numclus,3);
800
+ %Set temp repeater locations
801
+ for i=1: numclus, retemp(i,1)=clusters(i,1); retemp(i,2)=clusters(i,2);
802
+ end
803
+ %Calculate score of each cluster and order by score
804
+ for i=1: numclus, clusscr(i,1)=retemp(i,1); clusscr(i,2)=retemp(i,2); for j=1: numuncov, if sqrt((retemp(i,1)-concat(j,1))^2+(retemp(i,2)-concat(j,2))^2)\~sepdist clusscr(i,3)=clusscr(i,3)+1; end end
805
+ end
806
+ srtclus $=$ sortrows(clusscr,-3); %sorted clusters
807
+ %Set new target location
808
+ targetloc(1)=srtclus(1,1);
809
+ targetloc(2)=srtclus(1,2);
810
+ repdist=zeros(repno,1);
811
+ for i=1: repno, repdist(i,1)=sqrt((targetloc(1)-rep(i,1))^2+(targetloc(2)-rep(i,2))^2);
812
+ end
813
+ [C,I]=min(repdist);
814
+ lastplaced $\equiv$ I(1);
815
+ newrepdist $=$ sqrt((rep(lastplaced,1)-targetloc(1))^2+(rep(lastplaced,2)-targetloc(2))^?;
816
+ if ((rep(lastplaced,1) $\geq$ targetloc(1)) && (rep(lastplaced,2) $\leq$ targetloc(2))) newrepang $\coloneqq$ acos((targetloc(1)-rep(lastplaced,1))/newrepdist); elseif ((rep(lastplaced,1) $\geq$ targetloc(1)) && (rep(lastplaced,2) $\geq$ targetloc(2))) newrepang $\coloneqq$ 2\*pi-acos((targetloc(1)-rep(lastplaced,1))/newrepdist); elseif ((rep(lastplaced,1) $\leq$ targetloc(1)) && (rep(lastplaced,2) $\geq$ targetloc(2))) newrepang $\coloneqq$ 2\*pi-acos((targetloc(1)-rep(lastplaced,1))/newrepdist); else newrepang $\coloneqq$ acos((targetloc(1)-rep(lastplaced,1))/newrepdist);
817
+ end
818
+
819
+ disp('connected'),disp(connected);
820
+ end
821
+ disp('number uncovered'),disp(numuncov);
822
+ %%Plot Data
823
+ xplot=zeros(repno,1);
824
+ yplot=zeros(repno,1);
825
+ for i=1:repno, xplot(i)=rep(i,1); yplot(i)=rep(i,2);
826
+ end
827
+ %
828
+ concat $=$ zeros(length(newx),2);
829
+ for i=1:length(newx); concat(i,1) $\equiv$ newx(i); concat(i,2) $\equiv$ newy(i);
830
+ end
831
+ concat=horzcat(newx,newy); [clusid,clusters]=kmeans(concat,numlongdist);
832
+ cluslongscr=zeros(numclus,2);
833
+ for i=1:length(clusters), for j=1:length(newx), if sqrt((clusters(i,1)-newx(j))^2+(clusters(i,2)-newy(j))^2) <= sepdist; cluslongscr(i,1)=i; cluslongscr(i,2)=cluslongscr(i,2)+1; end end
834
+ end
835
+ cluslonsrt=sortrows(cluslongscr,-2);
836
+ plplots=zeros(actlongdist,2);
837
+ for i=1:actlongdist, plplots(i,1)=clusters(cluslonsrt(i),1); plplots(i,2)=clusters(cluslonsrt(i),2);
838
+ end
839
+ plrep=zeros(100,2);
840
+ longdistmat=zeros(actlongdist,2);
841
+ currentlong=1;
842
+ plrep(1,1)=plplots(1,1);
843
+ plrep(1,2)=plplots(1,2);
844
+
845
+ ```matlab
846
+ plrep(1,3) = 1;
847
+ for j=1:actlongdist,
848
+ longdistmat(j,1)=j;
849
+ longdistmat(j,2)=sqrt((plrep(1,1)-plplots(j,1))^2+(plrep(1,2)-plplots(j,2))^2);
850
+ end
851
+ ldsort=sortrows(longdistmat,2);
852
+ tar(1,1)=plplots(ldsort(2,1),1);
853
+ tar(1,2)=plplots(ldsort(2,1),2);
854
+ prevtarget=ldsort(2,1);
855
+ d2tar=longdistmat(ldsort(2,1),2);
856
+ if ((plrep(1,1)≥tar(1,1)) && (plrep(1,2)≤tar(1,2)))
857
+ angletotarget=acos((tar(1,1)-plrep(1,1))/d2tar);
858
+ elseif ((plrep(1,1)≥tar(1,1)) && (plrep(1,2)≥tar(1,2)))
859
+ angletotarget=2*pi-acos((tar(1,1)-plrep(1,1))/d2tar);
860
+ elseif ((plrep(1,1)≤tar(1,1)) && (plrep(1,2)≥tar(1,2)))
861
+ angletotarget=2*pi-acos((tar(1,1)-plrep(1,1))/d2tar);
862
+ else
863
+ angletotarget=acos((tar(1,1)-plrep(1,1))/d2tar);
864
+ end
865
+ while d2tar>plsepdist,
866
+ currentlong=currentlong+1;
867
+ plrep(currentlong,1)=plrep(currentlong-1,1)+plsepdist*cos(angelotarget);
868
+ plrep(currentlong,2)=plrep(currentlong-1,2)+plsepdist*sin(angelotarget);
869
+ plrep(currentlong,3)=1;
870
+ d2tar=sqrt((plrep(currentlong,1)-tar(1,1))^2+(plrep(currentlong,2)-tar(1,2))^2);
871
+ end
872
+ for i=1:actlongdist,
873
+ newzones(i,1)=plplots(i,1);
874
+ newzones(i,2)=plplots(i,2);
875
+ end
876
+ newzones(prevmatrix[i])=[[];
877
+ newzones(i,:)=[[];
878
+ counter=1;
879
+ while counter<actlongdist-1;
880
+ counter=counter+1;
881
+ longdistmat=zeros(actlongdist-counter,2);
882
+ for i=1:actlongdist-counter,
883
+ longdistmat(i,1)=i;
884
+ longdistmat(i,2)=sqrt((plrep(currentlong,1)-newzones(i,1))^2+(plrep(currentlong,2)
885
+ end
886
+ ldsort=sqrtrows(longdistmat,2);
887
+ prevtarget=ldsort(1,1);
888
+ tar(1,1)=newzones(prevmatrix,i);
889
+ tar(1,2)=newzones(prevmatrix,i);
890
+ ```
+
+ ```matlab
+     d2tar=longdistmat(prevtarget,2);
+     if ((plrep(currentlong,1)>=tar(1,1)) && (plrep(currentlong,2)<=tar(1,2)))
+         angletotarget=acos((tar(1,1)-plrep(currentlong,1))/d2tar);
+     elseif ((plrep(currentlong,1)>=tar(1,1)) && (plrep(currentlong,2)>=tar(1,2)))
+         angletotarget=2*pi-acos((tar(1,1)-plrep(currentlong,1))/d2tar);
+     elseif ((plrep(currentlong,1)<=tar(1,1)) && (plrep(currentlong,2)>=tar(1,2)))
+         angletotarget=2*pi-acos((tar(1,1)-plrep(currentlong,1))/d2tar);
+     else
+         angletotarget=acos((tar(1,1)-plrep(currentlong,1))/d2tar);
+     end
+     while d2tar>plsepdist,
+         currentlong=currentlong+1;
+         plrep(currentlong,1)=plrep(currentlong-1,1)+plsepdist*cos(angletotarget);
+         plrep(currentlong,2)=plrep(currentlong-1,2)+plsepdist*sin(angletotarget);
+         plrep(currentlong,3)=1;
+         d2tar=sqrt((plrep(currentlong,1)-tar(1,1))^2+(plrep(currentlong,2)-tar(1,2))^2);
+     end
+     newzones(prevtarget,:)=[];
+ end
+ noplrep=currentlong;
+ openchannels=openchannels+numberoflongdistancelines*perline;
+ deficit=max(connected-openchannels,0);
+ indexnum=0;
+ indexnum2=0;
+ channelnumber=numberoflongdistancelines;
+ plcolor(1:currentlong,1)=51;
+ plcolor(1:currentlong,2)=51;
+ plcolor(1:currentlong,3)=255;
+ pluncovered=horzcat(newx,newy);
+ numuncovered=length(newx);
+ while deficit>0
+     channelnumber=channelnumber+1;
+     currentlong=currentlong+1;
+     indexnum=indexnum+1;
+     if indexnum==actlongdist
+         indexnum=1;
+         concat=zeros(numuncovered,2);
+         for i=1:numuncovered,
+             concat(i,1)=pluncovered(i,1);
+             concat(i,2)=pluncovered(i,2);
+         end
+         [clusid,clusters]=kmeans(pluncovered,numlongdist);
+ ```
+
+ ```matlab
+         cluslongscr=zeros(numlongdist,2);
+         for i=1:length(clusters),
+             for j=1:length(pluncovered)
+                 if sqrt((clusters(i,1)-pluncovered(j,1))^2+(clusters(i,2)-pluncovered(j,2))^2)<=sepdist
+                     cluslongscr(i,1)=i;
+                     cluslongscr(i,2)=cluslongscr(i,2)+1;
+                 end
+             end
+         end
+         cluslongsr=sortrows(cluslongscr,-2);
+         plplots=zeros(actlongdist,2);
+         for i=1:actlongdist,
+             plplots(i,1)=clusters(cluslongsr(i),1);
+             plplots(i,2)=clusters(cluslongsr(i),2);
+         end
+     end
+     plrep(currentlong,1)=plplots(indexnum,1);
+     plrep(currentlong,2)=plplots(indexnum,2);
+     plrep(currentlong,3)=channelnumber;
+     index1=1;
+     for i=1:numuncovered,
+         % keep only users still outside the new repeater's coverage
+         if sqrt((plrep(currentlong,1)-pluncovered(i,1))^2+(plrep(currentlong,2)-pluncovered(i,2))^2)>sepdist
+             pluncovered(index1,1)=pluncovered(i,1);
+             pluncovered(index1,2)=pluncovered(i,2);
+             index1=index1+1;
+         else
+             numuncovered=numuncovered-1;
+         end
+     end
+     plcolor(currentlong,1)=255-indexnum2*20;
+     plcolor(currentlong,2)=40+indexnum2*20;
+     plcolor(currentlong,3)=0;
+     noplrep=noplrep+1;
+     openchannels=openchannels+perline;
+     deficit=max(connected-openchannels,0);
+     indexnum2=indexnum2+1;
+ end
+ plrepplot=zeros(currentlong,2);
+ for i=1:currentlong,
+     plrepplot(i,1)=plrep(i,1);
+     plrepplot(i,2)=plrep(i,2);
+ end
+ ```
+
+ ```matlab
+ ylabel('Miles');
+ title('Random User Data');
+ hold on
+ plot(xplot,yplot,'.','MarkerFaceColor','none','MarkerEdgeColor',[1 0 0],'MarkerSize',20);
+ hold on
+ %{
+ plot(plplots(:,1),plplots(:,2),'o','MarkerFaceColor','g','MarkerEdgeColor','g','MarkerSize',n);
+ %}
+ for i=1:noplrep,
+     plot(plrepplot(i,1),plrepplot(i,2),'.','MarkerFaceColor','g','MarkerEdgeColor','g','MarkerSize',n);
+     hold on
+ end
+ for i=1:repno,
+     h=rep(i,1); k=rep(i,2); r=10; N=256; t=(0:N)*2*pi/N;
+     plot(r*cos(t)+h,r*sin(t)+k,'Color','r','LineStyle','-');
+     hold on
+ end
+ for i=1:repno,
+     h=rep(i,1); k=rep(i,2); r=10; N=256; t=(0:N)*2*pi/N;
+     plot(r*cos(t)+h,r*sin(t)+k,'Color',[43/255 129/255 86/255],'LineStyle',':');
+     hold on
+ end
+ for i=1:noplrep,
+     h=plrep(i,1); k=plrep(i,2); r=15; N=256; t=(0:N)*2*pi/N;
+     plot(r*cos(t)+h,r*sin(t)+k,'Color',[plcolor(i,1)/255 plcolor(i,2)/255 plcolor(i,3)/255]);
+     hold on
+ end
+ for i=1:currentlong,
+     if i==7, text(plrep(i,1)-2, plrep(i,2), num2str(plrep(i,3)));
+     elseif i==8, text(plrep(i,1)+.5, plrep(i,2), num2str(plrep(i,3)));
+     elseif i==12,
+     else text(plrep(i,1)+.5, plrep(i,2), num2str(plrep(i,3)));
+     end
+ end
+ %
+ %{
+ h=plot(clusters(:,1),clusters(:,2),'v',...
+     'MarkerSize',10,'LineWidth',2,'MarkerEdgeColor','k');
+ axis([-40 40 -40 40]);
+ set(h,'Color',[.8 .8 .8]);
+ get(h);
+ %}
+ axis([-40 40 -40 40]);
+ axis('square')
+ hold off
+ disp('number of repeaters'),disp(repno);
+ disp('number of PL repeaters'),disp(noplrep);
+ disp('number of channels'),disp(openchannels);
+ ```
MCM/2011/B/12114/12114.md ADDED
@@ -0,0 +1,582 @@
1
+ # VHF REPEATER PLACEMENT
2
+
3
+ Control # 12114
4
+
5
+ February 14, 2011
6
+
7
+ # Summary Statement
8
+
9
+ Our task is to develop a model that determines how to minimally and adequately place VHF repeaters so that low-power users, such as mobile stations, can communicate with one another in situations where direct user-to-user contact would not be possible. We assume that we are given a population of users within a circular region of radius 40 miles, and we attempt to find the minimal number of repeaters necessary to provide adequate coverage while minimizing the number of radio bands in use and keeping interference negligible. Although we assume that there must be a direct line of sight between a repeater and a user, our model allows for terrains of complex geometry including mountains and valleys. Throughout most of our analysis, we assume that repeater antennas are 18 feet above the ground, which corresponds to a coverage radius of 10 miles in a perfectly flat region.
10
+
11
+ Initially, we employ a greedy algorithm to successively place repeaters in the locations that provide coverage to the most additional users. In subsequent incarnations of the model, we consider goals other than maximal coverage in selecting the optimal positions. To assign frequency channels to these repeaters we form a graph whose vertices represent the repeaters, and we consider vertices to be adjacent if the repeaters are sufficiently close to one another. This graph is then colored greedily.
12
+
13
+ To test our model, we generate population distributions using both uniformly random placement of users and a preferential attachment model that more closely matches real-world population distributions. We find that, using realistic population distributions, we are able to provide adequate coverage to 1000 simultaneous users with about 15 repeaters and to 10000 users with about 20 repeaters; however, by relaxing these conditions and requiring that only $95\%$ of users be covered, we can lower these numbers to 10 and 15 respectively. We also show that 21 repeaters are always enough to provide coverage to a region with a radius of 40 miles.
14
+
15
+ # Contents
16
+
17
+ # 1 Background
+
+ 1.1 Very High Frequency (VHF)
+ 1.2 Repeaters
+
+ # 2 Problem Interpretation
+
+ # 3 Assumptions
+
+ # 4 Circle Covering
+
+ 4.1 Radio Wave Propagation
+
+ # 5 Population Distribution
+
+ 5.1 Uniform Algorithm
+ 5.2 Preferential Attachment Algorithm
+
+ # 6 Our Approach
+
+ 6.1 Basic Coverage
+ 6.1.1 Algorithmic Runtime
+
+ 6.2 Additional Layers: Maximizing Utility
+
+ 6.2.1 Basic Coverage as a Special Case
+ 6.2.2 Runtime Analysis and Optimizations
+ 6.2.3 Additional Notes
+
+ 6.3 Frequency Selection
+
+ # 7 Results
+
+ 7.1 Basic Model
+ 7.2 Mountains and Complex Geometries
+
+ # 8 Conclusions
+
+ # 9 Future Considerations
+
+ 9.1 Changes to the Model
+
+ 9.1.1 Ad-hoc Networks and Game Theory
+ 9.1.2 One-to-One Communication
+ 9.1.3 Non-static and Introduced Users
+
+ 9.2 Model Correctness
+ 9.2.1 Region Topography
+ 9.3 Real World Data
+ 9.4 Theoretical Issues
+ 9.4.1 Algorithmic Improvement
68
+
69
+ # 1 Background
70
+
71
+ # 1.1 Very High Frequency (VHF)
72
+
73
+ In most countries, the VHF frequencies 144-146 MHz are allocated for amateur radio. In the Americas, this band is 144-148 MHz. These frequencies are commonly referred to as the 2-meter amateur radio band. The transmissions are local and generally require more compact equipment than operating on other frequencies does. Because of their reliability, these frequencies are also used for communicating emergencies via mobile and hand-held transmission devices. Because they provide back-up emergency communication, transmitting on these frequencies usually requires a license. Transmission time may be limited to a set interval, such as 30 seconds, and many operators send courtesy tones to remind people to leave gaps between transmissions; these breaks accommodate accident or emergency reporting as well as prevent one person from hogging the frequency.
74
+
75
+ To prevent interference and to control transmissions, government agencies manage frequency allocations. In the United States, the Federal Communications Commission "is charged with regulating interstate and international communications by radio, television, wire, satellite and cable."[4] Originally, the spacing between 2-meter channels was $30\mathrm{kHz}$ . When the bands became overloaded with users, regions split in their spacing method: some divided the stations in half so that the spacing between them became $15\mathrm{kHz}$ , while others adopted a $20\mathrm{kHz}$ spacing. When transmitters are separated by only $15\mathrm{kHz}$ , there can still be interference; with a spacing of $20\mathrm{kHz}$ , there is virtually no interference [2].
76
+
77
+ # 1.2 Repeaters
78
+
79
+ In order to extend the distance a transmission can reach, repeaters receive input on a prescribed frequency and output the amplified signal at the original frequency $\pm 600\mathrm{kHz}$ . This pair of frequencies for a given repeater is commonly referred to as a channel. Over an antenna, repeaters transmit the signal usually as far as the observable horizon. For mobile and hand-held devices, this line-of-sight provides a very limited range of only a few miles. Due to the fact that repeaters are typically placed on high locations such as mountains, the observable horizon of the repeater spans much further than it would otherwise. Tropospheric bending extends this Radio Horizon even more from
80
+
81
+ $$
82
+ r \approx 1.23\sqrt{h}
83
+ $$
84
+
85
+ to
86
+
87
+ $$
88
+ r \approx \sqrt{2h} \approx 1.41\sqrt{h},
89
+ $$
90
+
91
+ where $r$ is the radius or distance from the repeater antenna to the Radio Horizon and $h$ is the height of the antenna in feet [9]. Note that at the edges of the coverage of a repeater - the fringe - the signal becomes weak and may drop out. Repeaters can also broadcast their received signal over the internet through applications such as EchoLink, which allows the transmissions to reach across the globe.
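The two horizon formulas above are easy to evaluate numerically; a minimal sketch (the 100-foot antenna height is just an illustrative value, not one taken from the paper):

```python
import math

def visible_horizon_miles(h_feet):
    """Distance to the visible horizon, r ≈ 1.23·sqrt(h)."""
    return 1.23 * math.sqrt(h_feet)

def radio_horizon_miles(h_feet):
    """Distance to the Radio Horizon with tropospheric bending, r ≈ sqrt(2h)."""
    return math.sqrt(2 * h_feet)

h = 100  # antenna height in feet (illustrative)
print(visible_horizon_miles(h))  # 12.3 miles
print(radio_horizon_miles(h))    # ≈ 14.14 miles
```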
94
+
95
+ As listed on the National Association for Amateur Radio's website, the band plan for 144-148MHz specifies the following table<sup>1</sup>:
96
+
97
+ <table><tr><td>FREQUENCY (MHz)</td><td>ALLOCATED</td></tr><tr><td>146.01-146.37</td><td>Repeater inputs</td></tr><tr><td>146.40-146.58</td><td>Simplex†</td></tr><tr><td>146.52</td><td>National Simplex Calling Frequency</td></tr><tr><td>146.61-146.97</td><td>Repeater outputs</td></tr><tr><td>147.00-147.39</td><td>Repeater outputs</td></tr><tr><td>147.42-147.57</td><td>Simplex</td></tr><tr><td>147.60-147.99</td><td>Repeater inputs</td></tr></table>
98
+
99
+ # UK Repeater Frequencies
100
+
101
+ <table><tr><td>CHANNEL</td><td>OUTPUT</td><td>#</td></tr><tr><td>RV48</td><td>145.6000</td><td>10</td></tr><tr><td>RV49</td><td>145.6125</td><td>5</td></tr><tr><td>RV50</td><td>145.6250</td><td>10</td></tr><tr><td>RV51</td><td>145.6375</td><td>4</td></tr><tr><td>RV52</td><td>145.6500</td><td>8</td></tr><tr><td>RV53</td><td>145.6625</td><td>8</td></tr><tr><td>RV54</td><td>145.6750</td><td>7</td></tr><tr><td>RV55</td><td>145.6875</td><td>7</td></tr><tr><td>RV56</td><td>145.7000</td><td>10</td></tr><tr><td>RV57</td><td>145.7125</td><td>6</td></tr><tr><td>RV58</td><td>145.7250</td><td>11</td></tr><tr><td>RV59</td><td>145.7375</td><td>6</td></tr><tr><td>RV60</td><td>145.7500</td><td>9</td></tr><tr><td>RV61</td><td>145.7625</td><td>5</td></tr><tr><td>RV62</td><td>145.7750</td><td>13</td></tr><tr><td>RV63</td><td>145.7875</td><td>8</td></tr></table>
102
+
103
+ Figure 1: The number of repeaters which output at given frequencies in the UK.[10]
104
+
105
+ As an example of actual repeater frequencies, we look at those in the UK.
106
+
107
+ The UK Amateur Radio Repeater Resource website lists all of the "existing and licensed analogue and dual-mode 145MHz amateur repeaters" in the UK. Of the 127 operational repeaters listed, there are only 15 distinct frequencies used. For a given frequency, the regions reached by the repeaters do not overlap and some repeaters may even have the same PL code; however, these regions are a significant distance apart ( $\sim$ 200 miles). Furthermore, the frequencies used for repeaters are in a contiguous band. [Note that for context, the UK is 94,060 square miles, whereas our region is approximately 5,027 square miles.]
108
+
109
+ # 2 Problem Interpretation
110
+
111
+ Given a population of users within a circular region of radius 40 miles, we attempt to minimally and adequately place repeaters to assist with communication via VHF transmitters. Because VHF frequencies can be in high demand, we also minimize the number of frequencies used by our repeaters and analyze the benefit of using PL codes. We look at the cases of 1,000 and 10,000 simultaneous users with two types of population distributions: uniformly random and preferentially attached. The preferential distribution simulates the tendency of users to be found in clusters. In this way, we look at different types of regions, encapsulating sparse rural areas, towns, suburbs, and cities.
112
+
113
+ Because signals travel primarily by line-of-sight, we also analyze optimal placement of the repeaters when mountains are present in the given terrain and the effect this has on the number of the repeaters. Further, we discuss the effects of various attenuation factors on the signal.
114
+
115
+ # 3 Assumptions
116
+
117
+ There are three instances possible for when we would need to lay repeaters:
118
+
119
+ 1. People reside within the region and no repeaters exist.
120
+ 2. People reside within the region and inadequate repeaters exist.
121
+ 3. The distributions of people and repeaters are unknown or not present.
122
+
123
+ In the third case, in order to adequately and minimally cover all of the simultaneous users, we must cover the entire region. See Circle Covering (Section 4) for repeater placement.
124
+
125
+ In the following analysis, we assume the first case, which could also easily be extended to the second case with our model by laying down a framework of already-existing repeaters and checking which users are already covered.
126
+
127
+ # Repeaters
128
+
129
+ Usually repeaters are placed on high buildings to maximize their coverage; however, with a strong repeater and a reasonably tall building, this range could easily be more than 40 miles. If we used such repeaters, the minimal repeater placement would be directly in the center of the region.
130
+
131
+ To avoid looking at such a trivial case, we assume an antenna height of 18 feet and consider the range within 10 miles of the repeater, a range which does not impinge on the "fringe" area and hence allows us to assume that all people within this circle are adequately covered. We discard the "fringe" range as unsatisfactory because it is unreliable and signals may drop. Though the 18-foot height and 10-mile range are largely arbitrary, any smaller range would be unrealistically small and any larger one converges to the trivial case.
134
+
135
+ We also assume that any person within the specified range of a repeater can communicate with any other user who is within range of a (possibly distinct) repeater through an application such as EchoLink.
136
+
137
+ Also, we assume that repeaters fail independently.
138
+
139
+ # Discretized Space
140
+
141
+ For ease of modeling, we assume a grid on which we place people and repeaters and perform calculations. We generally use a fineness of $1/4$ mile for this grid. We also ensure that the people, repeaters, and calculations are within the circle centered at the center of the grid.
142
+
143
+ When placing repeaters, we assume that the circular coverage never escapes the surrounding grid boundary.
144
+
145
+ # Usage Patterns
146
+
147
+ We assume that one circle covering any number of people is sufficient coverage. We address this issue further with utility (Section 6.2).
148
+
149
+ # Obstructions
150
+
151
+ In our basic model, we assume that reflections, diffraction, and obstacles such as buildings have a negligibly small effect on the range of the repeater, which generally holds true in practice. We do discuss obstructions such as forests and consider the effect that mountains have on the line-of-sight, including the effects of diffraction (Sections 4.1, 7.2).
152
+
153
+ # Interference
154
+
155
+ Interference can occur if two repeaters with the same frequency have overlapping coverages, even if their PL codes differ. We keep this as an assumption in our model, assign the same frequency only to repeaters that are more than 25 miles apart, and avoid assigning adjacent channels. Because we want to minimize the number of frequencies used, as mentioned in Section 2, we also attempt to reuse frequencies in our model and to keep these frequencies in a block as in Figure 1. Also, we assume that, given a sufficiently small number of required frequencies, it is trivial to assign frequencies to the channels without collision, thanks to the $600\mathrm{kHz}$ shift between repeater input and output.
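The channel-assignment step sketched in the Summary — build a graph whose vertices are repeaters, join repeaters close enough to interfere, and color it greedily — can be illustrated as follows (the four-repeater adjacency list is a made-up example):

```python
def greedy_color(adj):
    """Greedy coloring of an interference graph.

    Vertices are repeaters; an edge joins two repeaters that are too close
    to share a frequency. Each vertex gets the smallest color not used by
    an already-colored neighbor; colors map to frequency channels.
    """
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Hypothetical example: repeaters 0-1 and 1-2 interfere, 3 is isolated.
adj = {0: [1], 1: [0, 2], 2: [1], 3: []}
print(greedy_color(adj))  # {0: 0, 1: 1, 2: 0, 3: 0}
```

Two colors suffice here, so only two frequency channels would be needed for these four repeaters.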
156
+
157
+ # 4 Circle Covering
158
+
159
+ One method of ensuring all users have coverage is to cover the entire region. We first look analytically at how to minimally cover the entire region with repeaters.
160
+
161
+ Let us assume that each repeater has a circular range with radius 10. Here we analyze the number of repeaters, $n$ , needed to cover the region of radius 40.
162
+
163
+ # Simple lower bound
164
+
165
+ We first find a lower bound on the number of repeaters needed to achieve a full covering. Using the fact that a circle with radius $r$ has area $\pi r^2$ , we note that the region to cover has area
166
+
167
+ $$
168
+ A_{region} = \pi (40)^2 = 1600\pi
169
+ $$
170
+
171
+ and each repeater has a coverage of area
172
+
173
+ $$
174
+ A_{repeater} = \pi (10)^2 = 100\pi .
175
+ $$
176
+
177
+ Because
178
+
179
+ $$
180
+ \frac{A_{region}}{A_{repeater}} = 16,
181
+ $$
182
+
183
+ we see that the area covered by 16 repeaters is equivalent to the area we intend to cover.
184
+
185
+ This does not, however, imply that we only need 16 repeaters, because placing the circles would require overlap of their coverages.
188
+
189
+ Hence, we find for a lower bound
190
+
191
+ $$
192
+ 16 \leq n.
193
+ $$
194
+
195
+ ![](images/5d338136861bf36c0ae388071115017532cc4fc01ad30fbc98538daa89b60dc8.jpg)
196
+ Figure 2: Circles of radii 40 and 10.
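The arithmetic behind this lower bound can be checked in a few lines:

```python
import math

R_region, r_repeater = 40, 10
area_region = math.pi * R_region**2      # 1600*pi
area_repeater = math.pi * r_repeater**2  # 100*pi

# A full covering needs at least ceil(A_region / A_repeater) repeaters;
# the pi factors cancel, so the ratio is (40/10)^2 = 16.
lower_bound = math.ceil((R_region / r_repeater) ** 2)
print(lower_bound)  # 16
```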
197
+
198
+ # Hexagonal approximation upper bound
199
+
200
+ To get an upper bound on the number of repeaters needed, we look to hexagonal packing. The problem we are solving is closely related to the Disk Covering Problem, where one finds the smallest radius $r$ for which $n$ disks of radius $r$ can cover the unit disk. This is an unsolved, $\mathcal{NP}$ -complete problem with analytic answers for small $n$ and approximations up to $n = 10$ [5]; however, we are looking at $n \geq 16$ . Our problem is also related to the Circle Packing Problem, where circles are packed into a region without overlap. Starting from a packing of circles with radius smaller than the repeater coverage, we can expand the radii to obtain overlapping circles that cover the region. On the euclidean plane, hexagonal packing provides the optimal lattice solution for circle packing [6]; however, when the boundary is finite, this is not usually the case. Because our problem remains unsolved, we look to hexagonal packing for an approximation by placing a hexagonal lattice of smaller circles and extending their boundaries.
201
+
202
+ Because we can find a hexagonal scheme which uses 21 circles of radius 10 to cover our region of radius 40 (Figure 3), we know that at most we will need 21 repeaters. Hence, our new bounds on $n$ are
203
+
204
+ $$
205
+ 16 \leq n \leq 21.
206
+ $$
207
+
208
+ ![](images/63d68298fd7b26f280626c1c04d2c3883a24a8523e48ef06c8b5bb4d0e3c6702.jpg)
209
+ Figure 3: Upper bound of 21 derived from hexagonal packing.
210
+
211
+ Thus, optimally we can hope to obtain results from our model which require at least 16 and at most 21 repeaters.
212
+
213
+ # 4.1 Radio Wave Propagation
214
+
215
+ In free space, the power density of all electromagnetic radiation follows the inverse square law $\rho \propto \frac{1}{r^2}$ , so that a doubling of distance results in one-fourth the power density. However, propagation of radio waves near the surface of the earth can be significantly more complicated. Lower-frequency radio waves often propagate by sticking to the surface of the earth, higher-frequency radio waves need a direct line of sight, and radio waves of certain middle-range frequencies can propagate by refraction through the atmosphere. We consider only VHF radio waves, which propagate mostly by direct line of sight; however, refraction, reflection, and diffraction also have an effect.
216
+
217
+ # Refraction
218
+
219
+ Radio waves in the HF range can travel very great distances through a process called skywave propagation. In this process, waves are bent downward by refraction in the ionosphere, as if they were reflected off a mirror. When the waves reach the earth, they may reflect upwards again. This process may occur several times, allowing the wave to travel across continents. While refraction of VHF waves is not strong enough to induce skywave propagation, it still slightly bends the path of the waves back towards the earth. The result is that the waves are able to propagate farther than if they were traveling in a straight line. Without the refractive effect, VHF waves would only be able to travel as far as the visible horizon before being absorbed into the Earth. However, this bending allows the waves to propagate beyond the visible horizon to something often called the Radio Horizon. The distance in miles to the Radio Horizon is approximately $\sqrt{2A}$ where $A$ is the antenna height in feet [3].
220
+
221
+ ![](images/264a3277321b89a91d4c05f39459dcfe76d3f02d6466708ca46bdb0d37b66f85.jpg)
222
+ Figure 4: Signal range extends due to tropospheric interference.
223
+
224
+ # Reflection
225
+
226
+ Ground plane reflections create a second path for the radio wave. When this wave recombines with the direct line-of-sight wave, there can be either constructive or destructive interference. Ordinarily, however, the ground plane reflection reduces the strength of the signal considerably, effectively changing the power density relationship to an inverse fourth-power law $\rho \propto \frac{1}{r^4}$ rather than the inverse square law [3].
227
+
228
+ ![](images/772bbf56c009da00cd7bcc3221dc5df601402512263600c1110a5daf53c924f8.jpg)
229
+ Figure 5: Reflections of the signal can interfere or strengthen the direct wave.
230
+
231
+ # Diffraction
232
+
233
+ When VHF radio waves encounter large obstacles such as mountains, the wave is generally absorbed or reflected by the object. In some cases, however, if the radio wave propagates near the top of the object, diffraction can redirect the wave towards the target. In this way, two radios can communicate even when their line of sight is blocked, though the signal strength is reduced by the diffraction process. In other cases, when the path goes directly above an object, diffraction can attenuate the signal even though there is a direct line of sight between the two antennas. In particular, obstructions inside the Fresnel zone contribute to interference by diffraction.
234
+
235
+ ![](images/2dea897fbbb3048a38da07dde47cd361d84b333e5751848610ebbf22f3a873aa.jpg)
236
+ Figure 6: Diffraction over surfaces can cause indirect paths to connect antennas.
237
+
238
+ # 5 Population Distribution
239
+
240
+ In order to acquire test data on which to execute our model, we need to generate population distributions to recreate user distribution scenarios similar to those that could be expected in real life. We consider two different algorithms to generate reasonable distributions within a circle with a 40-mile radius.
241
+
242
+ # 5.1 Uniform Algorithm
243
+
244
+ For each inhabitant, the uniform algorithm naively selects a point within the circle uniformly at random as their place of residence. This covers the circle reasonably evenly, without yielding the unnatural regularity attained by placing the points with equal spacing. As we will see in Section 7, this distribution is oftentimes the hardest to deal with as there are few patterns or clusters to be taken advantage of.
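The paper does not spell out how a uniform point in the disk is drawn; one standard way (note the square root on the radius, which keeps the density uniform in area rather than clustering points near the center) is:

```python
import math
import random

def uniform_point_in_disk(R=40.0):
    """Sample a point uniformly at random from a disk of radius R.

    Taking r = R*sqrt(u) rather than R*u compensates for the fact that
    the annulus at radius r has circumference proportional to r.
    """
    theta = random.uniform(0, 2 * math.pi)
    r = R * math.sqrt(random.random())
    return (r * math.cos(theta), r * math.sin(theta))

users = [uniform_point_in_disk() for _ in range(1000)]
print(len(users))  # 1000
```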
245
+
246
+ ![](images/c12719f8f00db586a57ebba9efb0e28bccf3872fba17f0d354feeab3d78ecb95.jpg)
247
+ (a) Uniform
248
+ Figure 7: The two types of considered user distributions with 1000 users.
249
+
250
+ ![](images/c0b7d09353ca6b4b1ba298d3764d60b0392de382af620966456bc14f3377b6f3.jpg)
251
+ (b) Preferential
252
+
253
+ # 5.2 Preferential Attachment Algorithm
254
+
255
+ An alternative algorithm, based on the preferential attachment model suggested by Barabási and Albert [1], distributes the users in a slightly more organized way. Initially, some small number of users are distributed uniformly at random throughout the region to act as a seed for the algorithm. Afterwards, each new user either selects a location uniformly at random with some small fixed probability or selects an existing user uniformly at random and locates themselves within a small distance of that user.
256
+
257
+ This method ends up forming a population distribution approximating a preferential attachment network, with clusters of users benefiting from the rich-get-richer effect in bringing in additional users. This is a highly desirable quality for our model, as population distributions tend to roughly follow the scale-free design formed by preferential attachment [7].
258
+
259
+ For clarity, pseudocode for this algorithm is given in Program 1.
260
+
261
+ # 6 Our Approach
262
+
263
+ Here we begin discussing the core of our model and algorithms.
264
+
265
+ # 6.1 Basic Coverage
266
+
267
+ Two similar goals can be sought in trying to attain basic coverage. We would like to be able to both:
268
+
269
+ # Program 1 Preferential Attachment Algorithm
270
+
271
+ init_users is the number of initial users distributed uniformly at random
272
+ total_users is the number of total users
273
+ $r$ is the maximum radius between two “attached” users
274
+ connect_chance is the chance of a new user being attached to a random vertex
275
+ greedy_color refers to the common greedy coloring algorithm
276
+
277
+ # Function preferential_attachment
278
+
279
+ Let pop be an empty list of users
+
+ ```txt
+ /*Place the initial set of users*/
+ loop init_users times:
+     append to pop a user located uniformly at random in the region of interest
+
+ /*Place the remaining users preferentially*/
+ loop (total_users - init_users) times:
+     with probability connect_chance:
+         append to pop a user located uniformly at random in the region of interest
+     otherwise:
+         choose an existing user u from pop uniformly at random
+         append to pop a user located uniformly at random within distance r of u
+
+ return pop
+ ```
+
+ The following fragment, used for frequency selection, maps a greedy coloring of the interference graph $G$ to concrete frequencies:
+
+ ```txt
+ /*Obtain a coloring of the generated graph*/
+ coloring = greedy_color(G)
+
+ for each vertex v in G:
+     freqmap[v] = frequency_set[coloring[v]]
+
+ return freqmap
+ ```
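Program 1 translates directly into code; a sketch in Python, where the default parameter values are our own assumptions rather than the paper's:

```python
import math
import random

def preferential_attachment(total_users, init_users=10, r=1.0,
                            connect_chance=0.05, R=40.0):
    """Seed users uniformly, then attach most new users near an existing
    one, producing rich-get-richer clusters. Parameter names follow
    Program 1; the default values are assumed, not the paper's."""
    def uniform_point():
        theta = random.uniform(0, 2 * math.pi)
        rad = R * math.sqrt(random.random())
        return (rad * math.cos(theta), rad * math.sin(theta))

    pop = [uniform_point() for _ in range(init_users)]
    while len(pop) < total_users:
        if random.random() < connect_chance:
            pop.append(uniform_point())
        else:
            x, y = random.choice(pop)
            theta = random.uniform(0, 2 * math.pi)
            d = r * math.sqrt(random.random())
            px, py = x + d * math.cos(theta), y + d * math.sin(theta)
            if math.hypot(px, py) <= R:  # keep new users inside the region
                pop.append((px, py))
    return pop

users = preferential_attachment(1000)
print(len(users))  # 1000
```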
294
+
295
+ 1. Minimize the number of repeaters necessary to cover some fraction $p$ of the population.
296
+ 2. Maximize the number of people covered by some fixed number, $n$ , of repeaters.
297
+
298
+ Since the decision problems corresponding to the above two goals are both known to be $\mathcal{NP}$ -hard [11], we employ a greedy algorithm to try approximating the solution by repeatedly taking the locally-optimal move (that is, repeatedly covering users with maximally-covering circles). This greedy approach was shown to be a 2-approximation of the optimal solution in a more general setting, but has been demonstrated to approximate the optimal solution rather well in simulations [11]. Interestingly, the monotonic reasoning of the greedy algorithm allows us to simultaneously calculate our success at attaining the two goals mentioned above: while greedily computing the number of repeaters needed to cover a certain fraction of the population, we can record the total coverage at each intermediate step in order to compute the effectiveness of greedily placing $n$ repeaters.
299
+
300
Because this method is a specific case of the utility method introduced in Section 6.2, we delay the introduction of pseudocode for the time being. A more thorough analysis of the code's design and implications will be presented with the pseudocode.
301
+
302
+ # 6.1.1 Algorithmic Runtime
303
+
304
+ Letting $k$ denote the maximum number of circles of radius $r$ needed to cover the 40-mile-radius circle, the 2-approximation factor discussed in the previous section implies that the greedy algorithm will terminate within at most $2k$ iterations. To find the optimal circle in each iteration, the body of the loop iterates over $\mathcal{O}(s^2)$ grid points (for $s$ representing the fineness of the grid), each of which requires $\mathcal{O}((rs)^2)$ calculations, leading to an overall runtime of $\mathcal{O}(r^2 s^4)$ . Because $k$ is $\mathcal{O}(r^{-2})$ , the overall runtime of this algorithm is
305
+
306
$$
\mathcal{O}(r^{-2} r^{2} s^{4}) = \mathcal{O}(s^{4}),
$$
309
+
310
+ or quartic in the fineness of the grid.
311
+
312
+ # 6.2 Additional Layers: Maximizing Utility
313
+
314
+ Although the algorithm described in Section 6.1 can successfully cover any given percentage of users, it often employs a number of repeaters which cover only a relatively small fraction of the population. Instead, a better use of these repeaters might be in the establishment of secondary, backup networks from which the high-population districts can benefit.
315
+
316
The reasoning is as follows: if an average repeater is malfunctioning some fraction $p$ of the time, the expected loss of utility (ignoring regions of overlap) is $pn$, where $n$ is the number of people covered by the repeater's signal. Thus, placing a repeater to eliminate most of this downtime in a high-population region might be more beneficial than putting one somewhere that covers a small number of users. If we assume that the probabilities of any two repeaters experiencing faults are independent, the probability of all repeaters in an area covered by $q$ repeaters being down simultaneously is $p^q$, for an expected loss of $np^q$. Alternatively, we can treat the first repeater covering a specific individual as providing utility 1, the next utility $p$, the next utility $p^2$, and so on, forming a geometric series that converges to $1/(1-p)$ as the supremum of attainable utilities. As this value forms a multiplicative constant for any chosen set of parameters, the fact that the maximum attainable utility per person is more than 1 does not affect the placement of repeaters in any scale-independent covering algorithm, including the greedy algorithm we employ.
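As a quick numeric sanity check of this geometric-series argument (the values of $p$, $q$, and $n$ below are illustrative):

```python
def marginal_utilities(p, q):
    """Utility contributed by the 1st..qth repeater covering the same user:
    the kth repeater matters only when the k-1 repeaters before it are all
    down, which happens with probability p**(k-1)."""
    return [p ** (k - 1) for k in range(1, q + 1)]

p, n = 0.2, 1000
utils = marginal_utilities(p, 30)
# Partial sums approach the supremum 1/(1-p) of attainable per-user utility.
supremum = 1.0 / (1.0 - p)  # = 1.25 for p = 0.2
assert abs(sum(utils) - (1 - p ** 30) / (1 - p)) < 1e-9
assert abs(sum(utils) - supremum) < 1e-6
# With q independent repeaters, all are down with probability p**q,
# for an expected loss of n * p**q users' worth of coverage.
assert abs(n * p ** 3 - 8.0) < 1e-9
```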
319
+
320
+ To clarify the workings of our algorithm, we present pseudocode describing its pre-optimization behavior in Program 2.
321
+
322
+ # 6.2.1 Basic Coverage as a Special Case
323
+
324
In addition to maximizing the total utility of the covering, the above pseudocode can actually be used to satisfy the two goals mentioned in Section 6.1. To do this, one simply sets $u$ to 0, reducing the utility-maximization problem to coverage maximization, which in turn solves both tasks outlined in Section 6.1. This minimizes code duplication and simplifies making later changes to the model.
325
+
326
+ # 6.2.2 Runtime Analysis and Optimizations
327
+
328
As the overall structure of the main loop's body has not changed significantly since the previous runtime analysis, the runtime of each iteration is again $\mathcal{O}(r^2 s^4)$. The number of iterations, however, is now also dependent on the rate of convergence of the utility to the desired amount, namely,
329
+
330
$$
k \sim \frac{\log_{u}(1 - W)}{r^{2}},
$$
333
+
334
+ meaning that the runtime of the algorithm as a whole is
335
+
336
$$
\mathcal{O}\left(s^{4} \log_{u}(1 - W)\right)
$$
339
+
340
+ whenever $u > 0$ and $\mathcal{O}(s^4)$ otherwise.
341
+
342
In order to deal successfully with this approximately-quartic runtime, various optimizations had to be made. First, instead of recalculating the populations of the various repeater regions of effect in each iteration, we can precompute each of these populations beforehand. This can be done efficiently (in $\mathcal{O}(rn^2)$ time) using standard dynamic programming approaches. With these precomputed values and a maintained sorted list of circle populations, we can reduce the $\mathcal{O}(r^2 s^4)$ portion of the algorithm to $\mathcal{O}(rs^2\log (rs))$ average-case runtime, drastically reducing the number of computations necessary. Further, although not implemented in our model, the search portion of this step can be sped up further by parallelizing the search over multiple cores, reducing the total runtime by a small amount.

# Program 2 Utility-Based Greedy Cover

$r$ is the effective radius of a repeater's signal
$u$ is the expected proportion of downtime of the repeaters
$W$ is the utility ratio at which the program is to stop

# Function utility_based_greedy_cover

```txt
covering = empty list
current_utility = 0

/*Precompute maximum possible utility using infinite geometric series formula*/
max_utility = population / (1 - u)

set the utility of all users to 1
loop until broken:
    greedy_value = -1
    best_point = (0, 0)
    for each grid point p:
        compute the total utility v of all users within r miles of p
        if v > greedy_value:
            best_point = p
            greedy_value = v

    /*Store result*/
    append best_point to covering

    for each user x within r miles of best_point:
        current_utility += x.utility
        x.utility *= u

    /*Check stopping condition*/
    if current_utility / max_utility >= W:
        break

return covering
```
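The population precomputation mentioned in Section 6.2.2 can be sketched with per-row prefix sums; the grid encoding below (a square array of per-cell population counts, with the radius measured in cells) is an assumption for illustration. Each of the $n^2$ centers touches $\mathcal{O}(r)$ rows and each row costs $\mathcal{O}(1)$ after the prefix pass, matching the $\mathcal{O}(rn^2)$ bound.

```python
def disc_population_sums(grid, radius):
    """Population inside the disc of the given radius (in cells) around every
    cell, via per-row prefix sums: an O(n^2) prefix pass, then O(r) rows of
    O(1) work per center."""
    n = len(grid)
    # prefix[i][j] = sum of grid[i][0..j-1]
    prefix = [[0] * (n + 1) for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prefix[i][j + 1] = prefix[i][j] + grid[i][j]
    out = [[0] * n for _ in range(n)]
    r2 = radius * radius
    for ci in range(n):
        for cj in range(n):
            total = 0
            for i in range(max(0, ci - radius), min(n, ci + radius + 1)):
                # Half-width of the disc on row i, clipped to the grid.
                w = int((r2 - (i - ci) ** 2) ** 0.5)
                lo, hi = max(0, cj - w), min(n - 1, cj + w)
                total += prefix[i][hi + 1] - prefix[i][lo]
            out[ci][cj] = total
    return out
```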
369
+
370
+ # 6.2.3 Additional Notes
371
+
372
+ This algorithm can also handle instances in which the search area surrounding the various potential repeaters is non-circular, although the various sequential optimizations (i.e. all those besides the parallelization) noted in 6.2.2 are eliminated, and the tight runtime is much more difficult to calculate (though it can easily be shown to be no worse than $\mathcal{O}\left(\frac{s^4\max(\mathbf{a})}{\min(\mathbf{a})}\right)$ where $\mathbf{a}$ is the array of areas of the various regions covered by different locations).
373
+
374
+ # 6.3 Frequency Selection
375
+
376
+ To assign frequencies to the various repeaters, we form a graph whose vertices represent the repeaters and whose edges represent locality between the two repeaters at its endpoints. After this graph is created, a reasonable vertex coloring is found (greedily or otherwise), with the different colors representing distinct frequencies of the repeaters. Pseudocode for this idea is given in Program 3. Note that because our model allows for only a small number of frequencies, we assume that assigning these colors to frequencies is trivial and so do not assign specific numbers.
377
+
378
+ Note that although we can simply define two repeaters to be adjacent if there's overlap in the regions covered by the pair, our current algorithm maintains an extra half-radius separating the regions to ensure minimal interference.
379
+
380
+ # 7 Results
381
+
382
+ This section will discuss the results from our model. We will look at the results from the basic model with both uniform and preferential distributions. This includes looking at both potential populations of users, and will look at coverage and utility. In each of the plots, each distinct frequency is a distinct color.
383
+
384
+ # 7.1 Basic Model
385
+
386
+ # Covering
387
+
388
+ Using the Greedy Covering Algorithm, described in Program 2, we not only look at the minimum number of circles needed to cover all users but also what happens if a smaller
389
+
390
# Program 3 Frequency Allocation Algorithm

repeater_list is the list containing the locations of repeaters
$r$ is the effective radius of a repeater's signal
frequency_set is a list of allowable, non-overlapping frequencies

# Function greedy_frequency_allocator

```txt
Let G be an edgeless undirected graph with one vertex per repeater in repeater_list
Let freqmap be a mapping from repeaters into frequencies

for each repeater pair (i, j) with i != j:
    if dist(i, j) <= (5/2) r:
        add an edge in G connecting the vertices representing i and j

/*Obtain a coloring of G, represented as a vertex -> integer map*/
coloring = greedy_color(G)

for each vertex v in G:
    freqmap[v] = frequency_set[coloring[v]]

return freqmap
```
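A direct Python rendering of Program 3; `greedy_color` here is the standard first-fit coloring, and the palette-exhaustion check is our addition:

```python
import math

def greedy_color(adj):
    """First-fit greedy coloring: visit vertices in index order and assign
    the smallest color unused by already-colored neighbors."""
    coloring = {}
    for v in range(len(adj)):
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in range(len(adj) + 1) if c not in used)
    return coloring

def greedy_frequency_allocator(repeater_list, r, frequency_set):
    """Connect repeaters within (5/2)r of each other, color the resulting
    graph, and read frequencies off the colors."""
    n = len(repeater_list)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(repeater_list[i], repeater_list[j]) <= 2.5 * r:
                adj[i].add(j)
                adj[j].add(i)
    coloring = greedy_color(adj)
    if n and max(coloring.values()) >= len(frequency_set):
        raise ValueError("not enough non-overlapping frequencies")
    return {repeater_list[i]: frequency_set[coloring[i]] for i in range(n)}
```

For example, with illustrative 2 m-band frequencies, two repeaters one mile apart (for $r = 1$) receive distinct frequencies, while a far-away third repeater reuses the first one.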
406
+
407
+ ![](images/5d40cf8e6f12593da00753a336164ff2e386ff3cbfee12fb5d3341911ac4b459.jpg)
408
+ (a) $80\%$ Coverage, 3 Colors, 11 Circles
409
+
410
+ ![](images/2e6a3e4f33b71bcdf94fd3f0b2b951f481c514b16b6abbca1d2da788c6f36a2b.jpg)
411
+ (b) $92\%$ Coverage, 5 Colors, 15 Circles
412
+
413
+ ![](images/e64a754c793504be6842a293f19a769687a522122d4f75ca054b0e47bc84d5a6.jpg)
414
+ (c) $100\%$ Coverage, 8 Colors, 28 Circles
415
+ Figure 8: Uniform distribution of 1000 users with $80\%$ , $92\%$ , and $100\%$ coverage. The number of colors corresponds to the number of frequencies used.
416
+
417
+ ![](images/7fe4460e694c2407a643188c68124a4cbda7caa55604433d52781d2e3d80e557.jpg)
418
+ (a) $80\%$ Coverage, 4 Colors, 12 Circles
419
+
420
+ ![](images/c66d1404361b9fc60206c69a81a57a2bc8c39c23cd4dab255dec3c1620b09dd6.jpg)
421
+ (b) $92\%$ Coverage, 6 Colors, 18 Circles
422
+
423
+ ![](images/75e2a78c51926b50a11ba0a2e4e8605a9aee1ce1d8084ca2ac20a7d8643b772c.jpg)
424
+ (c) $100\%$ Coverage, 10 Colors, 35 Circles
425
+
426
Figure 9: Uniform distribution of 10,000 users with $80\%$ , $92\%$ , and $100\%$ coverage.

![](images/a318f53c824f5c3e79cbab7f880606f97384beaa07282928888ae620e85b5d16.jpg)
(a) $80\%$ Coverage, 3 Colors, 6 Circles

![](images/6b33f240ee8a7d5945f819bae46f0e6b6c78298b56afe2326bc9d87e151de9df.jpg)
(b) $92\%$ Coverage, 4 Colors, 10 Circles

![](images/d58a97cb738299661d9100b3102da2c172446a3bbe4cc9046230ef283d4c72b9.jpg)
(c) $100\%$ Coverage, 6 Colors, 21 Circles
Figure 11: Preferential distribution of 1000 users with $80\%$ , $92\%$ , and $100\%$ coverage.
436
+
437
percentage of users is required to be covered. For instance, if only $90\%$ of users need to be covered, the number of repeaters required may be much smaller. This acts as a sensitivity analysis as well as offering insight into the algorithm used.
438
+
439
+ The first case we look at is that in Figure 8, which uses a uniform distribution for the population spread. We see that for $80\%$ coverage of the users, only 11 repeaters are required. Again, for only $92\%$ of users to be covered, only 15 are required; however, the last $8\%$ require 13 extra repeaters, pushing the number of required repeaters to 28.
440
+
441
+ A similar phenomenon occurs in Figure 9 where the last $8\%$ require 17 extra repeaters, pushing the number to 35.
442
+
443
In the preferential distribution case, we notice that covering $80\%$ requires fewer repeaters. This is because a single
444
+
445
+ ![](images/9baeb285074d8e004ca1103d7e43849dc5de6ba892b6bc1327c4b2229783e201.jpg)
446
+ (a) $80\%$ Coverage, 3 Colors, 10 Circles
447
Figure 10: Preferential distribution of 10,000 users with $80\%$ coverage.
448
+
449
+ ![](images/6b6ad9a8947e5c1abd01a4fc0ddd718161ac496d15eb59affc6e639fbbe1e285.jpg)
450
+ (a) Covering
451
+
452
+ ![](images/345642bb91f350946eecce34414286ff17e1dd76c619704c87d4e16a4424f540.jpg)
453
+ (b) Utility
454
+ Figure 12: Uniform and preferential distributions of both 1,000 and 10,000 user population, looking at the covering and utility cases. In (a), the minimum number of repeaters, on the $y$ -axis, needed to provide coverage to a percentage of people, on the $x$ -axis, is plotted. In (b), the minimum number of repeaters to achieve the utility ratio is plotted. Each curve of 1000 users averages 100 runs; each of 10,000 users averages 20 runs. The error bars on each curve show one standard deviation away from the average.
455
+
456
repeater can cover a larger percentage of the users because they are clustered together. The last $8\%$ still requires a significant number of repeaters.
457
+
458
+ # Utility
459
+
460
Looking at Figures 13 and 14, we again see that as the desired utility ratio increases, the number of repeaters required increases. The same shift appears as before, and preferential distributions still require fewer repeaters than uniform distributions.
461
+
462
+ # Overview
463
+
464
In Figure 12a, we average the minimum number of circles needed to cover some percentage of the population over 100 runs for each curve of 1000 users and 20 runs for each curve of 10,000 users. We do this by incrementally adding circles and recording how many circles push the percentage over each threshold in the plot. We can see that either of the uniform populations requires more repeaters than either of the preferential populations. This is because the preferential distribution clusters the points together, making them easier to cover with fewer repeaters. The uniform population, as the number of users increases, should converge to requiring coverage of the entire area. In fact, the preferential population should eventually exhibit the same behavior; however, as we can see, the Greedy Covering Algorithm performs better in this case because the order of selection is close to optimal due to the extreme clusters. For
465
+
466
+ ![](images/4ee694e2deb1705d48ef12d1385d7ef550b35520c9d4f9c6780defc7e9a991fe.jpg)
467
+ (a) $80\%$ Coverage, 4 Colors
468
+
469
+ ![](images/f7504c284a811e1532d10f0a12ef083ced60ed3b127dc9ffe847db10a28c1c2a.jpg)
470
+ (b) $90\%$ Coverage, 6 Colors
471
+
472
+ ![](images/2405ae604b59e2ea2fb224b48058c18c8efda71fdb09b7d94860f455684844ca.jpg)
473
+ (c) $98\%$ Coverage, 10 Colors
474
+ Figure 13: Preferential distribution of 10,000 users with $80\%$ , $90\%$ , and $98\%$ utility ratios.
475
+
476
+ ![](images/1cdb02df8c54a418b0a10b10ebb3dec391500cd11a3d24ce22df6ebd81b5bfb0.jpg)
477
+ (a) $90\%$ Coverage, 4 Colors
478
+ Figure 14: Preferential distribution of 1000 users with $90\%$ and $98\%$ utility ratios.
479
+
480
+ ![](images/2f2916fa784bf718e2e9f8a3b2a3f80d3afff7f4fd3fe27342412a4200b3046e.jpg)
481
+ (b) $98\%$ Coverage, 6 Colors
482
+
483
smaller sizes, the greedy algorithm is optimal in the case where there exist only clusters with sufficient spacing. We can see the algorithm at work in the figures, selecting the high-density areas first and then gathering the remaining population.
484
+
485
As the number of users increases, the curves simply shift up. This is because users occupy more area as their number grows, so covering the same percentage of users requires more repeaters.
486
+
487
+ We perform the same operation to find the curves in Figure 12b. The behavior is very similar; however, the scale differs.
488
+
489
In each case, we can easily stay within the frequency band of 145-148 MHz, and the number of colors rarely grows above 12, meaning that we need to allocate at most 12 frequencies for the use of these repeaters.
490
+
491
+ # 7.2 Mountains and Complex Geometries
492
+
493
Up to this point we have considered only a completely flat surface for our region of interest. If we allow the elevation of the surface to deviate from the smooth curved surface of a sphere, the problem of placing repeaters becomes considerably more difficult. Previously, the coverage of a repeater was independent of its location; if we allow for complex elevation geometries, however, we must calculate the coverage of a repeater at any given location. This requires us to check, for every potential repeater location, which locations would place an antenna in a direct line-of-sight path from the antenna of the repeater. To accomplish this, we designed and implemented an algorithm that determines whether two points on an elevation map can be connected by an unobstructed line-of-sight path.
494
+
495
+ # Line of Sight
496
+
497
To determine whether one point is in the line of sight of another, we first find the elevation of the mesh points between the two radios, which we call the elevation profile curve. This is done using bilinear interpolation when accuracy is desired, or Bresenham's line algorithm when speed is preferred over accuracy. Next, we calculate the heights of the antennas placed at the radio locations (generally assumed to be 18 feet). Then we form the straight line connecting the antennas and check whether it intersects the elevation profile. If there is an intersection, the line of sight is obstructed. In practice, the curve and the line are calculated and compared simultaneously so that intersections can be found more quickly. This algorithm has a runtime of $\mathcal{O}(n)$ (although the expected runtime is constant for large $n$).
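A minimal sketch of this check, assuming a square elevation mesh indexed as `elev[x][y]`, the 18-foot antennas mentioned above, and the fast Bresenham-sampled profile (the bilinear-interpolation variant is omitted):

```python
def bresenham(x0, y0, x1, y1):
    """Yield the integer grid cells along the segment (Bresenham's line)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def line_of_sight(elev, a, b, antenna=18.0):
    """True when the straight line joining the antenna tops clears the
    elevation profile sampled along the Bresenham cells between a and b."""
    cells = list(bresenham(*a, *b))
    h0 = elev[a[0]][a[1]] + antenna
    h1 = elev[b[0]][b[1]] + antenna
    m = len(cells) - 1
    for k, (x, y) in enumerate(cells[1:-1], start=1):
        sight = h0 + (h1 - h0) * k / m  # height of the sight line at step k
        if elev[x][y] >= sight:
            return False  # terrain intersects the sight line
    return True
```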
498
+
499
+ # Repeater Coverage
500
+
501
For each mesh point on our grid, we ascertain the area coverage of a potential repeater at that location. We do so by checking the line-of-sight visibility from that point to every other point on the grid. If the mesh is a square of size $n$, then since there are $n^2$ points on the mesh, and for each of these points we must look at every other point, this algorithm calls the line-of-sight algorithm $n^4$ times, although exploiting the symmetry of the problem allows us to cut this in half (if point A can see B, then point B can see A). Thus, determining the coverage of a repeater for every point on the mesh has an algorithmic complexity of $\mathcal{O}(n^5)$.
502
+
503
+ # Constructing Terrain
504
+
505
To test our model, we need realistic elevation data. While we can use real-world elevation data, we also want a way to generate our own rough terrain for flexibility. We start with a smooth surface representing the curvature of the Earth, given by
506
+
507
$$
f(x, y) = \sqrt{R^{2} - x^{2} - y^{2}} - \sqrt{R^{2} - r^{2}},
$$
510
+
511
where $R$ is the radius of the Earth and $r$ is the radius of the region. Next, we can perturb this surface using either real-world elevation data or the Diamond-Square algorithm to simulate mountains and other complex terrain.
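The perturbation step can be sketched with the standard diamond-square formulation; the seeding (corner heights at zero) and the noise amplitude halving each pass are conventional choices, not necessarily the paper's parameters:

```python
import random

def diamond_square(k, roughness=1.0, seed=None):
    """Generate a (2**k + 1)-square heightmap with the diamond-square
    algorithm; the random perturbation scale halves on each pass."""
    rng = random.Random(seed)
    n = 2 ** k + 1
    h = [[0.0] * n for _ in range(n)]
    step, scale = n - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: each square's center gets the corner average + noise.
        for i in range(half, n, step):
            for j in range(half, n, step):
                avg = (h[i - half][j - half] + h[i - half][j + half] +
                       h[i + half][j - half] + h[i + half][j + half]) / 4.0
                h[i][j] = avg + rng.uniform(-scale, scale)
        # Square step: edge midpoints get the average of their
        # in-bounds neighbors + noise.
        for i in range(0, n, half):
            for j in range((i + half) % step, n, step):
                vals = [h[x][y]
                        for x, y in ((i - half, j), (i + half, j),
                                     (i, j - half), (i, j + half))
                        if 0 <= x < n and 0 <= y < n]
                h[i][j] = sum(vals) / len(vals) + rng.uniform(-scale, scale)
        step, scale = half, scale / 2.0
    return h
```

Adding the resulting heightmap to the smooth spherical surface above yields test terrain resembling Figure 15(b).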
512
+
513
+ # Effects of Mountains
514
+
515
When we add mountains to our region, we find that some locations offer far more coverage for a repeater than others. Therefore, in general, our greedy algorithm tends to pick these locations, as they are more effective at increasing both user coverage and the utility ratio. In some scenarios, we find a location that offers exceptionally good coverage, for instance at the top of a high peak. In these cases, generally fewer repeaters are needed to accommodate the users, since a few well-placed ones are particularly effective. However, mountains and valleys can also isolate users from the repeaters, requiring a repeater to be placed very close to them in order to provide service. These isolated repeaters provide coverage only to users in very close proximity, limiting their effectiveness. In these cases, more repeaters than usual may be necessary to provide adequate coverage for the region.
516
+
517
+ # 8 Conclusions
518
+
519
+ Although our analytic results in Section 4 suggest that an optimal covering of any collection of points within the circle should require no more than 21 repeaters, our greedy
520
+
521
+ ![](images/8563aafa78bcbb281fd5517a60fd7a5234201c5ebc2e18f3d263378c3ab1e76c.jpg)
522
+ (a) Smooth Curvature of Earth
523
+
524
+ ![](images/3cdddeb5054eff0b915cda7f368429d70e6ee8f4e50010f43fd3df2fbf43678f.jpg)
525
+ (b) Perturbed by Mountains
526
Figure 15: Here we show the curvature of the Earth in our model, as well as the curvature perturbed by mountains.
527
+
528
+ approach consistently required more than this number. However, not only is this not surprising (given the simple nature of the algorithm), but it also allows for a simple solution: whenever the model requests more than 21 repeaters, simply place the repeaters according to the technique discussed in Section 4. Thus, although our model is weak when $100\%$ coverage is sought, the fact that we know of an optimal solution to the problem (for most population distributions) makes this a moot point.
529
+
530
Instead, the strength of our model lies in its ability to present good solutions when less than $100\%$ coverage is requested. As portrayed in Figures 8, 9, 11, and 12, relaxing the perfect-covering requirement slightly allows for solutions involving far fewer than 21 repeaters, greatly improving the value attained per placed repeater.
531
+
532
+ Interestingly, the same phenomenon holds when attempting to maximize the utility ratio. Figures 12, 13, and 14 all show that marginally weaker goals result in a substantially smaller number of required repeaters.
533
+
534
+ # 9 Future Considerations
535
+
536
+ # 9.1 Changes to the Model
537
+
538
+ # 9.1.1 Ad-hoc Networks and Game Theory
539
+
540
Currently, our model assumes that there is an external medium by which repeaters can communicate. It would be interesting to analyze what happens in the absence of this channel, where repeaters relay messages using their own transmitters (and are thereby bounded by some physical limitations).
543
+
544
Further, one can consider what kinds of game-theoretic implications can arise. Can there be an analogue of selfish routing in VHF networks?
545
+
546
+ # 9.1.2 One-to-One Communication
547
+
548
+ Whereas our model is built for one-to-many communication, there can also exist scenarios where the users wish to communicate between themselves. Although much more complex, creating a system that allows for any pair of users to converse could prove to be enlightening algorithmically.
549
+
550
+ # 9.1.3 Non-static and Introduced Users
551
+
552
+ As of now, our model assumes all users are completely stationary. Perhaps a more complete model would incorporate dynamic users in addition to static ones, allowing for mobile individuals to participate in the network as well. Further, how can our model incorporate the joining and leaving of additional users? Is it "stable" with respect to small perturbations in the population?
553
+
554
+ # 9.2 Model Correctness
555
+
556
+ # 9.2.1 Region Topography
557
+
558
A truly accurate model of repeater coordination would use terrain data to better predict how VHF radio waves propagate throughout the region. Although our model crudely approximates this propagation with perfect circles, the various aspects of terrain can greatly change the efficacy of various locations for housing repeaters. The Longley-Rice Irregular Terrain Model [8], for example, demonstrates a simple way to model wave propagation over complex terrain.
559
+
560
+ # 9.3 Real World Data
561
+
562
+ Currently, all of our data is randomly generated. The best way to model real-world scenarios, however, is to acquire the actual population distributions from cities and find some topographic map of the regions in question. If chosen carefully, this data can much better reflect the intricacies of population spread and the abnormalities of terrain.
563
+
564
+ # 9.4 Theoretical Issues
565
+
566
+ # 9.4.1 Algorithmic Improvement
567
+
568
+ Currently, our analysis depends heavily on greedy algorithms. Although the optimal solution to these problems might be intractable, our model could still benefit from algorithms with stronger average-case behavior, many of which are readily available.
569
+
570
+ # References
571
+
572
[1] Albert, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47-97.
[2] American Radio Relay League (2008). The ARRL Operating Manual for Radio Amateurs. Amer. Radio Relay League.
[3] Barclay, L., & Institution of Electrical Engineers (2003). Propagation of Radiowaves. Electromagnetic Waves. Institution of Electrical Engineers.
[4] Federal Communications Commission (2011). About the FCC. URL http://www.fcc.gov/aboutus.html
[5] Fowler, R., Paterson, M., & Tanimoto, S. (1981). Optimal packing and covering in the plane are NP-complete. Information Processing Letters, 12(3), 133-137.
[6] Fukshansky, L. (2011). Revisiting the hexagonal lattice: On optimal lattice circle packing. Elemente der Mathematik, 66(1), 1-9.
[7] Hernando, A., Villuendas, D., Vesperinas, C., Abad, M., & Plastino, A. (2009). Unravelling the size distribution of social groups with information theory on complex networks. European Physical Journal B.
[8] Longley, A. G., & Rice, P. L. (1968). Prediction of tropospheric radio transmission loss over irregular terrain: A computer method. ESSA Tech. Report ERL65-ITS67.
[9] Straw, R., Cebik, L., Hallidy, D., & Jansson, D. (2007). The ARRL Antenna Book. ARRL.
[10] ukrepeaters, The UK Amateur Radio Repeater Resource (2011). 2 metres band repeaters (coverage). URL http://www.ukrepeater.net/2m.htm
[11] Xiao, B., Zhuge, Q., He, Y., & Sha, E. H.-M. (2003). Algorithms for disk covering problems with the most points. Proceedings of the International Conference on Parallel and Distributed Computing and Systems, pp. 541-546.
MCM/2011/B/2011-MCM-B-Com/2011-MCM-B-Com.md ADDED
1
+ # Judges' Commentary: The Outstanding Repeater Coordination Papers
2
+
3
+ Kathleen Shannon
4
+
5
+ 1101 Camden Ave.
6
+
7
+ Mathematics and Computer Science
8
+
9
+ Salisbury University
10
+
11
+ Salisbury, MD 21804
12
+
13
+ kmshannon@salisbury.edu
14
+
15
+ # Overview
16
+
17
This year's problem dealt with finding the number of repeaters needed to create a VHF network to cover a circular region of radius 40 miles and simultaneously serve first 1,000, then 10,000, users. Naturally, there is quite a bit of literature available related to this topic.
18
+
19
+ # Approaches
20
+
21
+ The approaches used could be broken down into two categories. Some papers focused first on covering the area, others on covering the population.
22
+
23
+ # Covering the Area
24
+
25
There is much to be said for the simplicity and directness of the method of covering the area first. The most common approach was to tile the region with hexagons inscribed within circles of radius equal to the distance that a user's signal will reach effectively. Some papers shifted their hexagonal lattice back and forth to capture the minimum number of hexagons needed to cover the 40-mile circle. Good papers then generated simulated populations, generally uniformly distributed, to check if the number of repeaters was adequate for the usage load. Most then added more repeaters for the 10,000-user case. The better papers tested their results against non-uniformly distributed populations, either following some other distribution or concentrated in groups or towns. Some of the population generation that we saw was quite creative and demonstrative of good modeling.

The UMAP Journal 32 (2) (2011) 149-154. ©Copyright 2011 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.
30
+
31
+ # Covering the Population
32
+
33
+ Many of the papers simulated user populations first, then attempted to cover all (or a high percentage of) the users with a minimal number of repeaters. Of course, if a population distribution is simulated and then covered with $K$ repeaters, in general additional argument is needed before one can conclude that $K$ repeaters will work for any such distribution. Most of the papers used repeated simulations as their argument. There were some interesting approaches used to cover the populations minimally, including greedy and genetic algorithms. One of the more creative papers assumed that although the goal was to cover 1,000 or 10,000 simultaneous users, there were in fact, more users than that; and that team's algorithm was designed to capture just the required number of users. Although this was a simplifying assumption that dramatically changed the problem, and judges felt that the uncovered users might not appreciate this approach, we could find nothing in the problem statement to preclude it; and the paper in question stated and justified the assumption.
34
+
35
+ The most disappointing feature of these papers, which were in general creative and presented interesting modeling, was that although their approaches clearly relied on advance knowledge of the locations of all the users in the region, virtually none of the papers highlighted this fact, either in the assumptions or in the weaknesses of their models. Although (as one might expect) this approach generally (especially in the 1,000-user case) required fewer repeaters than the area-first approach, almost without exception teams that took this approach failed to indicate that such an approach requires collecting and entering great amounts of data that may not even be available in a real-world application. While this fact does not necessarily negate the validity of the model or its results, the papers should have clearly stated the assumption that these locations must be known for the model to be useful and should also have mentioned this requirement as a disadvantage. Almost no teams made the reader aware of this critical fact.
36
+
37
+ Generally, the judges in the final stages, referring to flaws in papers, call a flaw that keeps a very good paper from being outstanding a "fatal flaw"; and our discussions and deliberations frequently come down to arguing whether or not a discovered flaw should be considered "fatal." Some felt that requiring knowledge of users' locations, while neither including such knowledge in the assumptions nor acknowledging the need as a weakness, should be a "fatal flaw"; but eventually the desire to have some Outstanding papers outweighed those feelings.
38
+
39
+ # Determining the Required Spacing for the Repeaters
40
+
41
There was quite a bit of disagreement in the ranges used for the repeaters and the users. Papers generally correctly assumed that the range of the repeaters would be greater than the range of the users' equipment, making the latter the determining factor. But we saw ranges for repeaters going from about 3 miles to 100 miles. It is possible, using the radius of the Earth (and assuming that the Earth is perfectly spherical), to compute the "line of sight" distance to the horizon as a function of the height of the repeater. Some papers found this relationship, either in the literature or by computing it themselves. Others made reference to online sources giving the ranges for repeaters. Given the time constraints and the fact that this is a modeling contest, not a contest to distinguish engineering prowess, we did not use the range value as a discriminator, even though we suspected that some of the sources referenced may not have actually referred to VHF repeaters.
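The height-to-range relationship the judges mention follows from the Pythagorean theorem on a spherical Earth of radius $R$: an antenna at height $h$ sees the horizon at distance

$$
d = \sqrt{(R + h)^{2} - R^{2}} = \sqrt{2Rh + h^{2}} \approx \sqrt{2Rh}.
$$

With $R \approx 3959$ miles and $h$ measured in feet, $2R/5280 \approx 1.5$, giving the familiar rule of thumb $d \approx \sqrt{1.5\,h}$ statute miles; a repeater antenna at 100 feet, for example, sees roughly 12 miles to the horizon.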
42
+
43
+ # Use of Sources
44
+
45
In a contest of this nature, it is expected that participants will rely on sources; but it is also expected that the participants will cite and evaluate those sources. Many papers used graphics that (since we saw them in a number of papers) must have come from some online source, but they failed to specifically credit the source for the graphic. Also, many used models that they found in the literature, such as the Hata Model. This is appropriate; however, if you choose a model from the literature, then you should explain why you chose that equation, what assumptions led you to it, and what value you added as you adapted it to the given problem. It is also important, if you use equations from the literature, that you adapt the notation to match what you use in the rest of the paper, and that you clearly explain any notation that you use.
46
+
47
+ # Mountainous Terrain
48
+
49
+ Most papers that considered mountainous terrain spent some time dealing with line-of-sight issues relating to the terrain. A few simulated some mountainous terrain or found some sample elevation maps and indicated what changes would be necessary in repeater placement for these samples. Some papers discussed changes to the population distribution caused by the terrain. The judges acknowledged that it would have been unreasonable to expect models that would independently deal with any terrain, but we looked for papers that indicated how one would approach uneven ground.
50
+
51
+ # General Modeling Principles
52
+
53
+ One of the things teams needed to do for this problem—and which many neglected—was to decide consciously which portions of their model should be deterministic and which should be stochastic.
54
+
55
+ Assumptions were also important factors. When you make assumptions, you need to justify them—not simply state them. You should not include assumptions that are unnecessary for your model or have nothing to do with it. But even with the assumptions that you do need, you should indicate how sensitive your results are to those assumptions. It is OK to justify an assumption by indicating that it was necessary for your model, even though in reality it may not hold (for example, in this problem the assumption that population is uniformly distributed might fall into this category); but in that case, it is essential to discuss how the results depend on that assumption.
56
+
57
+ It is important not to make assumptions that defeat the purpose of the problem. Some papers assumed that repeaters were connected by wires. That was not at all in keeping with the statement of the problem, and it eventually eliminated some otherwise well-written papers.
58
+
59
+ # Sensitivity Analysis, Error Analysis, and Model Testing
60
+
61
+ An important area that turned out to be one of the major discriminators at the end was testing and sensitivity analysis. How does the number of repeaters change if your population is distributed in a different fashion? If you used normal distributions, for example, how do your results change given an $x\%$ change in the assumed standard deviation? What if the range of a repeater is less than assumed? The better papers also tested their results, some by comparison with the actual distribution of users and repeaters in various locations, and others by simulations of one sort or another.
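As a concrete (and hypothetical) instance of the standard-deviation question above: if user locations are modeled as bivariate normal about the region's center, the radial distance is Rayleigh-distributed, so the covered fraction has a closed form and its sensitivity to $\sigma$ is easy to tabulate. The 15-mile standard deviation below is an invented value:

```python
import math

def fraction_covered(R_mi: float, sigma_mi: float) -> float:
    """Fraction of users within distance R of the center when x and y are
    independent N(0, sigma^2): P(r <= R) = 1 - exp(-R^2 / (2 sigma^2))."""
    return 1.0 - math.exp(-R_mi ** 2 / (2.0 * sigma_mi ** 2))

BASE_SIGMA = 15.0  # hypothetical assumed std. dev., in miles
for scale in (0.9, 1.0, 1.1):  # a +/-10% change in the assumption
    covered = fraction_covered(40.0, BASE_SIGMA * scale)
    print(f"sigma = {BASE_SIGMA * scale:4.1f} mi -> {covered:.3f} covered")
```

Under these made-up numbers, a 10% error in $\sigma$ moves the covered fraction by only a couple of percentage points; a quantified statement of that kind is exactly what the judges hoped to see.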
62
+
63
+ Finally, always do a commonsense check. If you are running out of time and your commonsense check fails, you should at least acknowledge that. We had some beautifully written papers whose results required on the order of 2,000 repeaters for 1,000 users. One could argue that such might be possible if area coverage was what was driving the need. But when the same paper then required 15,000+ repeaters for 10,000 users, the reality check certainly failed. This was a "fatal flaw"! Always ask if the results "make sense" logically.
64
+
65
+ # Executive Summary
66
+
67
+ Every year, we seem to reiterate the importance of a good executive summary. We continue to threaten not to read beyond a poor summary; and while we have yet to live up to that threat, it is certainly the case that the summary sets the expectation of the reader for the rest of the paper. The summary should
68
+
69
+ - be the last thing written,
70
+ - stand alone,
71
+ - make sense, and
72
+ - be satisfying, even if the reader has not read the problem and never intends to read the paper.
73
+
74
+ The results, a description of the model, any key assumptions, and recommendations should be clearly included. Important strengths and weaknesses should be highlighted. It takes some skill to write a good executive summary, but it is a skill that will take you far. Out in that "real world," you frequently need to boil down months of work into a well-crafted executive summary for the decision makers. Your MCM summary should be good practice. Look at the Outstanding paper printed in this issue, which exemplifies what we look for in a good summary. That paper consistently, through all rounds of judging, received the highest marks.
75
+
76
+ # Writing and Organization
77
+
78
+ Even a brilliant team will not go far if the members cannot convey their work effectively. A few tips for the writing:
79
+
80
+ - Even when you divide up tasks such as sections to write, have your best writer do a final edit. Do this after you have run the grammar checker and spellchecker, and run them once more after the final edit.
81
+ - If you try some additional models and abandon them, not using them in the final analysis, put them in an appendix rather than in the body of the paper, where they distract the reader.
82
+ - Keep in mind that judges have very limited time to read your paper. The salient points need to be easy to find. If your paper is long, it may be that although many judges have looked at it, no single judge has had time to read the whole thing.
83
+ - Avoid unnecessary repetition, use good section headings, and offset / display important parts.
84
+
85
+ - Label graphics in such a way that a reader flipping through your paper will see what they represent.
86
+ - Have conclusions at the end of each section, and make sure that results are easy to find.
87
+
88
+ # Conclusion
89
+
90
+ This problem led to a variety of solution techniques and approaches. It allowed for a great deal of creativity, and in the end the creativity in the solution was one of the primary factors for bringing papers recognition. Mathematical modeling is an art, and in the long run it will be the kind of creativity we see in these papers that will help solve the problems facing the world. We commend all the participants for developing these crucial skills. We are proud of your accomplishments and the drive that led you to devote your time and energy to this endeavor.
91
+
92
+ # About the Author
93
+
94
+ ![](images/55cad1ca6510a81c401c031370554777e0fb58fbdc0cc56cc791405b975360b0.jpg)
95
+
96
+ Kathleen Shannon is Professor of Mathematics at Salisbury University and former chair of the Department of Mathematics and Computer Science. She earned her bachelor's degree at the College of the Holy Cross with a double major in Mathematics and Physics and her Ph.D. in Applied Mathematics from Brown University in the mid-1980s under the direction of Philip J. Davis. Since then, she has been primarily interested in undergraduate mathematics education and mathematical modeling. She has been involved since 1990 with the MCM as, at different times, a team advisor, a triage judge, and a final judge (sometimes as an MAA or SIAM judge). She has also been a co-Principal Investigator on two National Science Foundation Grants for the PascGalois Project (http://www.PascGalois.org) on visualizing abstract mathematics.
MCM/2011/B/2011-MCM-B-Com2/2011-MCM-B-Com2.md ADDED
@@ -0,0 +1,65 @@
1
+ # Judges' Commentary:
2
+
3
+ # The Fusaro Award for the
4
+
5
+ # Repeater Coordination Problem
6
+
7
+ Marie Vanisko
8
+
9
+ Dept. of Mathematics, Engineering, and Computer Science
10
+
11
+ Carroll College
12
+
13
+ Helena, MT 59625
14
+
15
+ mvanisko@carroll.edu
16
+
17
+ # Introduction
18
+
19
+ MCM Founding Director Fusaro attributes the competition's popularity in part to the challenge of working on practical problems. "Students generally like a challenge and probably are attracted by the opportunity, for perhaps the first time in their mathematical lives, to work as a team on a realistic applied problem," he says. The most important aspect of the MCM is the impact it has on its participants and, as Fusaro puts it, "the confidence that this experience engenders." The Ben Fusaro Award for the 2011 discrete problem went to a team from the University of Electronic Science and Technology (UES&T), Web Sciences Center, in Chengdu, Sichuan, China. This solution paper was in the top group, the Outstanding papers. Characteristics it exemplified were:
20
+
21
+ - Presented a high-quality application of the complete modeling process.
22
+ - Demonstrated noteworthy originality and creativity in the modeling effort to solve the problem as given.
23
+ - Was written clearly and concisely, making it a pleasure to read.
24
+
25
+ # The Problem
26
+
27
+ This year's problem dealt with finding the number of repeaters needed to create a VHF network to cover a circular region of radius 40 miles and
28
+
29
+ simultaneously serve first 1,000 users, then 10,000 users. The approaches that different teams took fell into two categories: some papers focused first on covering the area, others on covering the population. This team did both. Students found many publications related to this topic. While reviewing the literature was important for receiving an Outstanding or Meritorious designation, teams also had to address all the issues raised and come up with a solution that demonstrated their own creativity.
30
+
31
+ # The University of Electronic Science and Technology Paper
32
+
33
+ # One-Page Summary Sheet
34
+
35
+ This team did an outstanding job with its executive summary. Although it was a bit long, with very small print, in one page the team motivated the reader and provided a precise summary of what it had accomplished. The summary gave an overview of everything from the assumptions, to the modeling and how it was done, to the testing of the models, and finally, to the analysis of the accuracy of the results and the limitations of the models. It was well written and among the best examples of what an executive summary should be. The team's executive summary was followed by a one-page abstract. Typically, an executive summary contains more information (and often more sensitive information) than the abstract does.
36
+
37
+ # Assumptions
38
+
39
+ As was the case with many teams, this team began with the assumption that the distribution of users in the area was uniform. Other teams considered a variety of other distributions as well, but this team did not. The second assumption stated the conditions under which repeaters would interfere with each other and the third was that the wireless signals can fade freely. Of critical importance, the team showed how their assumptions were used in the development of their model.
40
+
41
+ # The Model and Methods
42
+
43
+ The team proposed a two-tiered network model consisting of user nodes and repeater nodes. As many teams did, they covered the circle with a sufficient number of hexagons to yield transmission among users based on the maximum communication distances for each type of node. However, they also used Voronoi diagrams to optimize communication with the least number of repeaters. Spanning trees were used to assign frequencies in the desired ranges and private line (PL) tones.
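A sketch of the Voronoi idea (illustrative, not the team's actual algorithm): assigning each user to the nearest repeater site partitions the users exactly into the sites' Voronoi cells, after which each cell's load can be checked against a repeater's capacity. All layout numbers below are invented:

```python
import math
import random

def voronoi_assign(users, sites):
    """Map each user to its nearest site; the resulting groups are the
    users falling in each site's Voronoi cell."""
    cells = {i: [] for i in range(len(sites))}
    for u in users:
        nearest = min(range(len(sites)), key=lambda j: math.dist(u, sites[j]))
        cells[nearest].append(u)
    return cells

def random_point_in_disk(radius: float):
    """Uniform random point in a disk of the given radius."""
    r = radius * math.sqrt(random.random())
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (r * math.cos(theta), r * math.sin(theta))

random.seed(2011)
users = [random_point_in_disk(40.0) for _ in range(1000)]  # hypothetical users
sites = [random_point_in_disk(40.0) for _ in range(12)]    # hypothetical repeaters
cells = voronoi_assign(users, sites)
print("busiest cell serves", max(len(c) for c in cells.values()), "users")
```

The refinement step the team describes then amounts to moving or adding sites wherever a cell's user count exceeds the repeater's capacity, and reassigning.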
46
+
47
+ # Testing Their Models
48
+
49
+ After determining the communication distances for their users and repeaters, along with the maximum capacity for a repeater, the UES&T team computed lower bounds for the number of repeaters that would be needed for each population size. They then developed an algorithm to place the users and repeaters within the designated circle of radius 40 miles and subdivided the area using Voronoi diagrams. They refined their algorithm to make certain that in the Voronoi regions, no repeater had users beyond its threshold capacity. Then they tested their models—not with just one size region, but with many, for user numbers of 1,000 and 10,000. By analyzing their results, they were able to comment on the sensitivity and robustness of their models. This was something that very few papers were able to do, and it is a very important step in the modeling process.
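A lower-bound computation in this spirit, combining a per-repeater capacity bound with a crude area-coverage bound, can be sketched as follows; the capacity and range figures are invented placeholders, not the team's values:

```python
import math

def repeater_lower_bound(n_users: int, capacity: int,
                         region_radius_mi: float, range_mi: float) -> int:
    """Max of two necessary conditions: enough total capacity to carry
    all users, and enough total disk area to cover the region (the pi
    in both areas cancels, so only the radii appear)."""
    capacity_bound = math.ceil(n_users / capacity)
    coverage_bound = math.ceil(region_radius_mi ** 2 / range_mi ** 2)
    return max(capacity_bound, coverage_bound)

# Placeholder figures: 50 simultaneous users per repeater, 10-mile range.
print(repeater_lower_bound(1_000, 50, 40.0, 10.0))   # capacity binds: 20
print(repeater_lower_bound(10_000, 50, 40.0, 10.0))  # capacity binds: 200
```

Bounds of this kind also give a quick sanity check on the plausibility of a model's final repeater count.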
50
+
51
+ # Recognizing Limitations of the Model
52
+
53
+ Recognizing the limitations of a model is an important last step in the completion of the modeling process. The students recognized that their algorithms would have to be modified if the terrain were not flat or if the users were distributed differently.
54
+
55
+ # References and Bibliography
56
+
57
+ The list of references used was fairly thorough, but it was not always clear where those references were actually used. Precise citations should have appeared throughout the paper.
58
+
59
+ # Conclusion
60
+
61
+ The careful exposition in the development of the mathematical models made this paper one that the judges felt was worthy of the Outstanding designation. The team members are to be congratulated on their analysis, their clarity, and on using the mathematics they knew to create and justify their own model for the Repeater Coordination Problem.
62
+
63
+ # About the Author
64
+
65
+ Marie Vanisko is a Mathematics Professor Emerita from Carroll College in Helena, Montana, where she taught for more than 30 years. She was also a Visiting Professor at the U.S. Military Academy at West Point and taught for five years at California State University Stanislaus. She chairs the Board of Directors at the Montana Learning Center at Canyon Ferry and serves on the Engineering Advisory Board at Carroll College. She has been a judge for both the MCM and HiMCM.
MCM/2011/B/9440/9440.md ADDED