| Symbol | Description | Unit |
| v | Evacuation speed of the flow | m/s |
| $S_{ind}$ | Average personal area | m$^2$/person |
| $S_{total}$ | Total area of the human flow | m$^2$ |
| D | Crowd congestion degree | \ |
| k | Speed coefficient | \ |
| ρ | Population density | people/m$^2$ |
| q | Human flow | people/s |
| G | Overall matrix of the model | \ |

Note: symbols not listed in the table are explained where they first appear below.

# 4 Microscopic model: personnel evacuation speed model

# 4.1 A possible model for different road segments (N&M model)

According to the research of Predtechenski and Milinskii [3], the average evacuation speed is a function of the flow density; they fitted relationship curves to observation data from aisles and stairs. Unfortunately, the result is a complex nonlinear relationship that is not suitable for a wide range of applications.

Nelson and Mowrer [4] later simplified the relationship between velocity and flow density to a linear one, selecting different linear coefficients for different evacuation channels. When 0.54 persons/m$^2$ ≤ D ≤ 3.8 persons/m$^2$, the relationship between moving speed and flow density is as follows.

$$
v = k - 0.266kD \tag{1}
$$

Here we define $D$ as the crowd congestion degree: the ratio of the actual area occupied by human bodies to the total area of the flow of people, which reflects the overall crowding of the crowd. It can be computed as $D = \frac{NS_{ind}}{S_{total}}$, where $S_{ind}$ is the average personal occupation area and $S_{total}$ is the total area of the flow of people. The coefficient $k$ in formula (1) can be determined from Table 2 [5].

Table 2: Speed coefficient $k$

| Evacuation channel type | Riser (mm) | Tread (mm) | k |
| Stairs | 190 | 254 | 1.00 |
| Stairs | 178 | 279 | 1.08 |
| Stairs | 165 | 305 | 1.16 |
| Stairs | 165 | 330 | 1.23 |
| Corridor, aisle, ramp, doorway | \ | \ | 1.40 |

Corresponding to the table above, taking $k$ as 1.00 and 1.40 respectively yields the "staircase model" and the "corridor model".
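As a quick illustration, formula (1) together with the $k$ values of Table 2 can be sketched in a few lines of Python; the function name and the validity check are our own additions, not part of the cited model.

```python
def nm_speed(D, k=1.40):
    """Evacuation speed (m/s) from the N&M linear model, formula (1).

    D is the crowd congestion degree; the model is stated for
    0.54 <= D <= 3.8. k = 1.40 for corridors, aisles, ramps and
    doorways, and 1.00-1.23 for stairs, per Table 2.
    """
    if not (0.54 <= D <= 3.8):
        raise ValueError("N&M model only applies for 0.54 <= D <= 3.8")
    return k - 0.266 * k * D

# Corridor speed at a congestion degree of 1.5:
v = nm_speed(1.5, k=1.40)  # 1.40 * (1 - 0.266 * 1.5) ≈ 0.84 m/s
```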

In summary, the N&M model [6] captures the relationship between evacuation speed and flow density [7], and gives different models, such as the staircase model and the corridor model, for different road sections. However, it is not suitable for situations where the flow density is relatively large, such as the narrow exit of a room, so we look for a better model to replace it.

# 4.2 A possible model considering the degree of congestion

# 4.2.1 Crowding in two directions

According to research by Lu Junan and others [8], the escape speed of personnel is affected by both Front-Crowding and Side-Crowding.

Front-Crowding: In an escaping group, take the exit as the coordinate origin and suppose the $j$-th person is located at $x_{j}$ with speed $\frac{dx_j}{dt}$. The distance to the $(j-1)$-th person ahead is $|x_{j} - x_{j - 1}|$, recorded as $A$, and the relative velocity is $\frac{dx_j}{dt} - \frac{dx_{j - 1}}{dt}$, recorded as $B$. The forward damping force caused by front-and-rear crowding is then proportional to $\frac{B}{A}$.

Side-Crowding: Here we use a frictional damping force to describe the level of Side-Crowding [9]. Note that Side-Crowding matters mainly when the crowd density is particularly large; when the population density is small, it has little effect.

# 4.2.2 Escape velocity dynamic equation

According to Newton's second law, we obtain the following escape velocity dynamics equation.

$$
-m_j \frac{d^2 x_j}{dt^2} = \alpha \left(\frac{dx_j}{dt} - \frac{dx_{j-1}}{dt}\right) \Big/ |x_j - x_{j-1}| + f_j \tag{2}
$$

where $j = 2, \dots, N$, which constitutes a system of $N - 1$ equations.

# 4.2.3 Stable evacuation speed model (SES Model)

Integrating the above formula, we get

$$
\frac{dx_j}{dt} = \lambda_j \ln\left(x_{j-1}(t) - x_j(t)\right) - \frac{F_j}{m_j} + c_j \tag{3}
$$

In this formula, $F_j$ is the integral of $f_j$ and $c_j$ is a constant of integration.

Let each person occupy a square of side $L$, and set the front interval to $L + d_1$ and the side interval to $L + d_2$.

Then the front density is $\rho_1 = \frac{1}{L + d_1}$ and the side density is $\rho_2 = \frac{1}{L + d_2}$.

The evacuation speed is

$$
v\left(\rho_1, \rho_2\right) = \frac{dx_j}{dt} \tag{4}
$$

When $v(\rho_1, \rho_2) = 0$, $\rho_1 = \rho_{1m}$ and $\rho_2 = \rho_{2m}$, where $\rho_{1m}$ and $\rho_{2m}$ are the maximum front and side densities, respectively. We then express $c_j$ in terms of $\rho_{1m}$ and $\rho_{2m}$. Substituting $c_j$ into equation (3), we get

$$
v_j\left(\rho_1, \rho_2\right) = \left\{ \begin{array}{ll} \lambda_j \ln \dfrac{\rho_{1m}}{\rho_1} - \dfrac{k\left(\rho_2 - \rho_{2m}\right)}{m_j} & \left(\rho_{ic} < \rho_i \leq \rho_{im},\ i = 1, 2\right) \\ v_m & \left(\rho_i \leq \rho_{ic},\ i = 1, 2\right) \end{array} \right. \tag{5}
$$

When the density is below the critical density, the velocity of the human flow is not affected by it [10].

Considering that other factors, such as personnel characteristics, make the degrees of influence of front and side congestion different, we obtain

$$
v_j\left(\rho_1, \rho_2\right) = v_m\left(\alpha \ln \frac{\rho_{1m} / \rho_1}{\rho_{1m} / \rho_{1c}} + \frac{\beta\left(\rho_{2m} - \rho_2\right)}{\rho_{2m} - \rho_{2c}} + \gamma\right) \tag{6}
$$

where $\rho = \rho_1 \rho_2 = D$, $\rho_{1c}$ and $\rho_{2c}$ are the critical densities of the two directions, and $\alpha, \beta, \gamma$ are the corresponding weights of each aspect.

The above is the evacuation escape speed model, also known as the SES Model.

Analysis shows that this is a generalized model. When the flow density is large, none of $\alpha, \beta, \gamma$ is zero. As the flow density becomes smaller, the speed can be considered to be mainly affected by Front-Crowding; when the flow density is small enough, only the influence of personnel characteristics need be considered. In that case the population structure is usually relatively uniform, the characteristics of the personnel are consistent, and $v$ is a constant [11]. Here is an example of how to instantiate this model.

According to the fire safety data compiled by M.Y.Roytman et al. [12], the adult body thickness in the front-rear direction is about $0.32\,\mathrm{m}$ and the body width is about $0.5\,\mathrm{m}$. Considering that body-to-body contact corresponds to the highest density, the maximum densities $\rho_{1m}$ and $\rho_{2m}$ in the front and side directions can be taken as 3 persons/m and 2 persons/m, respectively. According to the research data of Ando et al. [13], the critical spacing between lines of people is about $0.75\,\mathrm{m}$; accordingly, the critical densities $\rho_{1c}$ and $\rho_{2c}$ of the front and side directions are about 0.89 persons/m and 1.33 persons/m, respectively. Substituting these $\rho_{1m}, \rho_{2m}, \rho_{1c}$ and $\rho_{2c}$ into the model, we obtain:

$$
\begin{array}{l} A = 1.32 - 0.82 \ln \rho \\ B = 3.0 - 0.76 \rho \end{array} \tag{7}
$$

$$
v_j(\rho) = v_m(\alpha A + \beta B + \gamma) \tag{8}
$$
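A minimal sketch of this worked example, Eqs. (7)-(8), in Python. The weights $\alpha, \beta, \gamma$ and the free speed $v_m$ are not fixed by the text, so the default values below are illustrative assumptions only.

```python
import math

def ses_speed(rho, v_m=0.8, alpha=0.4, beta=0.4, gamma=0.2):
    """SES evacuation speed for the worked example, Eqs. (7)-(8),
    i.e. rho_1m = 3, rho_2m = 2, rho_1c = 0.89, rho_2c = 1.33 persons/m.

    v_m and the weights alpha, beta, gamma are assumed values for
    illustration; in practice they would be calibrated to data.
    """
    A = 1.32 - 0.82 * math.log(rho)  # front-crowding term, Eq. (7)
    B = 3.0 - 0.76 * rho             # side-crowding term, Eq. (7)
    return v_m * (alpha * A + beta * B + gamma)   # Eq. (8)
```

At `rho = 1` the logarithmic term vanishes, so the speed reduces to `v_m * (1.32*alpha + 2.24*beta + gamma)`, which makes the role of each weight easy to see.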

The SES model considers the combined effects of Front-Crowding, Side-Crowding and other personnel characteristics on evacuation speed, and can comprehensively reflect the factors that influence it [14]. Its only downside is that, owing to the uncertainty and ambiguity of personnel characteristics, the effect of that part on the whole cannot be well described.

# 5 Construction of the graph

# 5.1 Description of the diagram

Graphs are one of the most powerful frameworks in data structures and algorithmics and can represent almost any type of structure or system. In this problem, the Louvre is a building with a complex spatial structure; to explore its evacuation plan, we adopt the method of constructing a node graph. To simplify the calculation, we add two hypothetical nodes, 1 and 9.

For convenience, we use the schematic in Figure 3 to illustrate the idea of a node graph. In Figure 3, we imagine a building with six rooms, where the numbers 2 to 7 represent these rooms. The green line segments indicate the relationships between the rooms: solid lines indicate corridors, and dotted lines indicate that there is a staircase between the connected rooms.

Figure 3: Node graph of a hypothetical building

It can be seen in Figure 3 that the green box contains the entire structure of the building. If we apply the maximum-flow method to this node graph directly, we find that this is a "multi-source to multi-sink" problem, i.e. evacuation from multiple rooms to multiple exits. Although the node structure diagram in the green frame is very intuitive, such multi-objective planning makes the calculation troublesome. In addition, the Louvre itself is a large and complex building, so planning its evacuation would be very complicated and computationally difficult.

So we add two special nodes, represented by the numbers 9 and 1 in Figure 3.

Node 9 is called the "sink point" and represents the external environment. Its black lines to nodes 6, 7, and 8 indicate the three exits of the building. Node 1 is called the source point and has no physical meaning itself; however, the number (not shown in Figure 3) on the line connecting it to each room indicates the number of people remaining in that room at a given moment. It is worth noting that when the problem is solved later, all the connections carry numbers whose meanings differ from edge to edge, as explained in the later process.

Adding the two special nodes turns the complicated, hard-to-compute multi-objective programming problem into a simple, easy-to-compute single-objective one. The function of node 1 is to represent the remaining numbers of people truly and accurately; the function of node 9 is to express the flow of people leaving the building through the exits. This creates a new "single source to single sink" problem, relating the total number of people remaining in all rooms to the flow of people leaving all exits. This simplification of the goal brings great convenience to the optimization algorithm.

Adding the two virtual nodes has further benefits, which will be explained with the algorithm in the next section.
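The augmented node graph can be sketched as a simple adjacency structure. The room topology and all numbers below are illustrative assumptions, not values taken from Figure 3.

```python
# Node 1 is the virtual source, node 9 the virtual sink ("sink point");
# nodes 2-8 are rooms, of which 6, 7 and 8 are the exit rooms.
# Edge values: source -> room = people remaining in that room;
# room -> room = channel capacity; exit -> sink = exit capacity.
graph = {
    1: {2: 120, 3: 150, 4: 100, 5: 130},  # people remaining (illustrative)
    2: {3: 15, 6: 15},                    # corridor/stair capacities
    3: {4: 12, 6: 15},
    4: {5: 10, 7: 12},
    5: {8: 10},
    6: {9: 40},                           # exit capacities, people/s
    7: {9: 35},
    8: {9: 30},
    9: {},                                # sink: no outgoing edges
}
```

With this layout, "evacuate everyone as fast as possible" becomes "maximize the flow from node 1 to node 9", which is exactly the single-source, single-sink form discussed above.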

# 5.2 Matrix model

After the graph model is established, we need to incorporate numerical calculations on this basis. In order to clearly show the variation of the human flow during the simulated evacuation and to see the optimal evacuation route planning, we integrate the various data with the help of matrices.

In this model, the data needed to simulate the evacuation are as follows:

I. The number of people in each room at any time during the evacuation. We use the numbers on the lines connecting the source point to each room to represent them.
II. The number of people passing between adjacent parts (rooms) during a period of time, that is, the flow of people carried by corridors or stairs during that period. We use the numbers on the connecting lines between rooms to represent them.
III. The number of people evacuated from each exit at any time, in other words, the flow of the exit. They are represented by the numbers on the connecting lines between the exits and the outside (the sink point).

Assuming that a building is divided into $n$ parts, which we interpret as $n$ rooms, we build a model with $n$ nodes and establish an $n \times n$ matrix whose elements represent the connections between nodes. Since this $n$-order matrix can only represent the exchange of people between rooms and cannot reflect the number of people remaining in a room or evacuating the building, we border it with an extra ring of rows and columns, turning it into an $(n + 2) \times (n + 2)$ matrix, as shown in Figure 4.

The final $(n+2)$-order matrix is composed of several parts, each of which can be given its own matrix.

# 5.2.1 "Corridor model" $C_q$ and "staircase model" $S_q$

Firstly, the "corridor model" channel traffic matrix $C_q$ satisfies the following set of equations:

$$
\left\{ \begin{array}{l} c_{i,j} = v = k - 0.266kD \\ c_{i,j} = c_{j,i} \\ c_{i,i} = 0 \end{array} \right. \tag{9}
$$

where $\left\{ \begin{array}{l} 2 \leq i \leq n + 1 \\ 2 \leq j \leq n + 1 \end{array} \right.$ and all other elements are 0. $c_{i,j}$ represents the flow of people through a channel over a period of time; this value is calculated by the channel model from the density of the flow and the moving speed. $c_{i,j} = c_{j,i}$ indicates that the maximum velocity in the two directions of a channel is equal; from the matrix point of view, the traffic matrix is symmetric, and the diagonal is zero because it has no practical meaning. The meaning of $c_{i,n+2} = 0$ is that the channel model only expresses the interconnection between the parts of the building, not the relationship between the exits and the outside world: the last column of the overall matrix represents the flow of people out of the exits of the building, so the last column of the traffic matrix is 0. The matrices have the following form:

$$
C_q = \left[ \begin{array}{cccc} 0 & c_{1,2} & \dots & c_{1,n+2} = 0 \\ c_{2,1} & \ddots & \dots & \vdots \\ \vdots & \dots & \ddots & \vdots \\ c_{n+2,1} & \dots & \dots & 0 \end{array} \right], \quad S_q = \left[ \begin{array}{cccc} 0 & s_{1,2} & \dots & s_{1,n+2} = 0 \\ s_{2,1} & \ddots & \dots & \vdots \\ \vdots & \dots & \ddots & \vdots \\ s_{n+2,1} & \dots & \dots & 0 \end{array} \right] \tag{10}
$$

Next, the "staircase model" channel traffic matrix $S_q$. It differs from the corridor matrix only in how the flow through a channel over a period of time is calculated: it uses the formula of the staircase model,

$$
s_{i,j} = v_j(\rho) = v_m(\alpha A + \beta B + \gamma) \tag{11}
$$

# 5.2.2 "Initial matrix" $En$ and "export matrix" $Ex$

Next, the initial matrix $En$ represents the number of people in each room at the initial moment of the evacuation. Finally, the export matrix $Ex$ represents the number of people evacuated from each exit during the evacuation, i.e. the exit flow. It differs from the initial matrix $En$ in that only its last column is nonzero. $En$ and $Ex$ satisfy the following equations:

$$
\left\{ \begin{array}{l} en_{1,j} = t_j \\ en_{i,j} = 0, \ (2 \leq i \leq n + 2) \end{array} \right. , \quad \left\{ \begin{array}{l} ex_{i,n+2} = T_i \\ ex_{i,j} = 0, \ (1 \leq j \leq n + 1) \end{array} \right. \tag{12}
$$

Here $t_j$ is the number on the line connecting the source point to each room at any time, and $T_i$ is the number on the connecting line between an exit and the outside world (the sink point). Since the initial matrix only represents the remaining number of people in the internal rooms, it is independent of the channels, so all rows except the first are 0. The matrices have the following form:

$$
En = \left[ \begin{array}{ccccc} 0 & en_{1,2} & \dots & en_{1,n+1} & 0 \\ 0 & \ddots & \dots & 0 & 0 \\ \vdots & \dots & \ddots & \vdots & \vdots \\ 0 & \dots & \dots & 0 & 0 \end{array} \right], \quad Ex = \left[ \begin{array}{cccc} 0 & 0 & \dots & 0 \\ 0 & \ddots & \dots & ex_{2,n+2} \\ \vdots & \dots & \ddots & \vdots \\ 0 & 0 & \dots & ex_{n+1,n+2} \\ 0 & 0 & \dots & 0 \end{array} \right] \tag{13}
$$

The four matrices together form the overall matrix $G$, visualized in Figure 4:

$$
G = C_q + S_q + En + Ex \tag{14}
$$

Figure 4: Overall matrix diagram
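Equation (14) can be assembled numerically. The sketch below uses NumPy (an assumption; any matrix library works), with $n = 7$ rooms and purely illustrative entries.

```python
import numpy as np

n = 7                        # number of rooms (nodes 2-8)
size = n + 2                 # bordered (n + 2) x (n + 2) overall matrix

Cq = np.zeros((size, size))  # corridor traffic (symmetric, zero diagonal)
Sq = np.zeros((size, size))  # staircase traffic
En = np.zeros((size, size))  # first row: people remaining in each room
Ex = np.zeros((size, size))  # last column: flow out of each exit

# Illustrative entries (row/column 0 is the source, row/column size-1 the sink):
Cq[1, 2] = Cq[2, 1] = 1.1                            # corridor between two rooms
Sq[2, 3] = Sq[3, 2] = 0.7                            # staircase between two rooms
En[0, 1:n + 1] = [120, 150, 100, 130, 90, 110, 140]  # initial head counts
Ex[5, -1], Ex[6, -1], Ex[7, -1] = 28, 22, 17         # exit flows

G = Cq + Sq + En + Ex        # overall matrix, Eq. (14)
```

Because the four component matrices are nonzero on disjoint regions (interior block, first row, last column), the sum simply juxtaposes them, matching the block layout of Figure 4.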

# 5.3 Dynamic system planning algorithm

# 5.3.1 Introducing the Ford-Fulkerson algorithm

Above, we transformed the multi-source to multi-sink model of the building into a single-source to single-sink model. In this model, the number of people flowing out of the source at any time equals the number of people entering the sink; therefore, during the escape, the number of people who have escaped is the number of people who have traveled from the source to the sink. To maximize the escape rate, we need to make full use of the channel resources in the building at every moment. In the established model, this is equivalent to obtaining the maximum flow from the source to the sink at any time. For such problems, we can apply the Ford-Fulkerson algorithm with appropriate modifications. The algorithm is described in detail below [15].

Before describing the algorithm, we further clarify the meaning of each edge in the model. There are three kinds of nodes: source points, sink points, and room points. The value on the line connecting the source point to a room node indicates the initial number of people in the corresponding room. A connection between room nodes indicates a connection between rooms, and its value indicates the maximum flow allowed by the channel. A connection between the sink point and an exit room node indicates the connection between that exit and the exterior of the building, and its value indicates the maximum flow allowed by the exit.

The original Ford-Fulkerson algorithm is an iterative algorithm. In each iteration, the value of the flow is increased by looking for an "augmenting path" [16]: a path from the source point to the sink point along which more flow can be pushed. Iteration continues until no augmenting path can be found, at which point at least one edge on every path from the source to the sink must be saturated (i.e., the flow on the edge equals its capacity). The resulting flow is the maximum flow, which can be proved by the max-flow min-cut theorem; we do not prove it here.
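For reference, here is a minimal sketch of the algorithm in Python, using the BFS (Edmonds-Karp) choice of augmenting path; the dict-of-dicts data layout is our own assumption.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    `capacity` maps every node to a dict of its out-neighbours and
    edge capacities, e.g. capacity[u][v] = cap of edge u -> v.
    """
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                  # no augmenting path: flow is maximal

        # Bottleneck capacity along the path found.
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u

        # Push the bottleneck flow, updating residual capacities.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

For example, with `{"s": {"a": 10, "b": 5}, "a": {"t": 7}, "b": {"t": 6}, "t": {}}`, `max_flow(..., "s", "t")` returns 12: 7 units through `a` (capped by edge a-t) plus 5 through `b` (capped by edge s-b).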

# 5.3.2 Application of the Ford-Fulkerson algorithm

The model we built is not a traditional single-source to single-sink one, and the situation we consider is more complicated than the plain Ford-Fulkerson algorithm handles. We therefore adapt the Ford-Fulkerson algorithm and propose a dynamic system planning algorithm based on it [17]. The algorithm takes into account both general planning and the planning for rooms in an emergency. The specific flow chart is shown in Figure A-1 (in the appendix).

As shown in the flow chart of Figure A-1, the algorithm is divided into two parts: emergency handling and general-case processing. In general-case processing, each iteration represents the situation within one time step, such as from 0 to 1 second. When the number of people at the source point falls below 0.5, the iteration stops and the algorithm ends. In each iteration, the flow matrix and the margin matrix are derived by the Ford-Fulkerson algorithm. Then the values of the edges between the source point and the room nodes, i.e. the current number of people in each room, are corrected using the flow matrix, and the current flow velocity model is used to correct the maximum flow of each channel from the current flow density in the channel and at the exits. This provides a picture of the flow of people in each time step. Finally, the specific path planning is obtained from all the traffic matrices, and the degree of use of each channel can be derived from the margin matrix [18].

For emergency handling, we proceed as follows. As shown in the emergency-processing section of Figure A-1, first, the people in the emergency room are by default transferred to the adjacent non-emergency rooms, and the in-degree of the emergency room is set to zero. From the maximum flow of the connecting channels, the time required for the transfer, $T_T$, can be calculated. The emergency transfer route remains unchanged from 0 to $T_T$, regardless of the external situation. Thus, in the general case, at the beginning of each iteration from 0 to $T_T$, the number of people transferred from the emergency room during that time step is added to the remaining numbers in the non-emergency rooms, the corresponding routes are set, and everything is finally integrated into the traffic matrix.

# 6 Simulation model validation

# 6.1 Model initialization instructions

According to the graph model and matrix model defined in 5.1 and 5.2, we initialize the number of people at each node (taking random numbers between 100 and 200) and specify the maximum flow rates of the exits as 40 people/s, 35 people/s, and 30 people/s.

As shown in Figure 5, node 1 is the virtual ingress node, node 9 is the virtual egress node, and nodes 2-8 are actual nodes (nodes 6, 7, and 8, adjacent to the egress, are called first-level nodes, and the nodes adjacent to the first-level nodes are called second-level nodes). The value on a red line segment represents the initial number of people at a node, the value on a blue line segment represents the maximum achievable flow of the channel between two nodes, and the value on a green line segment represents the maximum flow of an exit.
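This initialization can be sketched as follows; the assignment of the three exit rates to nodes 6, 7 and 8 is our assumption, and the random seed is only for reproducibility.

```python
import random

random.seed(42)                            # reproducible initialization

# Initial head count at each actual node (2-8), drawn from [100, 200].
people = {node: random.randint(100, 200) for node in range(2, 9)}

# Maximum exit flow rates in people/s (exit node -> rate; assumed mapping).
exit_capacity = {6: 40, 7: 35, 8: 30}
```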

Figure 5: Model initialization

# 6.2 Evacuation plan

According to the requirements of the problem, our goal is to evacuate all personnel as quickly as possible. To achieve this, we substituted the model into the algorithm and solved it iteratively.

Here we take 5 s as the time step and make flow diagrams. As shown in Figure 6, the values on the red lines have no practical meaning, while the values on the blue and green lines indicate the channel and exit flows within the 5 s time step. It can be seen intuitively that the first step of the evacuation plan is to empty the exit rooms 6, 7 and 8; then the personnel at the second-level nodes move to the adjacent first-level nodes (Figure 6(a)). As the load on the channels increases in the middle of the evacuation, the flow between two nodes may need to be split across channels (Figure 6(b)). Then, as the remaining number of people decreases, the channel pressure between nodes decreases and the flows drop, even to 0 (Figure 6(c)), until eventually the evacuation is complete.

In fact, our evacuation plan is the continuous process given by the flow charts of the successive time steps. According to this plan, all the people can be evacuated as soon as possible.

(a) Flow in 15 s

(b) Flow in 35 s

(c) Flow in 40 s
Figure 6: Evacuation flow

# 6.3 Three types of bottleneck problems

There are many bottlenecks in the actual evacuation of a building crowd. We divide them into three categories: (1) bottlenecks due to channel supply-demand imbalance; (2) bottlenecks due to uneven distribution of exits; (3) single-room exit bottlenecks.

# 6.3.1 Bottlenecks due to imbalance in channel flow

From the flow-time graph of the exit, shown in Figure 7(a), we find that flow drops occur at around 10 s, 30 s and 35 s.

(a) Exit flow-time graph

(b) Flow margin-time graph
Figure 7: Bottleneck phenomenon

Next, we refine the cause of the drops in flow by plotting the channel flow margin against time for node 6 in Figure 7(b). Node 6 has two inlet channels and one outlet channel, corresponding to the three curves in the figure. The flow margin is defined as the difference between the maximum flow of a channel and its actual flow, reflecting the channel's remaining capacity; to minimize evacuation time, the flow margin should be minimized. It can be seen from the figure that the flow margin of node 6 rises at about 10 s and again at about 42 s. The sharp rise at 42 s is clearly due to the decline in the total number of people remaining. The slight increase at 10 s occurs because the sum of the maximum flows of channels 3-6 and 5-6 is less than the maximum flow of channel 6-9 $(15 + 12.1 < 30)$; that is, a flow "imbalance between supply and demand" occurs. Reflected in Figure 7(a), this creates a bottleneck in flow at 10 s.

In order to solve the problem, we choose to expand the width of the restricting channel while keeping the density unchanged, so as to restore the balance between flow supply and demand. Figure 7(a) also shows the optimized exit flow-time diagram. It can be seen that the bottleneck is eliminated and the evacuation time is shortened; the total evacuation time is about 43.2 s. (The accuracy depends on the selected time step.)

# 6.3.2 Bottlenecks due to uneven distribution of exits

As mentioned above, there is a bottleneck at 30 s in the exit flow-time chart. We plot the remaining number of people at nodes 2, 3, 4 and 5 against time, as shown in Figure 8. It can be seen that the absolute slopes for nodes 3 and 2 are smaller than those for nodes 4 and 5: with similar initial numbers of people, node 3 empties more slowly than node 4, and node 2 more slowly than node 5. At around 30-35 s, the people at nodes 2, 4 and 5 have been evacuated, but node 3 still has people remaining, which causes a small decline in the exit flows. There are two main reasons for this bottleneck: the uneven distribution of personnel among rooms, and the uneven distribution of exits. Together these cause the uneven speed of the overall escape (that is, the different slopes). The solution is to make the distribution of exits as uniform as possible where the numbers of people are similar; in particular, rooms far from an exit should have more outward channels.

Figure 8: Number of people remaining-time graph

# 6.3.3 Single-room exit bottleneck

In addition to the two types of bottleneck shown in Figure 7(a), there is a bottleneck problem not reflected in that situation: the single-room exit bottleneck.

According to Helbing [19], Chunshan Lv [20], Weiguo Song [21] and others, who simulated evacuation using cellular automata and the social force model, when the population density is relatively large an "arched" blockage easily forms at the exit of a room, and the greater the desired speed of the personnel, the more obvious the blockage.

In order to solve this bottleneck problem, we can control the expected evacuation speed through manual guidance and similar means to reduce or even eliminate the exit blockage. (In fact, we set a flow velocity of $0.8\,\mathrm{m/s}$ in the aforementioned model, which already accounts for exit bottlenecks.) According to research by relevant experts [22], there are also more practical and specific measures that can effectively prevent this bottleneck, such as placing a cylindrical obstacle a few meters before the exit.

# 6.4 Adaptive analysis

According to the problem requirements, the museum managers hope to obtain an adaptive model that can address a wide range of considerations and various types of potential threats. Let us take a fire in a room as an example to show that our model is adaptable. Assuming that room 8 is on fire, two requirements must be met: first, only leaving room 8 is allowed; second, the people in room 8 must be evacuated as soon as possible. With our algorithm, both requirements can be satisfied by blocking the entry channels of node 8, as shown in Figure 9(c) and (d), and leading people to evacuate to the neighboring rooms, as in Figure 9(a) and (b).

(a) Normal situation in 5 s

(b) Emergency situation in 5 s

(c) Normal situation in 15 s

(d) Emergency situation in 15 s
Figure 9: Adaptive model schematic

For different emergencies, the corresponding transformation of the model yields a corresponding evacuation plan, so the model is adaptable and applicable.

# 6.5 Entry of emergency personnel

When an accident occurs, emergency personnel are often required to enter the building in time for rescue. In an actual rescue, how to let emergency personnel in while still ensuring the rapid evacuation of occupants is an important issue. We give two options. The first is real-time monitoring of channel flow: when the flow of people drops to 60%-70% of the maximum flow, emergency personnel are allowed to enter. The second, for particularly urgent situations, gives priority to emergency personnel: we choose a shortest path, dedicate it to emergency personnel, and close all channels and stairs linked to this route in the model. Judged against domestic and foreign rescue cases, our scheme meets practical requirements.

# 7 Louvre evacuation model

# 7.1 Application of the dynamic node graph in the Louvre

As mentioned before, we use nodes to represent the relatively independent parts of the building, which we call rooms. We divide the Louvre into five parts: the ground floor (0F), the first floor (1F), the second floor (2F), the first basement (-1F) and the second basement (-2F). These five floors form five planar dynamic-system node graphs, connected by the "channels between layers" formed by the actual stairs.

In the actual process of building the model, we simplify each floor of the Louvre into a node diagram. Figure 10 shows the result for the first floor (1F) [23] as an example.

(a) Actual 1F

(b) Node graph of 1F
Figure 10: Louvre node graph

After the dynamic node diagram of each floor is established, the floors are connected according to the actual distribution of the stairs in the Louvre, as shown in Figure 11.

Figure 11: Louvre's spatial node graph

# 7.2 Analysis of the results of the Louvre model

# 7.2.1 Evacuation simulation

In the previous analysis of the model results, we analyzed the evacuation of a simple building. Now let us turn to the evacuation plan for the complex building of the Louvre.

(a) Flow in 50 s

(b) Flow in 120 s

(c) Flow in 190 s

(d) Flow in 250 s
Figure 12: Dynamic flow distribution

In our simulation of the actual evacuation of the Louvre, we made the following assumptions:

I. The number of people in the building is about 7,000;
II. All persons are distributed in proportion to the actual size of each room;
III. During the evacuation, the flow density matches the actual statistical values.

In Figure 12(a), the four green lines meet at a point: the sink point, representing the external environment. The numbers on the four green lines connected to it are the flows of the four main exits. Within the same floor, the numbers on the lines indicate the flow of people carried by the corridors and other channels in a period of time; the lines between different floors represent the actual stairs, and the numbers on them indicate the flow of people carried by the stairs over a period of time.
+
+As Figure 12 shows, the model simulates the evacuation as a dynamic process.
+
+From the simulation, we can summarize the following conclusions:
+
+I. At the beginning of the evacuation, the exit areas must be kept open;
+II. In the middle of the evacuation, the rooms far from the exits become very congested, so opening other exits closer to them should be considered;
+III. In the later stage of the evacuation, the flow in some channels has decreased, allowing rescuers to enter the venue in time.
+
+# 7.2.2 Bottleneck analysis
+
+As can be seen from Figure 13, the people in the hall can essentially be evacuated within five minutes.
+
+
+Figure 13: Exit flow-time diagram
+
+Figure 14 shows a schematic diagram of the margin of channel flow at each time. The red lines indicate channels that are fully loaded and therefore likely to become evacuation bottlenecks. Using the blue lines near the red ones (channels not yet fully loaded), the evacuation route can be re-planned to relieve the bottleneck.
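+The margin check behind Figure 14 amounts to comparing a capacity matrix with a flow matrix arc by arc; the small matrices below are invented illustrations, not measured Louvre values:

```python
# Sketch: a channel (i, j) is a bottleneck candidate when its carried flow
# reaches its capacity, i.e. its margin is zero. Matrices are illustrative.
capacity = [
    [0, 10, 5],
    [0,  0, 8],
    [0,  0, 0],
]
flow = [
    [0, 10, 3],
    [0,  0, 8],
    [0,  0, 0],
]

def bottlenecks(capacity, flow):
    """Return the (i, j) arcs whose remaining margin is zero ("red lines")."""
    full = []
    n = len(capacity)
    for i in range(n):
        for j in range(n):
            if capacity[i][j] > 0 and capacity[i][j] - flow[i][j] == 0:
                full.append((i, j))
    return full
```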
+
+# 8 How to use the model
+
+To give museum personnel a complete method for applying the model, we offer the following suggestions:
+
+I. Based on the details and dimensions of the museum, refine the model and establish the most accurate spatial node map;
+II. Install a density detector at each key part of the hall to measure density in real time and ensure the accuracy of the data;
+III. Once an accident occurs, use the model immediately for evacuation simulation. Because
+
+
+(a) Margin in 50s
+
+
+(b) Margin in 120s
+
+
+(c) Margin in 190s
+Figure 14: Channel margin diagram
+
+
+(d) Margin in 250s
+
+the calculation speed of this model is much faster than the actual evacuation, museum personnel can find the optimal evacuation route in advance and give timely instructions (we recommend installing clear and conspicuous indicator lights along the evacuation routes);
+
+IV. Because the model is adaptable, museum personnel can adjust it to the actual accident. For example, if a room is on fire, the routes can be re-planned according to the model's results so that people receive real-time, optimal guidance;
+V. Through simulation, bottlenecks in the evacuation process can be identified quickly, so that extra exits can be arranged to make the exit distribution more reasonable;
+VI. Through dynamic simulation, museum personnel can find the most appropriate route and time for rescuers to enter.
+
+# 9 Conclusion
+
+# 9.1 Sensitivity analysis
+
+In the actual evacuation process, objective factors such as errors in the density detectors cause the model to deviate somewhat when simulating an evacuation plan. To determine the effect of these deviations on the final result, we performed a sensitivity analysis: we perturbed the flow-density parameter with a $5\%$ error. The results are shown in Figure 15.
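+The perturbation can be reproduced with the corridor speed formula $v = k - 0.266kD$ (formula (1), $k = 1.40$); the base density of 2.0 persons/m$^2$ below is an illustrative choice within the model's validity range, not a measured value:

```python
# Sketch: perturb the flow density by +/-5% and compare the resulting
# specific flow q = v * D, with v = k * (1 - 0.266 * D) for a corridor.
K = 1.4  # corridor speed coefficient from Table 2

def speed(density):
    return K * (1 - 0.266 * density)

def specific_flow(density):
    # persons per second per metre of channel width
    return speed(density) * density

base = specific_flow(2.0)
perturbed = [specific_flow(2.0 * f) for f in (0.95, 1.05)]
errors = [abs(q - base) / base for q in perturbed]
```

Near the density that maximizes the specific flow, a 5% density error changes the computed flow by well under 5%, which is consistent with the model's tolerance of small detector errors.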
+
+It can be seen that the model can guarantee the accuracy of the simulation within a certain
+
+
+(a) Verification model
+Figure 15: Sensitivity analysis diagram
+
+
+(b) Louvre model
+
+error range. The museum only needs to ensure that the error of the flow-density detection devices is small to obtain accurate results. Moreover, in an actual evacuation the flow density of each part varies with time, which makes accurate real-time detection devices all the more necessary. Since the model responds quickly to changing inputs, it can, to a certain extent, eliminate the impact of errors through iterative updating. In this way, the museum can identify the best evacuation plan more accurately.
+
+# 9.2 The model
+
+# 9.2.1 Strength
+
+The model generalizes well because it considers the general situation rather than any special circumstances. It fully accounts for the structure inside the building, and its output, the route plan, is intuitive. In addition, when a channel, room, or exit needs to be opened or closed, the model can be modified easily, which reflects its excellent adaptability.
+
+# 9.2.2 Weaknesses
+
+To fully exploit the model's dynamic capabilities, the flow density of the channels and exits may need to be monitored, which imposes certain limitations.
+
+# 9.3 The algorithm
+
+# 9.3.1 Strength
+
+I. The effects of individual interactions and the physical parameters of specific channels are taken into account when calculating the maximum flow allowed in each channel. The N&M model and the SES model use real-time human flow density, so the algorithm has a certain dynamic character.
+
+II. Since the planning is performed from the perspective of the system, the algorithm is
+
+extremely fast compared with other algorithms, such as cellular-automaton-based planning algorithms. This allows a number of possible scenarios to be tried quickly by modifying the model parameters in response to an unexpected situation.
+
+III. Since the flow and the margin at any moment in the model can be obtained directly from the algorithm, the bottleneck problem that may occur in the planning scheme and the model can be found directly through the visualization of the flow matrix and the margin matrix.
+
+# 9.3.2 Weaknesses
+
+Firstly, the Ford-Fulkerson algorithm has certain limitations: for particular combinations of capacity values in the graph, the algorithm may fail to terminate. However, since we apply the algorithm to a human-flow problem, the probability of such a special case occurring is essentially zero.
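+Worth noting: this non-termination concern applies to Ford-Fulkerson with arbitrarily chosen augmenting paths. When the augmenting paths are chosen by BFS (shortest first), as in the BFS routine of the appendix, the procedure is the Edmonds-Karp variant, whose number of iterations is bounded independently of the capacity values. A minimal Python sketch (not the paper's MATLAB implementation):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow with BFS-chosen augmenting paths; cap is an n x n matrix."""
    n = len(cap)
    residual = [row[:] for row in cap]  # work on a copy of the capacities
    flow = 0
    while True:
        # BFS from s over positive residual capacities.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:           # no augmenting path left: done
            return flow
        # Bottleneck capacity along the path found.
        path_flow = float("inf")
        v = t
        while v != s:
            u = parent[v]
            path_flow = min(path_flow, residual[u][v])
            v = u
        # Update residual capacities along the path.
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        flow += path_flow

# Illustrative 4-node network: source 0, sink 3.
demo = edmonds_karp([[0, 3, 2, 0],
                     [0, 0, 1, 2],
                     [0, 0, 0, 3],
                     [0, 0, 0, 0]], 0, 3)  # -> 5
```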
+
+Secondly, the calculation results occupy increasingly large amounts of storage as the precision increases. For ordinary buildings, however, the required storage remains within acceptable limits.
+
+# References
+
+[1] C. F. Wakim, S. Capperon, J. Oksman. A Markovian model of pedestrian behavior[C]. 2004 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2004(4): 4028-4033.
+[2] Dirk Helbing, Péter Molnár. Social force model for pedestrian dynamics[J]. Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, 1995, 51(5): 4282-4286.
+[3] Predtechenskii V M, Milinskii A I. Planning for Foot Traffic Flow in Buildings[M]. Stroizdat Publishers, Moscow, 1969.
+[4] Philit J D, Craig J D, Richard L R, et al. SFPE Handbook of Fire Protection Engineering[S]. National Fire Protection Association, 1995.
+[5] Zhong Wei, Tu Rui, Yang Jian-peng, Liang Tianshui. Simulation of Evacuation Process in a Supermarket with Cellular Automata[J]. Procedia Engineering, 2013, 52: 687-692. DOI: 10.1016/j.proeng.2013.02.207.
+[6] Li Junmei, Hu Cheng, Li Yanfeng, et al. Study on the effect of population density of different types of evacuation channels on walking speed[J]. Architecture Science, 2014, 30(8): 122-129. DOI: 10.13614/j.cnki.11-1962/tu.2014.08.023.
+[7] Lee Eric Wai Ming, Shi Meng, Ma Yi. A novel grid-based mesoscopic model for evacuation dynamics[J]. Physica A: Statistical Mechanics and its Applications, 2018, 497: 198-210.
+[8] Lu Junan, Fang Zheng, Lu Zhaoming, et al. Mathematical model of evacuation speed of building personnel[J]. Journal of Wuhan University (Engineering Science), 2002, 35(2): 66-70. DOI: 10.3969/j.issn.1671-8844.2002.02.016.
+[9] Jin Nanjiang, Mao Zhanli. Research on evacuation path model of multi-story buildings in fire environment[J]. Industrial Safety and Environmental Protection, 2018, 44(4): 1-4.
+[10] Jia Yu. Research on the law of evacuation behavior and speed model of the intersection of people in the middle school teaching building[D]. Southwest Jiaotong University, 2016.
+[11] C. Y. Wang, W. G. Weng. Study on evacuation characteristics in an ultra high-rise building with social force model[C]. 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, 2014: 566-571.
+[12] Roytman M Y. Principles of Fire Safety Standards for Building Construction[M]. Amerind Publishing Co., New Delhi, 1975.
+[13] Ando K, Ota H, Oki T. Forecasting the flow of people (in Japanese)[J]. Railway Research Review, 1988, 45(8): 8-14.
+[14] H. Wang, Q. Chen, J. Yan, Z. Yuan, D. Liang. Emergency Guidance Evacuation in Fire Scene Based on Pathfinder[C]. 2014 7th International Conference on Intelligent Computation Technology and Automation, Changsha, 2014: 226-230.
+[15] Zhang Yan, Ren Han. Ford-Fulkerson algorithm and short circle in embedded graph[J]. Chinese Journal of Applied Mathematics, 2008(05): 780-785.
+[16] Xia Xiaomei, Zhou Qianmin. NoC Path Assignment Based on Oriented Ford-Fulkerson Algorithm[J]. Journal of Hefei University of Technology (Natural Science), 2006(03): 316-321.
+[17] Elvin J. Moore, Wisut Kichainukon, Utomporn Phalavonk. Maximum flow in road networks with speed-dependent capacities: application to Bangkok traffic[J]. (4): 489.
+[18] Uri Zwick. The smallest networks on which the Ford-Fulkerson maximum flow procedure may fail to terminate[J]. Theoretical Computer Science, 1995, 148(1): 165-170. ISSN 0304-3975.
+[19] Helbing D, Farkas I, Vicsek T. Simulating dynamical features of escape panic[J]. Nature, 2000, 407: 487-490.
+[20] Lv Chunshan, Weng Wenguo, Yang Rui, et al. Personnel evacuation model based on motion mode and cellular automata in fire environment[J]. Journal of Tsinghua University (Science and Technology), 2007, 47(12): 2163-2167.
+[21] Song Weiguo, Yu Yanfei, Fan Weicheng, et al. A kind of evacuation cellular automata model considering friction and repulsion[J]. Science in China, Series E, 2005, 35(7): 725-736. DOI: 10.3969/j.issn.1674-7259.2005.07.006.
+[22] Wei Chengjie. Research on pedestrian motion modeling and evacuation based on Markov model[D]. Beijing Jiaotong University, 2018.
+[23] Interactive Floor Plans. Retrieved from https://www.louvre.fr/en/plan
+
+# Appendix
+
+# Other figures
+
+
+Figure A-1: Flow diagrams of algorithm
+
+
+(a) Real -2F
+
+
+(b) Simulation of -2F
+
+
+(c) Real -1F
+
+
+(d) Simulation of -1F
+
+
+(e) Real 0F
+Figure A-2: -2F $\sim$ 0F
+
+
+(f) Simulation of 0F
+
+
+(a) Real 1F
+
+
+(b) Simulation of 1F
+
+
+(c) Real 2F
+Figure A-3: $1\mathrm{F}\sim 2\mathrm{F}$
+
+
+(d) Simulation of 2F
+
+# control procedures
+
+# BFS
+
+function [out, Parent] = BFS(Graph, start, End, parent, ROW)
+visited = zeros(1, ROW);
+queue = [];
+queue = [queue, start];
+visited(start) = 1;
+while queue
+    u = queue(1);
+    queue(1) = [];
+    for i = 1:ROW
+        if (visited(i) == 0) && (Graph(u, i) > 0)
+            queue = [queue, i];
+            visited(i) = 1;
+            parent(i) = u;
+        end
+    end
+end
+out = visited(End);
+Parent = parent;
+end
+
+# Ford_fulkerson
+
+function [max_flow, Remain] = Ford_fulkerson(Graph, s, e, ROW)
+NUM_INF = inf;
+parent = zeros(1, ROW);
+parent(:) = -1;
+max_flow = 0;
+start = s;
+End = e;
+[out, parent] = BFS(Graph, start, End, parent, ROW);
+while out
+    path_flow = NUM_INF;
+    start = End;
+    while start ~= s
+        path_flow = min(path_flow, Graph(parent(1, start), start));
+        start = parent(1, start);
+    end
+    max_flow = max_flow + path_flow;
+    v = End;
+    while v ~= start
+        u = parent(1, v);
+        Graph(u, v) = Graph(u, v) - path_flow;
+        Graph(v, u) = Graph(v, u) + path_flow;
+        v = parent(1, v);
+    end
+    [out, parent] = BFS(Graph, start, End, parent, ROW);
+end
+Remain = Graph;
+end
+
+# Channel
+
+function [Graph_Channel] = Channel(Channel_width, Channel_density, ac)
+% v = k - 0.266*k*D with k = 1.40 for corridors, per formula (1)
+v = 1.40 - 0.266.*1.40.*Channel_density;
+Graph_Channel = v.*ac.*Channel_width.*Channel_density;
+end
+
+# Stairs
+
+function [Graph_Stairs] = Stairs(Stairs_width, Stairs_density, ac)
+% v = k - 0.266*k*D with k = 1.08 for the stairs considered, per formula (1)
+v = 1.08 - 0.266.*1.08.*Stairs_density;
+Graph_Stairs = v.*ac.*Stairs_width.*Stairs_density;
+end
+
+# Main
+
+function [Max_flow, Flow, Remain, Time] = Main(G, ac)
+% Full signature (unused here): Main(Stairs_width, Stairs_density, ...
+%     Channel_width, Channel_density, Graph_enter, Graph_exit, ac, Emergency)
+Graph = G;
+% Graph = Channel(Channel_width, Channel_density, ac) + ...
+%     Stairs(Stairs_width, Stairs_density, ac) + Graph_enter + (Graph_exit./5.*ac);
+ROW = length(Graph);
+Graph(2:ROW, :) = Graph(2:ROW, :).*ac./5;
+Total = sum(Graph(1, :));
+
+% Emergency situation: only leaving is allowed.
+% Use this section by uncommenting all blocks marked %{ ... %}
+%{
+if ~isempty(Emergency)
+    Graph(2:ROW, Emergency) = 0;
+    Ge = Graph;
+    t = [];
+    for i = 1:length(Emergency)
+        v = sum(Graph(Emergency(i), :), 2);
+        t = [t, ac*Graph(1, Emergency(i))/v];
+    end
+    Graph(1, Emergency) = 0;
+    Graph(Emergency, :) = 0;
+end
+%}
+
+i = 0;
+Max_flow = [];
+Flow = zeros(ROW, ROW, 1);
+Remain = zeros(ROW, ROW, 1);
+while Total > 0.5
+    i = i + 1;
+    mf = 0;
+    %{
+    if ~isempty(Emergency)
+        for j = 1:length(Emergency)
+            if i*ac - t(j) < ac
+                Graph(1, :) = Graph(1, :) + Ge(Emergency(j), :).*(min(t(j), i*ac) - i*ac + ac)./ac;
+                mf = mf + Graph(1, ROW);
+                Graph(1, ROW) = 0;
+            end
+        end
+    end
+    %}
+    Copy = Graph;
+    [max_flow, Copy] = Ford_fulkerson(Copy, 1, ROW, ROW);
+    Remain(:, :, i) = Copy;
+    Temp = Graph - Copy;
+    Temp(Temp < 0) = 0;
+    %{
+    if ~isempty(Emergency)
+        for j = 1:length(Emergency)
+            if i*ac - t(j) < ac
+                Temp(Emergency(j), :) = Temp(Emergency(j), :) + Ge(Emergency(j), :);
+            end
+        end
+    end
+    %}
+    Flow(:, :, i) = Temp;
+    max_flow = max_flow + mf;
+    Max_flow = [Max_flow, max_flow];
+    Total = Total - max_flow;
+    Graph(1, :) = Graph(1, :) - Temp(1, :);
+end
+Time = i*ac;
+end
\ No newline at end of file
diff --git a/MCM/2019/D/1923074/1923074.md b/MCM/2019/D/1923074/1923074.md
new file mode 100644
index 0000000000000000000000000000000000000000..079c495b7956fbe5c4305dbc23536b528caac689
--- /dev/null
+++ b/MCM/2019/D/1923074/1923074.md
@@ -0,0 +1,473 @@
+Team Control Number
+
+# 1923074
+
+Problem Chosen
+
+D
+
+# 2019 Interdisciplinary Contest in Modeling (ICM) Summary Sheet
+
+(Attach a copy of this page to each copy of your solution paper.)
+
+# Escape the Louvre
+
+# Summary
+
+On average, more than twenty thousand visitors come to the Louvre Museum each day to admire the precious works of art within. Unfortunately, the reputation of the Louvre also makes it a potential target for terrorist attacks [4]. Under emergency situations like this, all visitors in the museum have to be evacuated as soon as possible. Thus the optimal evacuation routes need to be identified and applied in different scenarios.
+
+To solve this problem, we start from the details of specific structures in the Louvre, including stairs, rooms, and doors, constructing several microscopic models to examine the basic properties of these building blocks with professional software. We then represent these structures as nodes and connect them with arcs, forming a macroscopic network model in accordance with the floor plans of the Louvre [6].
+
+Based on the network, we carry out computer simulations to explore the possible evacuation routes and their efficiency. Our primary target is to minimize the evacuation time, but we first use the Floyd-Warshall Algorithm (dynamic programming) to find a shortest-path solution as a basic feasible solution. Starting from this solution, we use the Genetic Algorithm to optimize the route by trial and error. After about 26 rounds of evolution (iterations), the 'chromosomes' converge to the optimal solution, which is presented with visual tools. We also record the real-time number of visitors on each node throughout the simulation and diagnose the results to identify potential bottlenecks.
+
+We then repeat the procedures above to explore different scenarios that may change the optimal solution. We consider three policies in which additional potential exits are opened, and one specific situation in which a section of the network is cut off. We find that the policy that best balances evacuation time and the level of security is to open two additional exits (Porte des Art and Sully 1W). We also estimate by calculation that cutting off one specific section (the room presenting the Mona Lisa) leads to a slightly longer optimal evacuation time.
+
+Due to the limitation of information, some of the problems, such as the handicapped visitors, are not examined in detail in our work; we do, however, discuss how we could solve these problems given additional information. We also give some related suggestions about how the inner structure of the museum could be modified to facilitate the evacuation of the disabled.
+
+# Contents
+
+# 1 Introduction
+
+1.1 Problem Background
+1.2 Our work
+
+# 2 Preparation of the Models
+
+2.1 Basic Assumptions
+2.2 Notations
+
+# 3 The Microscopic Models - Properties of the Nodes and Arcs
+
+3.1 Simplified Models of Staircases
+
+3.1.1 Normal Stairs
+3.1.2 Double Stairs
+3.1.3 Representation of stairs models in networks
+
+3.2 Simplified Models of Doors and Rooms
+
+3.2.1 Simulation of the door model
+3.2.2 Simplification of the door model in networks
+3.2.3 Representation of doors and rooms in networks
+
+# 4 The Macroscopic Model - A Network of the Louvre
+
+4.1 Network Construction
+4.2 Network Properties
+
+# 5 The Model of the Number of Visitors
+
+# 6 The Optimal Solution of Evacuation Routes
+
+6.1 Definition of a Feasible Solution
+6.2 A Basic Solution with the Floyd-Warshall (DP) Algorithm
+6.3 Optimization based on the Genetic Algorithm
+6.4 Results and Diagnostics (Bottlenecks)
+
+# 7 Application of the Model in Different Scenarios
+
+7.1 Additional Doors
+7.2 One Section is Cut off
+
+# 8 Implementation of our model
+
+# 9 Brief Discussions about Several Other Concerns
+
+9.1 Handicapped Visitors
+9.2 Deployment of emergency personnel
+9.3 Implementing our model to other buildings
+
+# 10 Strengths and Weaknesses
+
+10.1 Strengths
+10.2 Weaknesses
+
+# References
+
+# 1 Introduction
+
+# 1.1 Problem Background
+
+The Louvre is one of the largest and most visited museums in the world, attracting tourists from around the globe. However, its complicated structure also makes it difficult for visitors to evacuate should any emergency occur. The museum therefore wishes to improve its evacuation capability through model construction and route finding [3].
+
+Three major problems are discussed in this paper, which are:
+
+- Use the available information to convert the inner structure of the Louvre into a mathematical model.
+- Identify the optimal evacuation routes and analyze the potential bottlenecks.
+- Consider different scenarios and explore several available policies.
+
+# 1.2 Our work
+
+Our work constructs a network model based on the floor plan of the Louvre [6]. Detailed structures of the Louvre are simplified into nodes and arcs in a network. We then carry out computer simulations and use a Genetic Algorithm to find the routes with minimum evacuation time. Finally, we diagnose the results to identify potential bottlenecks and other limitations. The structure of our work is shown in the chart below.
+
+
+Figure 1: The structure of our work
+
+# 2 Preparation of the Models
+
+# 2.1 Basic Assumptions
+
+- All rooms in the Louvre are square, vacant rooms with level grounds. Drawings, statues, pillars, or any other objects placed in any of these rooms have no effect on the behavior of the visitors.
+- During the emergency, all the lifts (elevators) and escalators are shut down due to safety concerns. Visitors are only allowed to use stairs to go to another floor.
+- In the evacuation process, all visitors will follow the designated route (designed by us) under the guidance of emergency personnel. They are not allowed to choose any alternative evacuation routes or use any unopened exit points.
+- A visitor gets out immediately upon reaching any of the museum exits. Their behavior after evacuation is outside the scope of our study.
+- 'Visitors' refers to people who can walk by themselves, unless otherwise stated.
+
+# 2.2 Notations
+
+The primary notations used in this paper are listed in Table 1.
+
+Table 1: Notations
+
+| Symbol | Definition |
| O(i) | the set of all exit nodes, i = 1,2,3. |
| S(i) | the set of all non-exit nodes, i = 1,2,3... |
| E(n) | the set of all nodes in the Louvre; E = O ∪ S. |
| A(k) | the set of all available arcs between two nodes, k = 1,2,3. |
| V = {S, C} | a directed graph with sources S and targets C |
| H(n) | the set of maximum visitor capacities of the Node n. |
| K(n) | the set of real-time number of visitors on the Node n. |
| μij | the entering rate from Node i to Node j |
| t0 | the beginning of the evacuation. t0 = 0 |
| t | time elapsed from the beginning of the evacuation. |
| nin | the total number of visitors needing evacuation nin = ΣKi. |
| nout | the number of visitors evacuated at time t. |
| T | the value of t when nout/nin = 100% for the first time |
+
+# 3 The Microscopic Models - Properties of the Nodes and Arcs
+
+# 3.1 Simplified Models of Staircases
+
+As is assumed above, all the escalators and lifts will be unavailable in emergency situations. Consequently, the staircases will become the only connections between different floors of the palace, making them crucial parts in the evacuation model.
+
+We deem there to be two main types of staircases in the Louvre: the normal stairs and the "double" stairs. These two types of staircases are modeled respectively as follows.
+
+# 3.1.1 Normal Stairs
+
+The "normal" stairs are the typical staircases, with two endpoints on two adjacent floors and a number of steps connecting them. Although normal stairs in the Louvre come in various shapes - rectangular, curved, spiral, etc. - the underlying model is more or less the same: escaping visitors go down from the upper endpoint to the lower one, simultaneously making a right or left turn.
+
+Figure 2: Pictures of a rectangular staircase and a spiral staircase in the Louvre
+
+Source: https://www.louvre.fr/
+
+
+
+To simplify our model, here we model a typical rectangular staircase based on the specifications of the Louvre's inner structure, and set it as the default staircase.
+
+
+Figure 3: A top view and a perspective view of the normal stairs model
+
+
+
+The Pathfinder simulation shows that such a staircase can hold up to 43 people without being too crowded.
+
+
+Figure 4: A simulation of normal stairs
+
+$$
+\mu = \frac {\Delta K _ {i}}{\Delta t} \tag {1}
+$$
+
+where $\Delta K_{i}$ represents the change in the real-time population. Using an extracted period from the Pathfinder simulation, the rate is calculated to be 1.69 persons per second.
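+The estimate $\mu = \Delta K_{i} / \Delta t$ is a simple difference quotient; the sample points below are invented for illustration, not the actual Pathfinder output:

```python
# Sketch: estimate the passing rate from sampled (time, cumulative count)
# pairs over the extracted period (illustrative data only).
samples = [(10.0, 0), (20.0, 17), (30.0, 34)]

def passing_rate(samples):
    """Average rate mu = (K_end - K_start) / (t_end - t_start)."""
    (t0, k0), (t1, k1) = samples[0], samples[-1]
    return (k1 - k0) / (t1 - t0)
```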
+
+# 3.1.2 Double Stairs
+
+The "double" stairs refers to a staircase with two endpoints on the upper floor converging to one endpoint on the lower floor. The structure of this type of staircase resembles a bird: two narrow "wings" and one broad "body".
+
+Figure 5: A picture of the Daru staircase in the Louvre, and a perspective view of our model
+
+http://albertis-window.com/2013/05/museum-shrines-and-performative-rituals/
+
+
+
+Again, we model a typical staircase with CAD tools and simulate the model with Pathfinder. We find that the maximum capacity of one such staircase is around 80, and the average passing rate is calculated to be 3.54 visitors per second.
+
+
+Figure 6: A simulation of double stairs
+
+# 3.1.3 Representation of stairs models in networks
+
+Based on 3.1.1 and 3.1.2, we then convert the stairs models above into nodes and arcs that can be used to construct a network:
+
+1. The endpoints of a staircase are represented by two nodes, one on the upper floor and the other on the lower floor.
+2. The stairs are represented by an arc connecting these two nodes.
+3. Each node has a maximum capacity of $H_{i}$; arcs have no capacities. When the real-time number of visitors on the staircase node reaches $H_{i}$ ($K_{i} = H_{i}$), visitors from the adjacent nodes cannot enter this node ($\mu_{xi} = 0$).
+
+4. The visitors on node $i$ can move to any other node $S_{y}$ at a constant rate of $\mu_{iy}$, unless that node has reached its maximum capacity ($K_{y} = H_{y}$).
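+Rules 3 and 4 amount to a gated transfer: in each time step, a node sends visitors at its rate $\mu$ unless the destination is full. A minimal sketch, with node names, occupancies, and the step length chosen purely for illustration:

```python
# Sketch: move visitors from node x to node y at rate mu for dt seconds,
# but never beyond y's remaining capacity H[y] - K[y] (mu_xy = 0 when full).
def transfer(K, H, x, y, mu, dt):
    moved = min(mu * dt, K[x], H[y] - K[y])
    K[x] -= moved
    K[y] += moved
    return moved

K = {"stair": 40, "hall": 195}   # real-time occupancy (illustrative)
H = {"stair": 43, "hall": 200}   # maximum capacities (illustrative)
moved = transfer(K, H, "stair", "hall", mu=1.69, dt=10)
# only 5 visitors fit before the hall reaches its capacity
```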
+
+According to the simulations and calculations above, the values of $H$ and $\mu$ are given below.
+
+Table 2: Properties of Stairs
+
+| Type of Staircase | Max.Capacity (H/persons) | Passing Rate (μ/person/second) |
| Normal | 43 | 1.69 |
| Double | 80 | 3.54 |
+
+A detailed visual representation of stairs will be demonstrated in Section 4.
+
+# 3.2 Simplified Models of Doors and Rooms
+
+As there are more than 400 rooms and halls in the Louvre, it is virtually impossible, and also unnecessary, to simulate the situation in every single room, since most rooms have little effect on the evacuation plan.
+
+Figure 7: A picture of a typical room in the Louvre
+
+Source: https://www.louvre.fr/
+
+# 3.2.1 Simulation of the door model
+
+To demonstrate our idea, we create a model with three rooms. Room 1 (on the left) has only one door (Door 1); Room 2 has two doors (Door 1 and Door 2); and Room 3 has one door and one exit (Door 2 and Door 3). Initially, all people are randomly distributed in Room 1, and everyone must escape the series of rooms through the door on the right of Room 3. All doors are 96 centimeters wide, the most typical specification used in European buildings.
+
+We still use Pathfinder to simulate the evacuation process. The behavior of 100 people is demonstrated as follows.
+
+
+Figure 8: Extracted sections from the simulation of the room model (at time $t = 0.0, 1.7, 20.0$ , 100.0, 131.3, respectively)
+
+We can observe from the simulation that:
+
+- All 100 people, initially randomly distributed in Room 1, almost immediately cluster at the door.
+- At time $t = 1.7$, which is quite soon, one of the visitors has already passed completely through Door 1, entering Room 2.
+- The flow of people in Room 2 (a room with two doors) is continuous and approximately uniform (it keeps the same speed from the beginning until the last person exits). This substantiates our assumption that the flow speed is unaffected by the density in the source room.
+- Room 3, which can also be seen as a room with two doors, demonstrates a flow pattern identical to that of Room 2. Also, there is little queuing or congestion at Door 2 (the door between Room 2 and Room 3). As a result, we deem it acceptable to ignore Door 2 and consider Room 2 and Room 3 as a single, longer room.
+- The size of a room and its population (or population density) have little or no effect on the speed of going through doors and rooms - we have tried cramming up to 300 people into Room 1, and the flow patterns in Rooms 2 and 3 remain almost the same.
+
+# 3.2.2 Simplification of the door model in networks
+
+Based on the observations above, we make the following simplifications to the inner structure of the Louvre, assuming that all doors in the Louvre are identical (96 centimeters wide) and that at most one door connects any two rooms:
+
+- The doors are the main limitations of visitor flowing speed. The rooms are simply the space between the doors.
+- The transferring rate from one door to another is solely determined by the property of the door, unless the visitors are trying to enter a room that is already full.
+
+# 3.2.3 Representation of doors and rooms in networks
+
+1. A room with only one door is represented by a node connected to the nearest other room.
+2. A series of rooms, each with two doors, can be seen as a single, longer room (or corridor), and is represented by a single arc.
+3. A room with three or more doors is represented as a node. It is worth noting, however, that at most two of its doors will ever be used for evacuation purposes (one in and one out).
+
+Table 3: Properties of Doors and Rooms
+
+| Max.Capacity-Room (H/persons) | Passing Rate-Door (μ/person/second) |
| 200 | 0.88 |
+
+# 4 The Macroscopic Model - A Network of the Louvre
+
+# 4.1 Network Construction
+
+With the rules introduced in the previous section, we can simplify the whole inner structure of the Louvre using nodes and arcs. A network is built based on the official floor plan of the Louvre [6]. The nodes are named from 1 to 58. Note that the -2F consists only of one node, so it is combined into the graph of -1F.
+
+
+Figure 9: The network representation of the inner structure of the Louvre
+
+# 4.2 Network Properties
+
+We shall reiterate some of the most important properties of the network:
+
+1. Each node has a maximum capacity of $H_{i}$; arcs have no capacities. When the real-time number of visitors on a staircase node reaches $H_{i}$ ($K_{i} = H_{i}$), visitors from the adjacent nodes cannot enter this node ($\mu_{xi} = 0$).
+2. The visitors on node $i$ can move to any other node $S_{y}$ at a constant rate of $\mu_{iy}$, unless that node has reached its maximum capacity ($K_{y} = H_{y}$).
+
+3. The capacity of an exit is defined to be infinity. It has no transferring rate because no one will try to leave that node.
+
+This network can also be seen as a graph with undirected edges, which can be stored in MATLAB for future use.
+
+
+Figure 10: An undirected graph representing our network, generated with MATLAB
+
+# 5 The Model of the Number of Visitors
+
+According to data provided by the museum website, the number of visitors to the Louvre in 2017 was 8.6 million. Since the regular opening time of the Louvre is from 9 am to 6 pm, and the extended hours on special dates are ignored in this model, the average number of visitors is estimated to be 23,562 per day, or 2,618 per hour, rounded to the nearest integer. This is further distributed over the opening hours, taking peak hours into account [5]. We assume that terror attacks are most likely to happen during peak hours, due to the motivation of terrorists, and that the inter-arrival times of visitors follow an exponential distribution [1]. It can therefore be inferred that the number of visitors per hour follows a Poisson distribution [2].
+
+The generation for a random number of real-time visitors could be achieved by the following steps:
+
+1. Since we assume that the inter-arrival time is exponential with mean $1/\lambda$, a random arrival time can be produced by the inverse transformation method from a random number uniformly distributed on $(0, 1)$:
+
+$$
+t = - \frac {1}{\lambda} \ln u _ {i}, \quad u _ {i} \sim U (0, 1) \tag {2}
+$$
+
+2. For the Poisson distribution, $n$ visitors appear when the time $t$ satisfies $t_n \leq t < t_{n+1}$. We then produce the random number with
+
+$$
+\left\{ \begin{array}{l} \prod_ {i = 1} ^ {n} u _ {i} \geq e ^ {- \lambda t} > \prod_ {i = 1} ^ {n + 1} u _ {i}, n > 0 \\ 1 \geq e ^ {- \lambda t} > u _ {1}, n = 0 \end{array} \right. \tag {3}
+$$
+
+3. This gives
+
+$$
+n = \min \left\{n: \prod_ {i = 1} ^ {n} u _ {i} < e ^ {- \lambda t} \right\} - 1 \tag {4}
+$$
+
+With this, we generate the initial number of visitors in each room, $K_{i0}$, with $\lambda = 2618 / 50 = 52.36$.
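+Steps 1-3 above can be implemented as a product-of-uniforms generator (equation (4)); a minimal sketch in Python, with the statistical check below using an illustrative rate rather than the paper's $\lambda$:

```python
import math
import random

def poisson_arrivals(lam, t=1.0, rng=random.random):
    """Count n arrivals such that
    prod_{i=1..n} u_i >= exp(-lam * t) > prod_{i=1..n+1} u_i,
    i.e. n = min{n : prod u_i < e^(-lam t)} - 1."""
    threshold = math.exp(-lam * t)
    n, prod = 0, 1.0
    while True:
        prod *= rng()          # multiply in u_{n+1}
        if prod < threshold:   # product has dropped below e^(-lam t)
            return n
        n += 1
```

The returned count has mean approximately $\lambda t$, as expected for a Poisson random variable.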
+
+# 6 The Optimal Solution of Evacuation Routes
+
+After importing our parameters into MATLAB, we set out to find the optimal route.
+
+# 6.1 Definition of a Feasible Solution
+
+One feasible solution, or route, is a directed graph with 58 nodes and 55 directed arcs. Each of the 55 non-exit nodes is the source of exactly one directed arc, while the exit nodes are not the source of any arc.
+
+In mathematical language, a route $V$ is defined by
+
+$$
+V = \{S, C \}, \quad S = [2, 3, 4, 5, \dots, 58], \quad (S _ {i}, C _ {i}) \in G
+$$
+
+where $G$ is the undirected graph of the network.
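+Checking whether a candidate $V = \{S, C\}$ satisfies this definition is mechanical; the following is an illustrative Python helper (`is_feasible_route` and `exits` are our own hypothetical names):
+
```python
def is_feasible_route(S, C, G_edges, exits):
    """A route assigns each non-exit node S_i exactly one successor C_i,
    and every arc (S_i, C_i) must be an edge of the undirected graph G."""
    if len(S) != len(C):
        return False
    if set(S) & exits:            # exit nodes are the source of no arc
        return False
    if len(set(S)) != len(S):     # one and only one arc per source node
        return False
    return all((s, c) in G_edges or (c, s) in G_edges for s, c in zip(S, C))
```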
+
+# 6.2 A Basic Solution with the Floyd-Warshall (DP) Algorithm
+
+To set a benchmark and make further simulations faster to converge, we first find a basic feasible solution to the route problem, looking for a directed graph converging to the exit nodes.
+
+We use the Floyd-Warshall algorithm, a dynamic programming method, to compute the shortest route from each node to any of the three exits, eventually obtaining a shortest-path tree.
+
+This result is almost certainly not the optimal solution, for a shorter path is not equivalent to a faster evacuation. It can, however, serve as a benchmark for further simulations. As the basic feasible solution takes 748.6 seconds to finish the evacuation, we decide that all of the following simulations last at most 1000 seconds: any simulation failing to finish the evacuation within 1000 seconds is considered an 'infeasible' route. The basic solution also provides many parameters that are later used in the probability matrices of the Genetic Algorithm.
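+As a sketch of this step (illustrative Python rather than our MATLAB code; node numbering and edge weights are placeholders), Floyd-Warshall with successor tracking yields, for every non-exit node, the first hop on a shortest path to its nearest exit:
+
```python
import math

def shortest_path_tree(n, edges, exits):
    """Floyd-Warshall over nodes 0..n-1; edges: {(i, j): w}, undirected.
    Returns next_hop[i]: the neighbour to move to from node i on a
    shortest path toward the nearest exit."""
    dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        dist[i][j] = dist[j][i] = w
        nxt[i][j], nxt[j][i] = j, i
    for k in range(n):                 # dynamic programming over midpoints
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    next_hop = {}
    for i in range(n):
        if i not in exits:
            e = min(exits, key=lambda x: dist[i][x])  # nearest of the exits
            next_hop[i] = nxt[i][e]
    return next_hop
```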
+
+As it is not the final solution we are looking for, the result is not visually presented.
+
+# 6.3 Optimization based on the Genetic Algorithm
+
+To perform the Genetic Algorithm (GA), we make the following definitions:
+
+Chromosome - $C$ , the target vector
+
+Fitness Function - Defined by $F = (1000 - T)$, where $T$ is the evacuation time, and $T = 1000$ if a route is 'infeasible'.
+
+Probability of Crossover - a matrix based on the importance of the nodes
+
+Probability of Mutation - a matrix based on a node's distance from the nearest exit node. Specifically, $C_i$ is not allowed to mutate if node $S_i$ is adjacent to an exit node.
+
+Options of Mutation - $\{C_i \mid (S_i, C_i) \in G\}$
+
+Heuristics - Similarity is not punished in our model.
+
+The GA typically starts with randomly generated genes as the first generation. On applying this strategy to this problem, however, we found that the algorithm converges rather slowly: most of the randomly generated genes are unable to finish evacuation within 1000 seconds, making the genetic population very difficult to evolve. As a result, we decided to import the basic solution and its mutations as the first generation. We generate 50 chromosomes in each recursion, and perform 40 recursions. Below is a result showing the evacuation time of each chromosome generated.
+
+
+Figure 11: The result of 40 recursions
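+The selection, crossover, and mutation loop defined above can be sketched as follows. This is an illustrative Python skeleton, not our MATLAB implementation: `run_ga` and the per-node probability dictionaries are hypothetical names, and the `fitness` callback stands in for a full evacuation simulation returning $F = 1000 - T$.
+
```python
import random

def run_ga(seed_route, options, fitness, pc, pm, pop_size=50, generations=40,
           rng=random.Random(0)):
    """Genetic Algorithm over routes. A chromosome maps each non-exit
    node i to a successor chosen from options[i] (its neighbours in G);
    pc[i]/pm[i] are per-node crossover/mutation probabilities."""
    def mutate(c):
        return {i: (rng.choice(options[i]) if rng.random() < pm[i] else s)
                for i, s in c.items()}

    def crossover(a, b):
        return {i: (b[i] if rng.random() < pc[i] else a[i]) for i in a}

    # First generation: the basic solution and its mutations (purely random
    # genes converge too slowly, as noted above).
    pop = [seed_route] + [mutate(seed_route) for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```
+
+Seeding the first generation with the basic solution and its mutations, as above, is what lets the population evolve quickly.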
+
+We conclude that the optimal evacuation time is 468.2 seconds.
+
+The optimized route is shown below.
+
+Figure 12: The optimized evacuation route when opening three main exits
+
+# 6.4 Results and Diagnostics (Bottlenecks)
+
+Compared with the basic feasible solution, the optimized route is clearly much more efficient in evacuating visitors. As a result, our algorithm is validated.
+
+
+Figure 13: Comparison between the basic feasible solution and the optimized route
+
+For the identification of bottlenecks, we store and plot the real-time number of visitors on each node throughout the simulation:
+
+
+Figure 14: Real-time number of visitors on each node during the evacuation
+
+According to the figure, the only room node that ever reaches its capacity limit is Node 22 (the intersection between the Sully Wing and the Richelieu Wing on the Ground Floor). It is full for around 3 minutes during the evacuation, making it a bottleneck in our model.
+
+If the museum would like to keep an evacuation based on this route in order, we recommend deploying more emergency personnel at that node to prevent any potential danger caused by crowding.
+
+# 7 Application of the Model in Different Scenarios
+
+# 7.1 Additional Doors
+
+So far we have only used the three main exits: the Main Entrance on -2F (leading to the Carrousel or the Pyramid), Porte des Lions, and Passage Richelieu. However, three additional exits - Porte des Art and two more (here named Sully GW and Sully GN) - can be opened if necessary. We now examine policies for opening additional doors.
+
+The three scenarios are as follows:
+
+1. Policy 1: Open one additional exit - Porte Des Art (Node 17)
+2. Policy 2: Open two additional exits - Porte Des Art and Sully GW (Node 17, Node 19)
+3. Policy 3: Open three additional exits - Porte Des Art, Sully GW, and Sully GN (Node 17, Node 19, Node 21)
+
+For each of the scenarios, we repeat the procedure in Section 6, using the Genetic Algorithm to find the corresponding route with the fastest time of evacuation.
+
+
+Figure 15: Comparisons between the optimal speeds when additional exits are opened
+
+As can be seen from the figure above, opening two more doors is a balance point between evacuation efficiency and the level of security. It is much faster than opening one or no additional doors, while not as susceptible to art theft as the policy of opening three additional doors, for the third door (Sully GN, Node 21) leads directly to the Rue de Rivoli, making it easier for art thieves to get away. So we decide that opening two doors is the optimal policy if the museum finds it necessary to open more exits.
+
+A detailed evacuation route is given below.
+
+
+Figure 16: The detailed evacuation route when two additional exits are opened
+
+Now we diagnose the room usage situation under this route.
+
+
+Figure 17: Room usage when two additional exits are opened
+
+The bottlenecks are largely alleviated under this policy, and no room is ever fully occupied. This is the main reason why the evacuation is so much more efficient. In most situations, we recommend that the museum open the two additional doors, because doing so is much faster than opening only the main exits. A shorter evacuation time means a higher possibility of saving invaluable lives. However, if the emergency is not very serious, the museum may open only the main exits, which offer a higher level of security against art theft.
+
+# 7.2 One Section is Cut off
+
+Now let us consider the following scenario.
+
+A terrorist has successfully fooled security and carried a bomb into the Louvre. Because he wants to create a sensation, he decides that the best place to set off the bomb is in front of the Mona Lisa, the famous painting located on 1F in the Denon Wing. The blast destroys Node 30, killing all visitors on the node and making it inaccessible for evacuation purposes. The evacuation of people in other rooms begins immediately for fear of further attacks.
+
+Under this circumstance, we consider the policies of opening only the main exits and of opening two additional exits, respectively. We create new routes under each policy, and compare them to the normal ones (those planned without the explosion).
+
+As we can see from the graph (Figure 18),
+
+1. Under the same policy, the cut-off of Node 30 slows down the evacuation process; this effect, however, is not very significant.
+2. Opening two more exits can still significantly reduce the evacuation time, so we recommend this plan.
+
+
+Figure 18: Comparison between different policies and different scenarios
+
+The optimal route after the 'Mona Lisa' explosion, under the policy of opening two additional exits, is given below.
+
+
+Figure 19: Evacuation Route after 'Mona Lisa' explosion, two additional exits
+
+# 8 Implementation of our model
+
+Because the optimal evacuation route changes in different scenarios, we recommend that the Louvre officials place several LED plates in each room of the museum. Each LED plate should be able to display patterns indicating different directions. These plates light up only during an evacuation, indicating the desired moving direction for the visitors. It is important that these plates be powered by backup batteries so that they work even if the regular power supply is cut off.
+
+# 9 Brief Discussions about Several Other Concerns
+
+# 9.1 Handicapped Visitors
+
+We did not take handicapped visitors into account because doing so contradicts one of our assumptions. As far as we can infer from the map, people in wheelchairs can only move from one floor to another using the disabled lifts. However, since we assume that all lifts are shut down due to safety concerns, it is impossible for disabled visitors on higher floors to evacuate. It would be unreasonable for a public museum like this not to offer ramps for wheelchairs - perhaps they are simply not marked on the map. If there really are no wheelchair ramps, we highly recommend that the museum add such ramps between 0F, 1F, and 2F so that disabled visitors on higher floors are able to evacuate.
+
+# 9.2 Deployment of emergency personnel
+
+As a number of emergency personnel are regularly deployed in the museum, additional personnel should be deployed only where they do not reduce the speed of evacuation. We can work out a plan based on our analysis of room usage - rooms and exits without many visitors can be used to station emergency personnel.
+
+# 9.3 Applying our model to other buildings
+
+Our model is quite versatile. The rules of simplification can easily be applied to other large, crowded structures if detailed maps are given. However, the properties of nodes and arcs should be modified based on the specifications of the stairs and doors in other buildings.
+
+# 10 Strengths and Weaknesses
+
+# 10.1 Strengths
+
+- Our model is versatile and able to adjust to different scenarios based on the real situation.
+- Our programs run fast (taking tens of seconds on an ordinary personal computer), so emergency personnel can quickly work out the optimal route when an emergency occurs.
+
+# 10.2 Weaknesses
+
+- The models of stairs and doors may be oversimplified; they may not reflect the actual properties of the structures.
+
+- Some of the model parameters are based on approximation and not very accurate.
+- Our main model fails to take handicapped visitors into account.
+
+# References
+
+[1] Yoshimura, Y., Krebs, A., Ratti, C. (2017). Noninvasive Bluetooth Monitoring of Visitors' Length of Stay at the Louvre. IEEE Pervasive Computing, 16 (2), 26-34
+[2] Wikipedia: Poisson distribution. Accessed 28 Jan. 2019. https://en.wikipedia.org/wiki/Poisson_distribution
+[3] "Pyramid" Project Launch – The Musée du Louvre is improving visitor reception (2014-2016)." Louvre Press Kit, 18 Sept. 2014 www.louvre.fr/sites/default/files/dp(pyramid%2028102014_en.pdf.
+[4] Reporters, Telegraph. "Terror Attacks in France: From Toulouse to the Louvre." The Telegraph, Telegraph Media Group, 24 June 2018 www.telegraph.co.uk/news/0/terror-attacks-france-toulouse-louvre/.
+[5] “8.1 Million Visitors to the Louvre in 2017.” Louvre Press Release, 25 Jan. 2018 http://presse.louvre.fr/8-1-million-visitors-to-the-louvre-in-2017/.
+[6] "Interactive Floor Plans." Louvre - Interactive Floor Plans | Louvre Museum | Paris, 30 June 2016 http://www.louvre.fr/en/plan.
\ No newline at end of file
diff --git a/MCM/2019/D/1924801/1924801.md b/MCM/2019/D/1924801/1924801.md
new file mode 100644
index 0000000000000000000000000000000000000000..e640bfd1432be93b194232285628ac89b0478c77
--- /dev/null
+++ b/MCM/2019/D/1924801/1924801.md
@@ -0,0 +1,457 @@
+Team Control Number
+
+
+# 1924801
+
+Problem Chosen
+
+D
+
+
+2019
+
+MCM/ICM
+
+Summary Sheet
+
+
+A recent increase in terror attacks has raised demand for safe emergency evacuation plans worldwide. We focus on addressing difficulties which arise from evacuating the Louvre, the world's largest art museum. Evacuations are made difficult by the volume and variety of visitors; as a result, Louvre management desire an adaptable model in order to explore a range of evacuation plans over a broad set of considerations.
+
+In our computational network analysis, we partitioned the Louvre into sections and built an agent-based model to simulate evacuations in each section. After developing the logic for the agents, we ran simulations over each section to determine an empirical rate at which agents exited. To connect sections, we abstracted the problem by representing the building as a graph, allowing us to solve for the overall time taken by an evacuation plan as a network-flow problem. A property of this abstraction called strong duality also identified bottleneck edges in the graph. We emphasize the power of abstraction in the adaptability of our model; simulating blocked passages or new secret exits is simply edge removal or addition. Bottleneck identification was our highest priority in considering public safety, as it lets us easily find problematic areas in an emergency.
+
+Our model predicted that a candidate evacuation plan involving all 4 public exits could evacuate the Louvre in 24.34 minutes. Furthermore, our bottleneck analysis revealed that while many bottlenecks surround the pyramid entrance, the entrance itself is not a bottleneck. We also found that keeping this property of the pyramid is crucial in emergencies, as it allows building access for emergency personnel and mitigates increased public safety concerns around the Louvre's most iconic entrance. Additionally, we found that securing the Passage Richelieu was critical to evacuation, as its safety was directly linked to the pyramid's safety. Keeping these entrances open and useful is therefore imperative to both speed and safety considerations in an evacuation.
+
+Overall, our model is powerful due to our ability to model individual human behavior followed by a powerfully adaptable abstraction of building flow dynamics. One weakness of our model is that our theoretical guarantee is given in terms of worst-case scenarios, which may be an upper bound on a real evacuation in more common cases. However, we feel that this weakness is acceptable in evacuation simulations.
+
+# Team #1924801 Problem D: Time to Leave the Louvre: A Computational Network Analysis
+
+# Introduction and Background Information
+
+The Louvre in Paris, France is the world's largest art museum, and received 8.1 million visitors in 2017 [1]. The composition of visitors is heavily varied, with $70\%$ of guests being international, coming from countries such as the United States, China, Brazil, the UK, and so on [1]. However, terror attacks in France have also been increasing [2], making it imperative that Louvre officials have a clear plan for evacuation in the case of an emergency.
+
+The main public entrance is the pyramid entrance two floors below the ground floor [3]. The Passage Richelieu entrance, Carrousel du Louvre entrance, and Portes Des Lions entrance are also potential entrances, although these usually require memberships or reservations. However, an emergency situation would certainly serve as an exceptional case allowing the use of extra entrances in order to quickly and safely evacuate visitors. Additionally, there also exist other entrances that the public is generally unaware of. A natural question becomes whether or not these secret entrances provide sufficient compensation to justify compromising the Louvre's security by revealing their location to the public.
+
+Since these secret entrances are hidden from public knowledge, there is a need for a highly adaptable and easily interpretable model that the Louvre management could use to test multiple evacuation plans. This includes potentially opening up secret exits, considering potential blockages, and generally being able to compare disparate evacuation strategies. The high variety in the population of visitors also makes evacuating difficult due to language barriers for tourists, families that will stick together, and disabled people for whom moving quickly is difficult.
+
+# Restatement of the Problem
+
+We are tasked with the broad problem of designing an evacuation model for the Louvre that allows exploration of a range of options. In order to clarify our purpose, we identify our primary goals as follows:
+
+(1) To determine a means to assess the efficiency of a given evacuation plan.
+(2) To develop - with respect to the above assessment - an optimal evacuation plan without compromising safety.
+(3) To identify key bottlenecks and other obstacles to safer, more efficient evacuations.
+(4) To determine the effect of additional exits or blocked routes on the optimal evacuation plan.
+(5) To communicate a clear plan of implementation through suggested policies and procedural recommendations with an emphasis on safety.
+
+Secondarily, we are also to consider:
+
+(a) The effect of a diverse demographic of visitors (spoken language, size of group, disability status) on evacuation and useful responses or countermeasures
+(b) Potential benefits of technology in aiding evacuation
+(c) Possible deployment routes for emergency personnel
+(d) Adaptability of the model to other large buildings
+
+# General Assumptions
+
+In order to address the variety of goals above, we often made assumptions and decisions to make the problem more tractable. These assumptions are as follows:
+
+- Assumption 1: Evacuees will act strictly in their own self-interest.
+
+Evacuees will not consider a globally optimal solution for everyone, and will instead make a locally greedy solution, modeling the urgency of an evacuation situation.
+
+- Assumption 2: "Natural flow" of evacuees.
+
+Upon notification of a required evacuation, individuals egress to and through the closest exit in order to leave the building as quickly as possible, unless explicitly directed by evacuation procedures and officials otherwise.
+
+- Assumption 3: Strict adherence to procedure.
+
+Individuals will generally follow the evacuation plan provided by Louvre management. For example, they will move to an exit assigned to them by the evacuation plan.
+
+- Assumption 4: Evacuees are safe and outside the responsibility of the Louvre emergency management team once they have exited the building.
+
+While we are well informed about the geometry and environment inside the Louvre, the outside world is complex and ever-changing. Due to this fact and the wide variety of potential evacuation threats, attempting to secure the safety of people outside of the building is also deemed outside of the scope of our plan.
+
+- Assumption 5: Increasing panic causes people to make more and more sub-optimal or irrational choices.
+
+Modeling how individuals react under the urgency of an evacuation situation can help us understand how our model can extend to real-world situations.
+
+- Assumption 6: Elevators are off limits during evacuation situations except for emergency personnel and disabled people.
+
+Elevators can be dangerous in emergency situations [5].
+
+- Assumption 7: Language barriers can be mitigated by appropriate signage and technology in multiple languages, such as multilingual phone apps.
+
+Many of the Louvre's current signs are not written in French, but rather contain universally comprehensible symbolic instructions [6]. Moreover, software packages and phone apps giving potentially non-French speaking evacuees directions are easily written to accommodate different languages.
+
+# Introduction: Definitions and Roadmap
+
+# First Definitions
+
+In order to design an "efficient evacuation plan", we must first define both what an "evacuation plan" consists of and how exactly one might be "efficient".
+
+We define an evacuation plan simply as a collection of pathing procedures that evacuation officials supply and enforce for each evacuee. We allow the procedures to be conditional on the location and state of the evacuee; that is, two evacuees under different circumstances might be directed to two different exits. The simplest evacuation plan would be one completely adhering to the "natural flow" of evacuees in which evacuees move towards the exit nearest to them, in accordance with Assumptions 1 and 2.
+
+Now, to measure the efficiency of each evacuation plan, a first and common approach would simply be to estimate the time it takes to fully empty the building under the given evacuation plan. However, this measure is somewhat naive, since it discounts such factors as safety and is highly dependent on initial conditions. For example, a certain evacuation plan that minimizes exit time enacted on a day with the majority of visitors clustered around the Mona Lisa might attempt to funnel all of these visitors through one exit, which may compromise safety through overcrowding, trampling, and mob panic risks. Thus, this "fast" evacuation plan may not necessarily be desirable.
+
+As an alternative to the time measure, we consider instead the maximum exit rate or, mathematically, the maximum of the time derivative of exited evacuees. On the surface, this seems to be an identical measure, since a decrease in the time for complete evacuation would necessarily mean an increase in exit rate, while an increase in time would indicate a decrease in exit rate. However, these relationships concern average exit rates, which are directly tied to total time, rather than the maximum exit rate. Assuming that the exit rate reaches a peak sometime during the middle of evacuation, a graph of exited evacuees to time elapsed would look something like Figure 1.
+
+
+Figure 1: Sample graph: exited evacuees to time elapsed
+
+The benefit of using maximum exit rate is that while it still accounts for the value of a "fast" evacuation, the main focus is in allowing for a larger flow of people. In other words, when optimizing for maximum exit rates, we actually optimize the throughput of evacuees through the Louvre rather than the output. The value of optimizing maximum exit rates is two-fold: 1) if the Louvre is at high capacity, average exit rates should approach maximum exit rates; 2) if the Louvre is at low capacity, higher throughput should decrease crowding risks. As a result, maximizing throughput is directly correlated with maximizing public safety.
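+Given a cumulative count of exited evacuees sampled over time, this measure is simply the largest finite-difference slope. A minimal sketch (the series in the test is invented for illustration):
+
```python
def max_exit_rate(exited_cumulative, dt=1.0):
    """Maximum exit rate: the largest finite-difference derivative of the
    cumulative exited-evacuee count, in people per unit time."""
    return max((b - a) / dt
               for a, b in zip(exited_cumulative, exited_cumulative[1:]))

def average_exit_rate(exited_cumulative, dt=1.0):
    """Average rate over the whole evacuation, for comparison."""
    span = dt * (len(exited_cumulative) - 1)
    return (exited_cumulative[-1] - exited_cumulative[0]) / span
```
+
+For an S-shaped curve like Figure 1, the maximum rate occurs in the middle and exceeds the average rate.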
+
+# Modeling Roadmap
+
+We now proceed to developing a model that can adequately assess, design, and optimize evacuation plans with respect to our maximum exit rate measure. To achieve these goals, we implement a two-stage model. The first stage seeks to use computational agent-based modeling to understand local evacuee flow dynamics within sections of the Louvre. The second stage conglomerates information on the various sections into a flow network upon which we can assess and optimize evacuation plans. To clarify, a roadmap for our model is described by the following steps:
+
+1. Partition each floor of the Louvre into smaller subsections.
+2. Develop a computational agent-based model in NetLogo to study local evacuation phenomena and evacuee flow for each partition/section of the Louvre.
+
+3. Develop a global network that models each partition/section of the Louvre as nodes, passageways between them as edges, and evacuee flow as weights.
+4. Perform relevant graph algorithms to maximize evacuee flow and predict the effect of adding or removing edges.
+5. Interpret results of both the local model and the graph algorithms in real-world terms and infer useful policy suggestions based on these results.
+
+# Part I: The Local Section Model
+
+The primary content of this section are steps 1 and 2 described in the roadmap. Specifically, partitioning the Louvre and developing the local evacuation model.
+
+# Partitioning Sections
+
+The primary challenges we identified in discussing evacuation models out of the Louvre are the theoretical and computational difficulties involved with understanding the museum's complex layout or geometry. The Louvre consists of nested gallery layouts, several access points to other floors, and multiple exit points, making anything but very simple models of total evacuation flow across the building difficult. Moreover, models across the entire building begin to become computationally infeasible as the Louvre approaches tens of thousands of total visitors a day. As a result, any purely computational modeling paradigm would therefore be reduced to including only very simple behaviors in order to compensate. However, by partitioning the museum into smaller, less complex subsections to be modeled individually, we reduce both computational and theoretical complexity in our modeling, allowing richer, more meaningful extrapolations of real-world behavior.
+
+We chose to model the Louvre by splitting each floor into the five subsections demarcated in Figure 2, labeled A-E (for example, the bottom-left section on the ground floor is labeled "ground floor A"). The Napoleon Hall, in addition, has a pyramid entrance that does not exist on any other floor (as shown by Figure 3), and is in fact the only relevant subsection on that floor. We denote this "Napoleon P".
+
+
+Figure 2: We split each floor of the Louvre into five subsections for computational and theoretical feasibility in modeling. Starting from the bottom and moving counter-clockwise, they are labeled A-E, respectively.
+
+# Development
+
+Our local evacuation model (hereafter called the Local Section Model) is developed in NetLogo, an agent-based modeling software designed for studying complex systems built by Uri Wilensky [7]. The main idea
+
+
+Figure 3: The Napoleon Hall contains a pyramidal subsection unique to its floor, and will be denoted "Napoleon P". In fact, this is the only relevant subsection of the Napoleon Hall.
+
+of agent-based modeling is that agents are single units with specific, well-defined goals. While an individual agent's behavior is typically simple, the complex behavior of a system of agents is usually more than the sum of its parts [9]. In the context of this problem, since each individual person's goal is to successfully evacuate, an agent-based model is highly applicable. Furthermore, Figure 4 shows the resultant interface, which is quite easy to use and interpret. In particular, this figure shows our representation of section Ground D, specifically its complex gallery system. The white agents acting on a grid of green (passable) and black (impassable) patches represent evacuees finding their way through various galleries. The blue patches represent entryways from which more evacuees enter, and the red patches represent exits through which the evacuees egress. The specific behavior logic of the agents is shown in Figure 5. Note that each agent is equipped with a variable panic attribute and a fixed speed attribute. Further information on the speed attribute is contained in the following section, while the panic attribute is explained here.
+
+
+
+
+Figure 4: A representation of the complex galleries present on ground D.
+
+The logic in Figure 5 is consistent with our assumption that each agent acts in their own self-interest, such as
+
+
+Figure 5: A flowchart describing an individual agent's behavior logic.
+
+making locally optimal steps towards the nearest exit. However, this factor alone causes purely deterministic behavior that does not appear to accurately model human movements, especially in a tense situation. For this, we added a panic parameter to represent increased tension in the individual, as evacuees react to being unable to move due to high crowd density by shuffling around, looking for other exit routes. While it is difficult to quantify panic in the exact sense, it is not difficult to include its effects in a model per our assumptions about panic. For this, we took inspiration from a probabilistic technique known as simulated annealing [14]. Following this general technique, we increase an agent's panic at each time step during which it is stationary, and compute $p_m$, the probability of moving to a patch slightly further from its destination, by
+
+$$
+p _ {m} = \exp \left(- \frac {1}{\text {panic}} \right)
+$$
+
+Our choice of $p_m$, in the context of the general simulated annealing technique, draws analogies to real physical systems [14]. Additionally, it is useful because, for positive values of panic,
+
+$$
+\lim _ {\text {panic} \rightarrow 0 ^ {+}} p _ {m} = 0, \quad \lim _ {\text {panic} \rightarrow \infty} p _ {m} = 1
+$$
+
+which means that all values of $p_m$ can actually be used as probabilities.
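+A sketch of this panic rule in Python (our model is implemented in NetLogo; `PANIC_INCREMENT` mirrors the panic-increment slider and its value here is an assumption):
+
```python
import math
import random

PANIC_INCREMENT = 0.2   # assumed value for the panic-increment slider

def move_probability(panic):
    """p_m = exp(-1/panic): near 0 for calm agents, approaching 1 as panic grows."""
    return math.exp(-1.0 / panic)

def step_when_blocked(panic, rng=random.random):
    """A stationary agent becomes more panicked, then with probability p_m
    shuffles to a patch slightly farther from its exit."""
    panic += PANIC_INCREMENT
    return panic, rng() < move_probability(panic)
```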
+
+Lastly, we should give some mention to the slider parameters that appear in Figure 4. The people slider refers to the initial population of agents present in the room. Scaling this parameter changes the initial condition. The ppp slider refers to the maximum density of agents per passable patch. Scaling this parameter up increases the maximum allowable crowdedness. The repulsion-factor slider is related to what each agent determines to be "too crowded" during local pathfinding. Scaling this parameter up decreases crowding tolerance. The speed-stdev slider refers to the variance of speed attributes in the population of agents, while
+
+min-speed refers to the minimum speed attribute an agent can have. The panic-increment slider refers to how much more panicked each agent gets when unable to move, while min-panic refers to the minimum panic attribute an agent can have. The entry-rate and open-entry sliders are related to the spawning of new agents through the blue entry patches.
+
+# Physical Interpretability
+
+In order to make sure our model had real, physically interpretable results, we took steps to ensure that our computed rates had reasonable physical counterparts. To scale up to the dimensions of the Louvre, we used Google Maps to find the global coordinates of corners of the Louvre. Figure 6 shows the points we chose as references. With these coordinates, we used a Python package called GeoPy [10] to calculate the distances, and examples of these distances are also shown in Figure 6.
+
+
+Figure 6: Points pinged on Google Maps, with some example distances.
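+GeoPy's geodesic call uses an ellipsoidal Earth model; a pure-Python spherical (haversine) approximation illustrates the computation:
+
```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6371000.0):
    """Great-circle distance in metres between two (lat, lon) points,
    a spherical approximation of GeoPy's geodesic distance."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2.0 * radius_m * math.asin(math.sqrt(a))
```
+
+For points a few hundred metres apart, as here, the spherical and ellipsoidal results differ by well under one percent.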
+
+To determine the speed of the agents, we referred to research by Yosritzal et al., who simulated a tsunami evacuation in Indonesia [8]. Given that we are also modeling evacuations, we chose to incorporate their results, which give the average walking speed of an individual during an evacuation as $1.419\frac{\mathrm{m}}{\mathrm{s}} = 4.656\frac{\mathrm{ft}}{\mathrm{s}}$. As such, we scale each subsection's representation so that one patch is four feet by four feet. As a result, one tick (the unit of time in NetLogo) in the model's progression is roughly equivalent to one second of real-world time. We take this as a reasonable approximation because the agents are allowed to move in both the cardinal and diagonal directions. Given that these movements represent a 4-foot or $4\sqrt{2} \approx 5.66$-foot movement, respectively, it is reasonable to translate 4.656 feet per second into 1 patch per tick. Additionally, to model a diverse population's varying walking speeds, we approximate the distribution of walking speeds with a normal distribution of mean 4.656 ft/s and standard deviation 0.170 ft/s, derived from the statistics of [8] on a large population of age 20-60.
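+The speed scaling can be sketched as follows (Python; the mean and standard deviation are the statistics from [8], and the helper name is ours):
+
```python
import random

PATCH_FT = 4.0            # one NetLogo patch represents 4 ft x 4 ft
MEAN_SPEED_FPS = 4.656    # mean evacuation walking speed from [8], ft/s
STDEV_SPEED_FPS = 0.170   # standard deviation from [8], ft/s

def draw_speed_patches_per_tick(rng=random.Random(0)):
    """Draw an agent's speed from N(4.656, 0.170) ft/s and convert to
    patches per tick (1 tick ~ 1 s). Cardinal moves cover 4 ft and
    diagonal moves 4*sqrt(2) ft, so ~1 patch per tick is a reasonable
    overall average."""
    return rng.gauss(MEAN_SPEED_FPS, STDEV_SPEED_FPS) / PATCH_FT
```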
+
+# Corner Bottlenecking: The Price of Turning
+
+Before moving on to further results, which are detailed later, it is important to discuss the effect of corners and turning. Inspection of agent behavior demonstrates that the act of turning and the simple presence of corners present significant bottlenecks to optimal evacuee flow in the Local Section Model. Moreover, the prevalence of both turns and corners makes analyzing their effects important to the overall global analysis of exit throughput.
+
Given an individual agent's wish to take the shortest path to an exit, a compressionary phenomenon begins to manifest at both corners and orthogonal exits, where an orthogonal exit is defined as an exit perpendicular to the entry corridor from which the agent entered. Figure 7a gives an example of this phenomenon at the Porte Des Lions exit of ground A, where we can clearly see the effect of each agent's desire to escape as quickly as possible: a triangular wedge formation that restricts maximum outward flow, rather than the linear flow of Figure 7b. While mitigated significantly by our agents' repulsion force, the effect is still substantial: whereas the orthogonal exit modeled in Figure 7a had a maximum output flow of 4.2 agents per second, the linear exit modeled in Figure 7b had a maximum output flow approaching 4.8 agents per second, a $14\%$ increase.
+
+
+Figure 7: Left (a): Describes the effect of turning on exit flow, which totals 4.2 evacuees per second. Right (b): Describes a linear exit flow, which totals 4.8 agents per second.
+
+This phenomenon can be extended to most turns, an extension which becomes apparent when examining corners such as those in Figure 8a. The model represented by Figure 8a has an output flow of 2 evacuees per second, whereas the model represented by Figure 8b has an output flow of 2.6 evacuees per second, an increase of $30\%$ .
+
+
+Figure 8: Left (a): Describes the effect of corners on exit flow, which totals 2 evacuees per second. Right (b): Describes a linear output flow for comparison, which totals 2.6 evacuees per second.
+
We understand that the orientation of the Porte Des Lions exit cannot realistically be changed to match that of the entry doorway. However, one policy recommendation we propose is the presence of emergency personnel to direct evacuee traffic so that its flow more closely resembles that of Figure 7b rather than 7a, and that of Figure 8b rather than 8a. Moreover, any future secret exits should be oriented with this phenomenon in mind, i.e. facing the largest open corridor. We detail further results and policy recommendations in later sections; we give this corner analysis as a demonstration of NetLogo's utility in understanding local evacuation phenomena.
+
+# Part II: The Global Flow Model
+
This section addresses parts 3 and 4 of the road map by abstracting the problem with a network flows formulation, solving for an optimal solution, and discussing model adaptability.
+
+# Abstraction as a Network
+
We use a graph to represent the different sections and the connections between them. A graph $G = (V, E)$ is a set of vertices $V$ and a set of edges $E$ that represent the connections between vertices. Namely, an edge can be represented as an ordered pair $(u, v)$, which means the edge starts at $u \in V$ and ends at $v \in V$. In the context of our problem, we let our predefined simpler sections represent vertices and let the existence of a pathway between two sections, either a hallway or a staircase, represent an edge. By examining floor plans [3], we determined the locations of all staircases and the subsections they connect. Figure 9 shows the resultant graph representation. Note that each node is labeled by two letters, such as L_A, which corresponds to "Lower ground A"; the remaining prefixes correspond to "Napoleon": N, "Ground": G, "First": F, and "Second": S. For reference, the $A - E$ lettering is shown in Figure 2.
+
+
+Figure 9: General graph showing all possible edges
+
+# Network Flows Formulation
+
In order to formulate this problem in an abstract setting, we introduce the problem of network flows [13]. In particular, we discuss the maximum flow problem and explicitly discuss the connection to our model. Consider a general directed graph $G = (V,E)$ with a source vertex $s\in V$ and a sink vertex $t\in V$. Additionally, define a capacity function $c: E\to \mathbb{R}^+$ that assigns each directed edge $(u,v)$ a capacity $c(u,v)$. An $s - t$ flow on a network is a function $f: E\to \mathbb{R}$ that satisfies the following constraints:
+
+Skew symmetry: $f(u,v) = -f(v,u), \forall (u,v) \in E$
+
+Capacity: $f(u,v) \leq c(u,v), \forall (u,v) \in E$
+
Balance: $\sum_{v\in V}f(u,v) = 0,\ \forall u\in V\setminus \{s,t\}$
+
+Notice how the balance constraint excludes $s$ and $t$ . This is because the value of a flow, $|f|$ , is defined as
+
+$$
|f| = \sum_{v \in V} f(s, v) = \sum_{v \in V} f(v, t)
+$$
+
The maximum flow problem asks for the flow of maximum value on a given network. Intuitively, the flow problem can be thought of as sending water through a network of pipes. If we think of flow as water, edges as pipes, and vertices as junctions, then the skew symmetry constraint says that the amount of water sent across an edge is equivalent to a negative amount of flow sent in the opposite direction. The capacity constraints say that each pipe limits how much water can pass through it. Lastly, the balance constraints say that every unit of water that flows into a non-terminal junction must also flow out. The question then becomes: how much water can be sent from the source to the sink through the pipes per unit time? A specific example is shown in Figure 10.
+
+
+Figure 10: An example network with capacities written on the edges. Maximum flow $= 11$ (send 6 on the top path and 5 on the bottom path).
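The three constraints can be verified mechanically. The sketch below checks a candidate flow on an assumed three-vertex toy network (not the network of Figure 10):

```python
def is_valid_flow(vertices, cap, flow, s, t):
    """Check skew symmetry, capacity, and balance for an s-t flow.

    `cap` and `flow` map ordered pairs (u, v) to numbers; absent pairs are 0.
    """
    c = lambda u, v: cap.get((u, v), 0)
    f = lambda u, v: flow.get((u, v), 0)
    skew = all(f(u, v) == -f(v, u) for u in vertices for v in vertices)
    capacity = all(f(u, v) <= c(u, v) for u in vertices for v in vertices)
    balance = all(
        sum(f(u, v) for v in vertices) == 0
        for u in vertices if u not in (s, t)
    )
    return skew and capacity and balance

# Toy network: s -> a -> t with capacity 3 on each edge, carrying 2 units.
V = ["s", "a", "t"]
caps = {("s", "a"): 3, ("a", "t"): 3}
flows = {("s", "a"): 2, ("a", "s"): -2, ("a", "t"): 2, ("t", "a"): -2}
ok = is_valid_flow(V, caps, flows, "s", "t")
```

Note how skew symmetry forces the negative entries for the reverse pairs; the capacity check still passes for them because a negative flow is below the zero capacity of a non-edge.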
+
In order to cast our evacuation problem as a network flow problem, we need to make slight modifications to the general network structure described earlier. For example, the classical network flow problem assumes a single source and a single sink, whereas the initial state of the Louvre during an evacuation has visitors starting in multiple sections of the museum. We solve this problem by creating what we call a "super-source" vertex $s$ that connects to each section of the museum. For a node $r$, we set $c(s, r)$ to be the initial population of the section. When we compute the maximum flow, this extra set of edges can send at most the initial population of the room as flow, after which movement from one section to another becomes the limiting factor. Similarly, we add a "super-sink" with a directed edge into it from every section that allows people to exit the museum. This construction reduces our problem to the maximum flow problem. In terms of producing an answer for our specific network, there are many well-studied algorithms that give not only the maximum flow but also the allocation of flows on the edges. In particular, we use the Edmonds-Karp algorithm (1972) [15] to solve for the maximum flow on our network.
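A minimal sketch of this reduction, assuming a toy museum of two sections (the populations and capacities below are illustrative, not our Louvre data): a super-source S feeds each section its initial population, the exit edge goes to a super-sink T, and Edmonds-Karp augments along shortest residual paths:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    `cap` is a dict-of-dicts of residual capacities and is mutated in place.
    """
    total = 0
    while True:
        # BFS for a shortest augmenting path s -> t in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # Collect the path edges, find the bottleneck, and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            rev = cap.setdefault(v, {})
            rev[u] = rev.get(u, 0) + b
        total += b

# Toy museum: section A exits directly; section B must pass through A.
# The super-source S feeds each section its initial population; the
# exit edge goes to the super-sink T. (All numbers are illustrative.)
cap = {
    "S": {"A": 50, "B": 80},  # initial populations
    "B": {"A": 30},           # corridor capacity B -> A
    "A": {"T": 60},           # exit throughput of A's door
}
m = max_flow(cap, "S", "T")  # limited by A's door and the B -> A corridor
```

Here the flow sends A's 50 people straight out, then only 10 of B's 80 can follow before A's door saturates, for a value of 60; BFS-based augmentation gives the standard $O(VE^2)$ bound.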
+
Once we have the maximum flow value $m$, we have the interpretation that the given evacuation plan allows a maximum of $m$ people per second to be evacuated from the Louvre. This rate is optimal under the constraints; if $p$ people are in the Louvre when the evacuation is called, then the minimum time to evacuate the Louvre, $t_{\mathrm{min}}$, is given by
+
+$$
t_{\mathrm{min}} = \frac{p}{m}
+$$
+
The value of $t_{\mathrm{min}}$ gives us a heuristic by which we can rank different evacuation strategies. While it leaves room for discussion about how much $t_{\mathrm{min}}$ underestimates the actual time, we can still rank strategies relative to each other. Also, once we identify where the bottlenecks are, we can better assess whether $t_{\mathrm{min}}$ is a significant underestimate of the time needed.
+
+# Finding Bottlenecks
+
A big advantage of our reduction is that it allows us to easily find bottlenecks. To understand how, we need to introduce another abstract problem on graphs: the minimum cut problem. As in the maximum flow problem, we have a graph $G = (V,E)$, a source vertex $s \in V$, a sink vertex $t \in V$, and a capacity function $c: E \to \mathbb{R}^+$. An $s - t$ cut is a subset of edges $C \subseteq E$ such that removing $C$ from $E$ disconnects the graph, leaving $s$ and $t$ in different components. The cost of a cut is given by
+
+$$
\operatorname{cost}(C) = \sum_{(u, v) \in C} c(u, v)
+$$
+
The minimum cut problem asks for a cut of minimum cost. The reason for introducing the min-cut problem is that it is intimately related to the max-flow problem by the max-flow min-cut theorem [16], which states that the maximum flow from $s$ to $t$ in a network is exactly equal to the cost of the minimum $s - t$ cut. Furthermore, since this theorem is a consequence of strong duality [17] in linear programming, the optimal solution exhibits a property known as complementary slackness. In this context, complementary slackness says that every edge in the minimum cut carries flow equal to its capacity in the maximum flow. We can therefore read the saturated edges of the minimum cut as exactly the bottlenecks of the network.
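To make this concrete, the sketch below extracts a minimum cut from a maximum flow on an assumed four-vertex network (capacities and flow values chosen for illustration): vertices reachable from the source in the residual graph form the source side, and the saturated edges leaving that side form the cut.

```python
from collections import deque

# Assumed toy network (not our Louvre graph): capacities and a maximum flow.
cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3}
flow = {("s", "a"): 2, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 2}  # value 4

# Residual graph: forward edge where spare capacity remains,
# backward edge wherever positive flow could be undone.
residual = {}
for (u, v), c in cap.items():
    if c - flow[(u, v)] > 0:
        residual.setdefault(u, []).append(v)
    if flow[(u, v)] > 0:
        residual.setdefault(v, []).append(u)

# Source side of the cut: vertices reachable from s in the residual graph.
seen, q = {"s"}, deque(["s"])
while q:
    u = q.popleft()
    for v in residual.get(u, []):
        if v not in seen:
            seen.add(v)
            q.append(v)

# Saturated edges crossing out of the source side form the minimum cut.
min_cut = [(u, v) for (u, v) in cap if u in seen and v not in seen]
```

As the max-flow min-cut theorem requires, the cut cost here equals the flow value of 4.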
+
+# Adapting our Model
+
Since we are unaware of the exact location of any secret exits, we must keep the model flexible in the event that secret exits need to be opened. This requirement highlights the power of our abstraction using graphs and network flows. For example, the addition of a secret exit in a certain section can be represented by an edge from that vertex to the supersink, while blockages between sections are represented simply by removing the corresponding edges. Removing edges can also represent closing existing exits, such as Passage Richelieu, by deleting the corresponding edge to the supersink. Although the supersource is not explicitly shown in the graph figures, it too supports adaptability: as discussed in the development section, we use the supersource vertex to feed in an initial population of people, so the Louvre staff can set varying initial distributions of visitors throughout the museum and observe the effect on evacuation proceedings.
+
+Louvre management can also experiment with parameters within the NetLogo model; for example, they can observe the effect increasing panic rates have on individual and groups of agents. Alternatively, they can intentionally set some agents to be significantly slower than others and use this to represent disabled evacuees. With the flexibility in these parameters, the management can test both different evacuation plans and the effects of human behavior on these plans. In this way, they can decide on a set of evacuation strategies to deploy depending on varying evacuation situations.
+
It is important to note that despite our concentration on the specifics of the Louvre's layout, our model is highly adaptable to other buildings and floor layouts. In fact, the only part of the model that needs to change in response to a different building layout is the Local Section Model: NetLogo representations of the new building's floor sections would have to be built, and the different evacuation plans represented within them. We assert that these are the only required changes, since the Global Flow Model is not specific to any particular building and only requires accurate identification of the edge weights (one example of which we provide in Appendix A).
+
+# Part III: Results, Discussion, and Recommendations
+
In this section, we address step 5 of the roadmap through discussion of the following topics:
+
+1. The Pyramid is NOT a Bottleneck
+2. Passage Richelieu is Critical to Evacuation Proceedings
+3. Ground D is a Bottleneck
+4. Braess' Paradox
+
+5. The Residual Network Gives Pathing Recommendations for Emergency Personnel
+6. Summary of Policy Recommendations
+
In this section, we discuss possible evacuation plans and our model results. For our main model run, we allow the main pyramid entrance, the Passage Richelieu entrance, the Carrousel du Louvre entrance, and the Porte Des Lions entrance as museum exits, since they are publicly known from their use in Affluences [12] and commonly used. In particular, we found the respective sections to be Passage Richelieu in Lower ground D, Carrousel du Louvre in Ground E, and Porte Des Lions in Ground A. With this information in mind, our evacuation plan calculates, for each section, the closest exit section in terms of Euclidean distance and sends all people towards that exit. For this setup, we calculated a throughput of 17.8 people per second. For reference, since the Louvre had 8.1 million visitors in 2017 [1] and is not open on Tuesdays [4], this leaves an average of about 26,000 people per day. While the busiest day likely had significantly more than the average, it is unlikely that all visitors were in the Louvre at the same time, so we take 26,000 as a proxy for the worst case in an evacuation; this figure is easily adjustable in our model, as discussed previously. Had we found information about the distribution of visitors over time, we could have made a better judgment for this figure, but our experiments simply use 26,000 people. Also, without extra information about the distribution of people, we allocated them uniformly across the rooms; this too is an easily adaptable part of the model. With these decisions, this evacuation plan gives
+
+$$
t_{\min} = \frac{26000}{17.8} = 1460.67\ \mathrm{s} = 24.34\ \mathrm{min}
+$$
+
To see the bottlenecks, refer to Figure 11 for a visualization. There are bottlenecks from each of $L_{D}$, $G_{A}$,
+
+
+Figure 11: A representation of the bottlenecks on our constructed graph. Green vertices have a direct exit edge to the supersink. Red edges identify edges in the minimum cut, which represent inherent bottlenecks to outward exit flow. Any attempt to improve Louvre evacuation time should begin with these edges.
+
and $G_{E}$ to the supersink. These are the edges corresponding to the Passage Richelieu, Porte Des Lions, and Carrousel du Louvre entrances, respectively. This means that the global bottleneck for these sections is actually exiting the building itself, not getting to the exit. This is in contrast to the $N_{P}$ vertex for the pyramid: there, the bottleneck involves getting to the pyramid, not crossing through it. In particular, the stairs between $L_{B}$ and $N_{P}$ and between $L_{C}$ and $N_{P}$ form the bottlenecks for this particular evacuation plan.
+
+# The Pyramid is NOT a Bottleneck
+
+Napoleon P, or more colloquially the pyramid, is the main entrance provided for the public. Furthermore, it is rate-limited by a set of stairs leading up towards ground level. As such, we would intuitively expect the edge leading from Napoleon P towards the outside, or supersink, to be a part of the minimum cut presented in the section above, and therefore a bottleneck which requires further analysis.
+
This turns out not to be the case: by examining the minimum cut, we see from the edges from lower ground C to Napoleon P and from lower ground B to Napoleon P that it is, in fact, entering the pyramid that represents the true bottleneck, not the pyramid itself.
+
+As such, our policy recommendation concerning the pyramid is theoretically simple: provide a higher access rate to the pyramid. In practice, however, we understand this to be difficult, requiring either the construction of new staircases or the widening of existing ones. A more feasible policy recommendation is a priority on the opening of secret exits surrounding the pyramid, which may provide relief to the entryways into the pyramid. More specifically, if secret exits exist in lower ground B or lower ground C, these exits would provide the most relief given our current evacuation plan. We will see in following sections why the pyramid exit has such a large effect on the efficacy of a given evacuation plan. Note also that we recommend these locations only in the event that museum staff decide to open secret exits at all, since recommending the construction of new public exits is beyond the scope of this project.
+
+# Passage Richelieu is Critical to Evacuation Proceedings
+
The most important section identified in our evacuation plan is lower ground D, because of the relief it provides on entry into the pyramid. We see this not only because Passage Richelieu is part of our minimum cut, but also because removal of its representative edge connecting lower ground D to the supersink, denoted $(L_{D},S)$, constitutes a 6.8 evacuee per second reduction in our model's total throughput. This reduces our initial value of 17.8 evacuees per second to a modest 11.0 evacuees per second, and our estimated total evacuation time from 24.34 minutes to 39.39 minutes, a staggering $62\%$ jump in evacuation time. Note that $(L_{D},S)$ has a capacity of 7 evacuees per second; the 0.2 evacuee per second discrepancy comes from a rerouting in our max-flow network. This is, in fact, an example of the adaptability of our model detailed in an earlier section.
+
More importantly, we observe that the removal of $(L_D,S)$ changes our minimum cut dramatically; in particular, the edge from the pyramid to the supersink, denoted $(N_P,S)$, becomes a bottleneck point. For the graph this is irrelevant; output flow is output flow, wherever it comes from. Qualitatively, however, safety concerns around the pyramid are more dire than around our other exit bottlenecks. The pyramid's glass composition is one concern; another is that, given the pyramid's status as the Louvre's icon and main public entrance, it is likely to be targeted first in the event of an external attack. As a result, protecting the Passage Richelieu is also a form of protection for the pyramid.
+
Our policy recommendation is therefore an increased security presence concentrated on securing the Passage Richelieu. This matches well with the fact that we model Passage Richelieu to have, at 7 evacuees per second, the largest exit throughput of any of our four public entrances. In addition, its utility in reducing strain on the pyramid entrance is highly important to safe evacuation in certain emergency types.
+
+# Ground D is a Bottleneck
+
The most restrictive edge in our minimum cut, and therefore the most powerful bottleneck on our outgoing flow of evacuees, is the edge connecting ground D to ground E. In fact, this D-E edge allows a maximum throughput of only 1.8 evacuees per second, an unexpectedly low number given the width of the corridor connecting ground D and ground E, modeled conservatively at 4 patches, or 20 feet, wide.
+
+To understand this discrepancy, we turn to our Local Section Model for insight. Our given evacuation plan asks all evacuees in ground D to move towards the Carrousel du Louvre exit in ground E. However,
+
+
+Figure 12: A depiction of bottlenecking in ground D. Sources such as stairs and other hallways are marked in blue, whereas the exit corridor is marked in red. The most prominent bottlenecking section is boxed in purple, whereas an unused path is boxed in orange. An example of a secondary (corner) bottleneck is circled in yellow.
+
their progress is impeded by the narrow corridors and turns of the complex, nested galleries detailed in Figure 4. As a result, evacuees asked to traverse ground D towards the corridor leading to ground E are blocked by other evacuees, leading to the bottleneck described above as well as increasing levels of panic in the population. This is particularly problematic because ground D is the only section that leads to ground E; note in Figure 12 how many evacuees are trapped in the middle common area, and how correspondingly few are able to push through to the exit. Moreover, rising panic levels, overcrowding, and the resulting public safety dangers contribute to difficulties beyond a limitation on exit flow.
+
To quantify the effect of ground D bottlenecking on total global flow, we note that since ground D is the only section connecting to ground E, the exit flow out of the Carrousel du Louvre exit is $\min(F_e, F_d)$, where $F_e$ is the maximum flow out of the Carrousel du Louvre exit and $F_d$ is the flow from ground D to ground E, both measured in evacuees per second. Experimental results give $F_e = 4.2$ and $F_d = 1.8$ evacuees per second, for a total exit flow out of the Carrousel du Louvre of 1.8 evacuees per second. As a result, the layout of ground D is directly responsible for a net loss of 2.4 evacuees per second. Theoretically removing the constraint imposed by $F_d$ could therefore improve global exit flow from 17.8 to 20.2 evacuees per second, improving total evacuation time from 24.34 minutes to 21.45 minutes. This is the largest such potential improvement found across our model, and is therefore critical to understanding how a Louvre evacuation could be tightened.
+
One other theoretical experiment we performed is the replacement of the Carrousel du Louvre exit with an equivalently sized exit in ground D, placed precisely where the largest number of evacuees begin to crowd (Figure 13); call this new exit "passage ground D". Placement of passage ground D results in an exit flow of 4.8 evacuees per second, as opposed to the 1.8 evacuees per second provided by the Carrousel du Louvre exit. Note that this value matches well with the linear flow out of the Porte Des Lions exit described by Figure 7b; since the Porte Des Lions exit is modeled at the same size as the Carrousel du Louvre exit, this indicates that passage ground D reaches some level of optimality in its exit throughput. The presence of passage ground D would result in a global output flow of 20.8 evacuees per second, reducing total evacuation time to 20.83 minutes, a decrease of roughly $14\%$ from the original value of 24.34 minutes.
+
+The Carrousel du Louvre is a public exit, and its use therefore does not involve the cost of revealing a secret
+
+
+
+
+Figure 13: Placing an exit, marked in red, where the largest amount of evacuees begin to congregate, provides a network flow of 4.8 evacuees per second as opposed to a base flow of 1.8 evacuees per second.
+
exit; as a result, barring obstacles, the Carrousel du Louvre exit is important to a majority of evacuation plans. However, the Carrousel du Louvre exit is currently underutilized relative to its maximum output capacity in our evacuation plan. In fact, this is true for all evacuation plans involving the Carrousel du Louvre exit, since ground E, where the exit is located, is connected only to ground D. Its throughput is thus rate-limited by the large, complex network of galleries contained in ground D, especially by the section outlined in purple in Figure 12.
+
+Several policy recommendations can be inferred from these observations. The first is the widening of the section Figure 12 outlines in purple; the proximity of gallery walls to museum walls constricts the natural flow of evacuees towards the exit and causes increasing levels of panic as well. Opening this section would allow more evacuees to take advantage of this space.
+
A second policy recommendation is the placement of emergency personnel to direct evacuees around the "back", through the section Figure 12 outlines in orange. Currently, agents acting in their self-interest by following the shortest path to the exit squeeze towards the purple-outlined section, causing our bottleneck; directing people through the orange-outlined path, while locally suboptimal for an individual, would increase total throughput and therefore provide a globally better solution. Alternatively, a software package, such as a phone app highlighting unused or less crowded routes to a destination, would also improve the global evacuation situation.
+
Another policy recommendation is the removal of "secondary" (corner) bottlenecks, one example of which is circled in yellow in Figure 12. Several of these exist in ground D, restricting evacuee flow and imposing multiple bottlenecks before the exit point, trapping a large number of people in a small space and, by restricting their sense of progress towards their goal, increasing population panic levels. Note that some of these secondary bottlenecks may be hidden by the large flood of people we use to extract maximal capacity from the exits. Simpler gallery structures would help produce a more natural flow of evacuees, potentially increasing throughput and lowering public safety concerns about overcrowding.
+
More drastic recommendations center around the exit depicted in Figure 13. Direct construction of a new exit is beyond the scope of reasonable recommendations for this paper; however, if there exists a secret exit in ground D, opening it would provide the largest gain in exit throughput predicted by our model. Furthermore, if a new exit, public or secret, were to be constructed, we predict that the optimal location would be approximately where passage ground D is currently located in Figure 13.
+
+# Braess' Paradox
+
One procedural recommendation relates to sections of the museum with the general shape of first floor B. We can draw such a section in graph form as shown in Figure 14. The purpose of this example is to illustrate Braess' Paradox [11], which states that removing edges, i.e. movement options, from a network can actually improve traffic flow across it. Note that flow in this context refers to the actual time taken for people to cross the network, not network flow in the abstract setting discussed in Part II.
+
+
Figure 14: Example to illustrate Braess' Paradox, given by [11].
+
In Figure 14, assume that 4000 people attempt to cross from the start node to the end node. The time taken to traverse an edge is either 45 or $0.01P$ depending on its label, where $P$ is the number of people currently on the edge. First, consider the case where the edge labeled $x$ does not exist, and denote the path through $v_{1}$ as the $v_{1}$ path (similarly for the $v_{2}$ path). If $A$ people take the $v_{1}$ path and $B$ people take the $v_{2}$ path, then the times for the two paths are $0.01A + 45$ and $0.01B + 45$, respectively. Assuming rational people take the path that minimizes their own travel time, they reach an equilibrium where both paths take the same amount of time, giving the following system of equations:
+
+$$
\begin{array}{l} 0.01A + 45 = 0.01B + 45 \\ A + B = 4000 \end{array}
+$$
+
This system solves to $A = B = 2000$, so each person takes $45 + 0.01(2000) = 65$ units of time. Now consider the case where edge $x$ does exist and takes a very generous 0 time to traverse. If a single person takes the path $start - v_{1} - v_{2} - end$, their traversal time becomes $0.01(2000) + 0.01(2001) = 40.01$, a saving of nearly 25 units. However, multiple people would try this route, and with each additional person its time increases, until 2500 people take the route, at which point it takes $0.01(2500) + 0.01(4000) = 65$ units of time, the same as before. However, those taking only the $v_{2}$ path now find that their route takes 85 units of time, so they too are incentivized to take the $start - v_{1} - v_{2} - end$ path. Now everyone's path takes $0.01(4000) + 0.01(4000) = 80$ units of time, while anyone taking either of the original paths would require $45 + 0.01(4000) = 85$. So everyone takes the new route, and the addition of an extra route has actually made the global flow worse for everyone.
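The arithmetic of this example can be checked with a short script (the edge-cost constants 45 and 0.01 are those given above; the equilibrium loads are taken from the argument in the text):

```python
# Edge costs from the example: a constant 45, or 0.01 per person on the edge.
N = 4000  # people crossing from start to end

def var(load):
    """Variable-cost edge: 0.01 * P, where P is the load on the edge."""
    return 0.01 * load

CONST = 45.0  # constant-cost edge

# Without the shortcut: the crowd splits evenly, A = B = 2000.
t_without = var(N // 2) + CONST  # 0.01 * 2000 + 45 = 65

# With a zero-cost shortcut: at equilibrium everyone takes the
# variable-shortcut-variable route, loading both variable edges with all N.
t_with = var(N) + var(N)         # 0.01 * 4000 + 0.01 * 4000 = 80

# Deviating back to an original path is even worse once everyone has switched.
t_deviate = CONST + var(N)       # 45 + 0.01 * 4000 = 85
```

The script confirms that adding the free edge raises everyone's travel time from 65 to 80 units, with no profitable unilateral deviation.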
+
This section demonstrates the effect Braess' Paradox can have in evacuation situations under selfish-behavior assumptions. Beyond this theoretical example, Valiant et al. [11] have shown that, with high probability, Braess' Paradox occurs in networks and produces results similar to the example in Figure 14. It is purely a result of each individual's selfish choices, which is consistent with our assumption of people's behavior in evacuation contexts. Accordingly, we recommend that for any sections of the Louvre with this middle corridor, such as the B sections we defined, emergency personnel be placed there to route people out of the middle corridor. Not allowing anyone to take the middle corridor would allow a higher global evacuation flow.
+
+# The Residual Network Gives Pathing Recommendations for Emergency Personnel
+
Another consequence of the Global Flow Network is that, using the max-flow solution over our constructed graph, we can calculate a residual network, defined in [13], to find entry pathways for emergency personnel. In the residual network, a path from any exit point (Pyramid, Carrousel du Louvre, Passage Richelieu, or Porte Des Lions) to an interior section of the museum exists if and only if there exists a path from the supersink to that interior node in the residual graph. If such a path does not exist, emergency personnel must force their way directly against the flow of traffic; by using a pathway in the residual network, however, emergency personnel can enter the building without obstructing the flow of evacuees out. This highlights another reason for securing the Passage Richelieu: by keeping the edge from the pyramid to the supersink out of the min-cut, i.e. not a bottleneck, we ensure a pathway by which emergency personnel can enter the museum.
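A sketch of this pathing check, assuming a hypothetical residual adjacency list (the section names follow our L_/G_/N_ convention, but the edges are illustrative, not our actual max-flow solution):

```python
from collections import deque

# Assumed residual adjacency (illustrative only): an edge u -> v means
# personnel can move u -> v without opposing the evacuee flow.
residual = {
    "T": ["G_A"],    # supersink back into the Porte des Lions section
    "G_A": ["L_A"],
    "L_A": ["N_P"],
}

def personnel_path(residual, sink, target):
    """BFS from the supersink; returns a residual path to `target`, or None."""
    parent = {sink: None}
    q = deque([sink])
    while q:
        u = q.popleft()
        if u == target:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in residual.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None

path = personnel_path(residual, "T", "N_P")
```

If the returned path is None, the chosen evacuation plan leaves no counter-flow-free route to that section, which is exactly the situation the recommendation below seeks to avoid.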
+
+As such, we come to more policy recommendations. The first is to prioritize evacuation plans such that there exists an entry pathway in the residual network; in many ways, this can be summarized into preventing the pyramid exit from becoming a bottleneck. The second is to use the residual network to find entry pathways for emergency personnel such that their entrance into the building does not affect the flow of evacuees out of the building.
+
+# Summary of Policy Recommendations
+
We summarize our preceding policy recommendations below, and add a few which relate not directly to our results but rather to model structure.
+
+- Recommendation 1: Increase access to the pyramid
+- Recommendation 2: If the emergency situation calls for revealing secret exits to the public, prioritize secret exits in sections surrounding the pyramid provided such exits exist. Further prioritize these exits in particularly pyramid-sensitive emergency situations as well as situations where a large number of emergency personnel is needed.
+- Recommendation 3: Increase security presence around the Passage Richelieu, and place higher priority on securing its efficacy for evacuation use in order to also secure the safety of the pyramid exit.
+- Recommendation 4: Use emergency personnel to direct evacuees in a fashion such that their orientation towards exits resembles the linear flow detailed in Figure 7b.
+- Recommendation 5: Any future secret or public exits constructed should be oriented such that a linear flow is established through the nearest entryway or corridor.
+- Recommendation 6: Use emergency personnel to direct evacuation flow around the "back" of ground D, i.e. the section highlighted in orange in Figure 12. Alternatively, technology such as a phone app could be easily implemented to assign paths to different individuals or groups, and may in fact be the optimal implementation of this recommendation.
+- Recommendation 7: Remove corners and complications from sections such as ground D, where complex gallery layouts produce secondary bottlenecks decreasing public safety. Simpler gallery layouts would increase evacuation efficacy dramatically.
+- Recommendation 8: If the emergency situation calls for revealing secret exits to the public, prioritize a secret exit in ground D provided such an exit exists. Further prioritize this exit in situations where higher throughput is the primary goal.
+- Recommendation 9: Use emergency personnel to direct evacuees away from middle corridors, such as those detailed in [14].
+- Recommendation 10: Prioritize evacuation plans such that the residual network of the max-flow solution on our constructed graph has a path into the museum.
+- Recommendation 11: Use such paths as described above so that emergency personnel can enter the museum without inhibiting evacuee flow outwards.
+- Recommendation 12: Use technology and appropriate signage to mitigate the effects of language barriers on a diverse population. Any routing phone app such as recommended in Recommendation 6 should necessarily be made multilingual.
+
+# Evaluation of the Model
+
+# Sensitivity Analysis
+
+Our Global Flow model, in particular, is quite robust: provided appropriate edge capacities, we use an algorithm proven to find the optimal solution [15]. As such, the model displays sensitivity mostly in the Local Section Model, due to the high-variance nature of agent-based models and the need for several meaningful hyperparameters to describe human behavior. Our sensitivity analysis therefore involved varying parameters of the NetLogo model to observe its robustness in response to such variance. In particular, we chose to focus on varying the following 4 parameters: repulsion factor, speed-stdev, panic-increment, and ppp (people per patch). Starting from the base values of the constants used for our prior results, we ran controlled experiments over a single variable at a time. We repeated this for each of the 4 parameters, and found results for each of 3 section types: a gallery, a tight corridor, and a corner. The base values for the constants are shown in Table 1(a). For the sake of comparison, we also provide the flow capacities we found with our default parameters.
+
+| Parameter | Default Value | Section Type | Flow Capacity |
| repulsion | .35 | Gallery | 1.8 |
| speed-stdev | .0365 | Corridor | 2.58 |
| panic-increment | .1 | Corner | 2.00 |
| ppp | 1 | | |
+
+Table 1: Left (a): Default values for computation in the Local Section Model. Right (b): Flow values with default parameters.
+
+| Section Type | Repulsion | Speed-stdev | Panic-increment | ppp |
| Gallery | 1.14 | .98 | 1.06 | 2.38 |
| Corridor | 2.16 | 2.45 | 2.56 | 4.78 |
| Corner | 1.93 | 1.88 | 1.99 | 4.44 |
+
+Table 2: Summary of results from holding 3 variables at their defaults and varying the fourth. Varied values: repulsion factor = 1, speed-stdev = .15, panic-increment = .2, ppp = 2
+
+From the above tables, we see that the NetLogo model's raw outputs depend on the chosen parameters, as expected. However, the relative ordering across the three section types never changes across the experiments, which is what we would expect given our qualitative understanding of the complexity of each layout. Moreover, none of the results are unreasonable in the context of our model, which demonstrates both stability and, importantly, that the parameter choices are meaningful. A poorly developed computational model with heavily tuned parameters could not simply have some of its parameters doubled or tripled without producing unreasonable answers.
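The controlled one-variable-at-a-time procedure above can be sketched as follows. The parameter names and values come from Tables 1(a) and 2; `run` is a hypothetical stand-in for a single NetLogo simulation returning a flow capacity, not part of our actual toolchain:

```python
# Default values from Table 1(a) and the varied values from Table 2.
DEFAULTS = {"repulsion": 0.35, "speed-stdev": 0.0365,
            "panic-increment": 0.1, "ppp": 1}
VARIED = {"repulsion": 1.0, "speed-stdev": 0.15,
          "panic-increment": 0.2, "ppp": 2}

def one_at_a_time(sections, run):
    """Vary each parameter alone while holding the other three at their
    defaults; `run(section, params)` stands in for one NetLogo run."""
    results = {}
    for name, varied_value in VARIED.items():
        params = {**DEFAULTS, name: varied_value}  # only `name` is changed
        for section in sections:
            results[(section, name)] = run(section, params)
    return results

# e.g. one_at_a_time(["Gallery", "Corridor", "Corner"], run) fills Table 2.
```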
+
+Increasing repulsion factor increases agent aversion to crowded areas, pushing them away from each other. As a result, fewer people tend to cross the exit line. However, an interesting side effect of such a high repulsion factor is that its performance is similar to the default parameters across the corner. Inspection of this phenomenon reveals that a high repulsion effect actually leads to more optimal agent turning behavior, as shown in Figure 15. More specifically, high repulsion induces agents to use the width of the corner more effectively, in a very rare instance of locally greedy behavior leading to globally optimizing behavior.
+
+
+Figure 15: Left (a): Describes the effect of corners on exit flow, which totals 2 evacuees per second. Right (b): Describes a high-repulsion output flow for comparison, which also totals almost 2 evacuees per second despite negative effects of increased repulsion on exit flow in general.
+
+The increase in the standard deviation of walking speeds would, a priori, appear to have no effect because of the symmetry of the sampled normal distribution. However, this is not the case, because the walking speed of a group is controlled by its slowest-moving members in front. As a result, having some people move even slower causes the people behind them to slow down as well, so our results have a reasonable real-world interpretation. Panic-increment is also interesting, since it heavily affects the Gallery section but affects neither the corner nor the corridor by a large amount. We manually inspected these runs and found that the increase in panic only comes into play because of the complicated layout of the gallery. In the other two sections, people can quite easily maintain movement, so panic does not build up nearly as much, which also makes sense in context; more qualitatively, high panic matters most when people are trapped in more complicated, maze-like surroundings. Lastly, for ppp, the observed increase in flow rate matched our expectation that more people per patch would produce a larger final exit throughput.
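The slowest-member argument can be checked with a quick numerical sketch (the means, deviations, and group size below are illustrative, not our NetLogo parameters): if a group moves at the speed of its slowest member, widening the speed distribution lowers the expected group speed even though the sampled mean is unchanged.

```python
import random

def expected_group_speed(mean, stdev, group_size=5, trials=20000, seed=0):
    """Average, over many sampled groups, of the slowest member's speed
    (speeds drawn from a normal distribution, clipped at zero)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(max(rng.gauss(mean, stdev), 0.0)
                     for _ in range(group_size))
    return total / trials

low = expected_group_speed(1.0, 0.05)   # tight spread of walking speeds
high = expected_group_speed(1.0, 0.30)  # wide spread: slow members dominate
```

Both estimates fall below the mean of 1.0, and the wider spread gives a clearly lower expected group speed, matching the qualitative argument above.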
+
+Overall, our sensitivity analysis shows that the model does not behave erratically with regard to large increases in parameters. Not only does this imply the model is stable, but the Louvre management will be able to test a large range of scenarios and plans and still retain the full utility of our model.
+
+# Strengths & Weaknesses
+
+# Strengths
+
+- Agent based modeling allows us to observe how complex behaviors emerge from a set of simple rules. In addition, NetLogo provides an intuitive interface to change parameters. We also took many steps to ensure physical interpretability so we could map solutions back to the real world.
+- Our decision to break up the Louvre into multiple simpler sections allows us to observe and understand agent behavior within specific sections and behavior that leads to bottlenecks.
+- In our global flow model, we use a high level of abstraction that makes it easy to integrate results from the local section model to assess the entire system. Also, the use of strong duality allows for a very natural interpretation in terms of global bottlenecks.
+- The high level of abstraction naturally lends itself to adaptability. Most test scenarios such as secret or blocked pathways can easily be mapped onto the Global Flow Model and incorporated into the strategy.
+- The Local Section Model is quite robust in regards to large changes in the parameters. Additionally, the results of parameter changes have reasonable interpretations in the context of the situation.
+
+# Weaknesses
+
+- Our choice to maximize throughput (or max flow), as opposed to time elapsed, yields an evacuation plan that is theoretically great in the worst case, but may not be the quickest in average or more common cases.
+- The Local Section Model inherently models both the Louvre layout and agent pathfinding in a discrete context. Given sufficient computational power, a more continuous model would give more realistic results.
+- The agent logic in our Local Section Model places a very heavy emphasis on distance from an exit, which causes the majority of people to crowd towards one exit. Only when huge queues of people form behind bottlenecks are some agents able to find alternate exits.
+- There is a discrepancy between the optimization of the agents in the Local Section Model and flow optimization of the Global Flow Model. In particular, as the names suggest, agent logic in the Local Section Model prioritizes locally optimal, selfish behavior, whereas the Global Flow Model finds a solution that is optimal over the totality of the populace.
+
+# Conclusion & Future Work
+
+In our paper, we designed a highly interpretable two-part model that is able to accurately evaluate the efficacy of any particular evacuation plan and the safety risks therein, and therefore can be used to search for an optimal Louvre evacuation plan. This model, divided into an agent-based computational model denoted the Local Section Model and a maximum flow graph computation denoted the Global Flow Model, is also able to identify key bottlenecks in such evacuation plans, giving Louvre staff an opportunity to understand how any given evacuation plan might be improved with respect to museum layout, evacuee path-finding, etc. In particular, we identified the critical bottleneck in our evacuation plan, as we felt that accurately addressing this bottleneck problem was the most critical problem with regards to public safety. Importantly, intelligent use of the Local Section Model also allows Louvre staff to model the effects of a diverse population - including different primary languages, a handicapped population, or large families. Our model is furthermore not restrained to the four given public exits, but can have any combination of exits and obstacles mapped onto the Global Flow Model; in fact, our model can be adapted to any other large building without significant changes in its construction. We furthermore communicated a clear plan of twelve recommendations for the Louvre staff to consider, including an emphasis on securing the pyramid exit in order to ensure an entry path for emergency personnel and to mitigate increased public safety difficulties with respect to the pyramid exit in particular. As such, we identify and provide policy, procedural, and technological recommendations regarding our general evacuation plan.
+
+Future work would include addressing some weaknesses in our model, including developing a stronger and more robust path-finding algorithm to more accurately model exit queueing phenomena, as well as providing a higher level of granularity in the NetLogo representations of each floor section of the Louvre.
+
+# References
+
+[1] “8.1 Million Visitors to the Louvre in 2017.” Louvre Press Release, 25 Jan. 2018, presse.louvre.fr/8-1-million-visitors-to-the-louvre-in-2017/.
+[2] Reporters, Telegraph. "Terror Attacks in France: From Toulouse to the Louvre." The Telegraph, Telegraph Media Group, 24 June 2018, www.telegraph.co.uk/news/0/terror-attacks-france-toulouse-louvre/.
+[3] "Interactive Floor Plans." Louvre - Interactive Floor Plans — Louvre Museum — Paris, 30 June 2016, www.louvre.fr/en/plan.
+[4] "Hours, Admissions, and Directions." Louvre. https://www.louvre.fr/en/hours-admission-directions
+[5] "Elevator Safety." National Elevator Industry Inc. http://www.neii.org/safety_elevator.cfm
+[6] "Exit Sign." Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Way_Out_sign,_Louvre,_25_November_2011.jpg
+[7] Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/NetLogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
+[8] Yosritzal, B. M. Kemal, Purnawan, and H. Putra. (2018). An observation of the walking speed of evacuees during a simulated tsunami evacuation in Padang, Indonesia. IOP Conference Series: Earth and Environmental Science 140: 012090. doi:10.1088/1755-1315/140/1/012090.
+[9] Morris, D. M., and K. Adams. (2013). The Whole is More than the Sum of its Parts: Understanding and Managing Emergent Behavior in Complex Systems. CrossTalk 26: 15-19.
+[10] https://geopy.readthedocs.io/en/stable/#module-geopy.distance
+[11] Valiant, G., and T. Roughgarden. (2010). Braess's Paradox in Large Random Graphs. Random Structures & Algorithms 37: 495-515.
+[12] https://www.affluences.com/lovvre.php
+[13] http://www.cs.cmu.edu/\~avrim/451/lectures/lect1005.pdf
+[14] https://en.wikipedia.org/wiki/Simulated_annealing
+[15] https://en.wikipedia.org/wiki/Edmonds%E2%80%93Karp_algorithm
+[16] https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem
+[17] https://en.wikipedia.org/wiki/Strong_duality
+
+# Appendix A
+
+| Edge | Flow Capacity |
| (SA,SB) | 7.8 |
| (SB,FB) | 5 |
| (SC,FC) | 5.6 |
| (SD,FD) | 2.6 |
| (SE,SD) | 7.8 |
| (FA,FB) | 7.8 |
| (FB,GB) | 5 |
| (FC,GC) | 5.6 |
| (FD,GD) | 2.6 |
| (FE,FD) | 7.8 |
| (GA,SS) | 4.2 |
| (GB,GA) | 7.4 |
| (GC,LC) | 5 |
| (GC,GD) | 9 |
| (GC,GB) | 3 |
| (GD,GE) | 1.8 |
| (GE,SS) | 4.2 |
| (LA,LB) | 7.8 |
| (LB,NP) | 3.2 |
| (LC,NP) | 3.3 |
| (LD,NP) | 4.8 |
| (LD,SS) | 7 |
| (LE,LD) | 7.8 |
| (NP,SS) | 5 |
+
+Table 3: An example of edge weights found through the Local Section Model and input into the Global Flow Model. Each floor is abbreviated as follows: (Napoleon, N), (Lower Ground, L), (Ground, G), (First, F), (Second, S), so that G_A represents section A of the ground floor. Note that SS is taken to represent the supersink, not a section on the second floor.
\ No newline at end of file
diff --git a/MCM/2019/D/2019_ICM_Authors_Com/2019_ICM_Authors_Com.md b/MCM/2019/D/2019_ICM_Authors_Com/2019_ICM_Authors_Com.md
new file mode 100644
index 0000000000000000000000000000000000000000..0865884f02440ba0beb72bcc96d7c298a6608b4d
--- /dev/null
+++ b/MCM/2019/D/2019_ICM_Authors_Com/2019_ICM_Authors_Com.md
@@ -0,0 +1,59 @@
+# Author's Commentary: Time to Leave the Louvre
+
+Michelle L. Isenhour
+Operations Research Dept.
+Naval Postgraduate School
+Monterey, CA 93940
+mlisenho@nps.edu
+
+# Introduction
+
+Crowd evacuation and response to emergency situations is a relatively new and interesting research field. It is truly an interdisciplinary subject area, sitting at the intersection of mathematics, physics, civil engineering, fire safety engineering, computational science, and human psychology.
+
+Conceptually, the process of evacuating a building is relatively simple: safely move individuals from inside of the building to outside. However, the process becomes significantly more intricate as building complexity and occupancy increase. Given a sizable building with a large number of occupants, the challenge to maintain order, ensure individual safety, and evacuate as quickly as possible becomes difficult.
+
+From a mathematical standpoint, there is an abundance of macroscopic and microscopic evacuation models. These include
+
+- network flow models;
+- discrete microscopic models, such as the social force model, cellular automata model, particle-swarm optimization models, and agent-based models; and
+- continuous macroscopic models, such as fluid dynamic models and gaskinetic models [Zhou et al. 2018].
+
+As is the case with most models, the simplicity or complexity of the model is determined solely by the application and implementation of the model.
+
+A serious incident inside the building, such as a fire, chemical spill, or a terrorist attack, further complicates the evacuation process. Risk-management considerations must be made to assess the situation quickly and determine impacts. Exits may be blocked, emergency personnel may need to respond, and the occupants may become unpredictable and/or panic.
+
+Potential validation data from a routine fire drill in a student center was presented at the 2016 Pedestrian and Evacuation Dynamics conference at the University of Science and Technology in Hefei, China [Isenhour and Löhner 2017]. A routine fire drill in a modern building roughly $30,000\mathrm{m}^2$ in size with fewer than 400 occupants took $9\mathrm{min}$ to evacuate. So it is reasonable to expect that a unique historical building such as the Louvre, which is more than twice that size, with limited egress points and many more occupants, would take much longer to evacuate, especially under conditions of duress.
+
+# Formulation and Intent of the Problem
+
+The interdisciplinary nature of evacuation modeling, combined with the challenges of the most-visited art museum in the world, in a city often targeted by terrorism, led directly to this year's Interdisciplinary Contest in Modeling (ICM$^{\text{TM}}$) problem "Time to Leave the Louvre."
+
+The goal of the problem was for teams to develop interdisciplinary solutions that would allow museum emergency planners to explore a range of options, policies, and procedures to quickly and safely evacuate visitors from the museum.
+
+From the outset, the problem challenged teams to model the building, the number and diversity of visitors, and the locations of exits. Using their evacuation models, the teams were asked to identify potential bottleneck (congestion) areas and consider how technology could be used to aid the evacuation process. The desired end result was a set of policy and procedural recommendations for emergency evacuation of the Louvre.
+
+Evacuating the Louvre intentionally presented an ambiguous problem set. Since most teams would be unfamiliar with the Louvre, they would first need to determine the physical size and layout of the Louvre. Then they would need to determine an appropriate quantity, type, and distribution of museum occupants.
+
+Following the determination of the inputs and implementation of the evacuation model, student teams were asked to use the model to explore alternative scenarios and provide policy and procedural recommendations to the management staff of the Louvre, as well as discuss the portability of their model to other similar structures.
+
+# Comments on the Results
+
+Many teams opted to explore and implement some combination of the aforementioned evacuation models. The very best teams realistically modeled all five floors of the Louvre and used a single model to find reasonable total evacuation times under a variety of situations. Rather than focus on the model, they focused on using the model to conduct analysis and provide recommendations to the emergency planners, demonstrating how to use the model to make decisions.
+
+Although simulation of the evacuation was not explicitly required, many teams took the extra step and created a simulation of the evacuation, using software tools such as AnyLogic, QGIS, NetLogo, Pathfinder, buildingEXODUS, Exit89, and MATLAB. Used correctly, these 2D and 3D representations of the evacuation enhanced the understanding of the evacuation flow and helped teams identify the locations of potential bottlenecks. However, in most cases, the inclusion of an evacuation simulation did not enhance the paper, because the simulation tool used was often disconnected from the evacuation model previously described.
+
+# References
+
+Isenhour, M.L., and R. Löhner. 2017. Validation data from the evacuation of a student center. *Collective Dynamics* 1: 472-479.
+
+Zhou, M., H. Dong, B. Ning, and F.Y. Wang. 2018. Recent development in pedestrian and evacuation dynamics: Bibliographic analyses, collaboration patterns, and future directions. IEEE Transactions on Computational Social Systems 5 (4): 1034-1048.
+
+# About the Author
+
+Michelle Isenhour is an Assistant Professor in the Operations Research Dept. at the Naval Postgraduate School in Monterey, CA where she teaches statistics and data analysis. Her research focuses on the microscopic modeling and simulation of pedestrians during evacuations and emergency scenarios, with a particular emphasis on pedestrian initial response to emergency and/or evacuation situations. Michelle holds a Ph.D. in Computational Science and Informatics from George Mason University and an M.S. in Applied Mathematics from Western Michigan University.
+
+
\ No newline at end of file
diff --git a/MCM/2019/D/2019_ICM_Judges_Com/2019_ICM_Judges_Com.md b/MCM/2019/D/2019_ICM_Judges_Com/2019_ICM_Judges_Com.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d8e7fa8721c92a49bb3cb58de9c06af2864ec70
--- /dev/null
+++ b/MCM/2019/D/2019_ICM_Judges_Com/2019_ICM_Judges_Com.md
@@ -0,0 +1,382 @@
+# Judges' Commentary: Time to Leave the Louvre
+
+Ralucca Gera
+Dept. of Mathematics
+Naval Postgraduate School
+Monterey, CA 93940
+rgera@nps.edu
+
+Michelle L. Isenhour
+Operations Research Dept.
+Naval Postgraduate School
+Monterey, CA 93940
+
+Jessica Libertini
+Dept. of Applied Mathematics
+Virginia Military Institute
+Lexington, VA 24450
+
+Eleanor Ollhoff
+Mathematician
+Bonn, Germany
+
+Jack Picciuto
+Systems Engineer
+IT Cadre
+Ashburn, VA
+
+Troy Siemers
+Dept. of Applied Mathematics
+Virginia Military Institute
+Lexington, VA 24450
+
+Csilla Szabo
+Skidmore College
+Saratoga Springs, NY
+
+Robert Ulman
+US Army Research Office
+Research Triangle Park, NC
+
+Rui Wang
+Office of Performance Measurement and Evaluation
+New York State Office of Mental Health
+Albany, NY
+
+# Introduction
+
+Imagine that you are at the Louvre in Paris, France, standing and admiring Leonardo da Vinci's Mona Lisa, or maybe Antonio Canova's Psyche Revived by Cupid's Kiss. Suddenly, there is a commotion, perhaps a loud explosion. The alarm sounds and you must evacuate. What do you do?
+
+More than 5,200 teams investigated responses to such a scenario as part of the 2019 $\mathbf{ICM}^{\mathrm{TM}}$. The teams used the iterative mathematical modeling process to develop interdisciplinary solutions that would allow museum emergency planners to explore a range of options, policies, and procedures to evacuate visitors from the museum quickly and safely.
+
+Problem D, "Time to Leave the Louvre," was this year's problem in network science or operations research. Intentionally less structured this year, the problem was designed to give teams considerable latitude in the development of a solution. Since the Louvre is the largest art museum in the world, we expected teams to be unfamiliar with its physical layout and number of visitors. We anticipated that teams would also be uninformed about its current policies and procedures. Even the tasks to be accomplished were not explicitly specified.
+
+After a brief explanation of the judges' expectations, we offer a discussion of the seven Outstanding papers. As you will see, the very best student teams
+
+- appropriately framed and scoped the problem,
+- made valid and justifiable assumptions,
+- applied a mathematical model (or combination of models),
+- thoroughly examined the model,
+- described how the model informs policy, and
+- even investigated portability of their model to other large crowded structures.
+
+# Judges' Criteria
+
+Since the overall objective was to evacuate visitors from the museum quickly and safely, the most critical items that the judges looked for were
+
+- implementation of a dynamic evacuation model representing the flow of visitors during the evacuation, and
+- inclusion of results of the evacuation tests or simulations.
+
+Basically, the judges wanted the student teams to provide a reasonable answer to the question, "How long will it take to evacuate?"
+
+Additionally, to resolve some of the deliberate ambiguity associated with the problem, the judges expected the student teams to describe and implement submodels for many of the key evacuation model inputs, such as a description of the physical space, location of incident(s), quantity and initial distribution of visitors, and locations of exits.
+
+The final item that the judges considered essential to a strong solution was the inclusion of outcome-based policy and procedural recommendations for emergency management of the Louvre.
+
+As is always the case, the professional expertise and experience of this year's judging panel spanned a variety of disciplines, including evacuation research, applied mathematics, mathematical modeling, network science, operations research, and engineering.
+
+From the perspective of the judges, the four items described above served as their minimal expectations. Therefore, the judges looked also for a variety of other elements, including
+
+- identification of potential bottlenecks in the model, which limited movement towards the exits;
+- the injection of emergency personnel into the model;
+- consideration of various threats, including potential incident locations inside the Louvre and their implications on route availability in the evacuation model;
+- human behavioral impacts on the dynamics of the crowd movement; and
+- an example adaptation and implementation of the model to other large, crowded structures.
+
+The inclusion of an evacuation simulation was not specified nor implied; however, many student teams attempted to include a visualization of the model dynamics or tried to apply a commercially-based simulation of the evacuation. In evaluating the use of simulation, the judges looked for a logical continuation from the mathematical model to the simulation. The expectation was that any simulation should be an implementation of the model (or models) previously described by the student team.
+
+In addition to assessing the quality of the model, the judges were looking for papers with excellent exposition in their writing, an aspect that is strongly connected to the interdisciplinary nature of the ICM. Unfortunately, many teams struggle with this part of the process, which is why this issue of this Journal also includes an On Jargon column on exposition [Libertini and Siemers 2019]. While even some of the strongest papers fell short of this goal, the judges were seeking papers with
+
+- a well-written executive summary that included results,
+- model development understandable by a non-specialist, and
+- frequent ties that drew meaningful and convincing connections between the real-world phenomena and the inputs and outputs of the model.
+
+# Discussion of Outstanding Papers
+
+Without a doubt, the judges sought papers that
+
+- applied a recognized modeling process using good science and mathematics,
+
+found measurable results,
+- conducted additional analyses, and
+- communicated the entire process with both clarity and completeness.
+
+A common theme this year was that many of the papers receiving the distinction of Outstanding were well-structured, often using a graphic to describe the method of solution and then centering the rest of the paper around this image.
+
+Additionally, all of the Outstanding papers attempted to answer the primary question, "How long will it take to evacuate?" Some of those evacuation times may not be realistic, but the judges value the effort put forth to develop the model and obtain a solution. Ultimately, seven papers earned the distinction of Outstanding. We offer some highlights from these papers, as well as possible areas of improvement, since even the best papers are never perfect.
+
+# Duke University: "Time to Leave the Louvre: A Computational Network Analysis"
+
+This paper was not only selected as an Outstanding Winner but also received the Leonhard Euler Award and a COMAP Scholarship Award. The judges were extremely impressed by the team's approach to tackle the problem on both the micro-level, modeling individual behavior with an agent-based model, and on the macro-level, by creating a network flow model to determine the maximum flow and evacuation time. Overall, the paper is very well organized and the writing clear and concise, with modeling steps and output coherently laid out for the reader.
+
+The team begins with a very strong executive summary, which many papers lack. In their summary, the team introduces the problem, briefly outlines the model, gives specific model output, and addresses strengths and weaknesses of the model. The Introduction, Background, and Restatement of the Problem sections are average as compared to other papers. However, the Assumptions section is another aspect of the paper that sets it apart from others. The authors state the assumptions and also justify them with references to support their modeling choices. Few papers make this effort in justifying assumptions. The team continues to provide in-text citations throughout the paper, an approach that the judging panel wished all teams would apply.
+
+The team analyzes the problem on two levels, individual behavior simulated with an agent-based model and a larger network flow model. They use NetLogo, a multi-agent programmable modeling environment, to create simulations of individual behavior in small sections of the museum. While several other papers use this approach, this paper stands out in the clear manner in which it steps the reader through the set-up and parameter selection. The team offers a flowchart, shown in Figure 1, as a helpful addition to understand the rules that govern the agent movement.
+
+
+Figure 1. Flowchart describing the Duke University team's agent-based model.
+
+Another strength of the paper is that it considers human behavior and incorporates a "panic parameter" in the agent movement; it also assumes a probability distribution of speeds for agents that is supported by research of an evacuation, and uses distances calculated from Google Maps.
+
+The paper continues by taking a macro-view of the problem. The team creates a directed graph/network to model the flow through the museum structure, with vertices representing sections of the museum and weighted edges representing flows along pathways (hallways, stairs, etc.). The team calculates the maximum flow rate for evacuation of the Louvre and uses it to find the minimum time for evacuation.
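The step from a maximum flow rate to a minimum evacuation time is a one-line calculation: with $N$ occupants and a sustained total outflow of $f$ people per second, no plan can finish faster than $N/f$ seconds. A minimal sketch, with purely illustrative numbers (not figures from the team's paper):

```python
def min_evacuation_time(occupants, max_flow_rate):
    """Lower bound on total evacuation time, in seconds, implied by a
    maximum sustained outflow (people per second) from a max-flow model."""
    return occupants / max_flow_rate

# Illustrative only: 26,000 visitors and a hypothetical outflow of 16 people/s
lower_bound = min_evacuation_time(26_000, 16.0)  # 1625 s, about 27 minutes
```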
+
+The team provides a thorough analysis of bottlenecks using both a NetLogo simulation and the network flow model to identify locations of congestion and later to recommend where to place additional exits. These areas are clearly labeled in figures provided in the paper as shown in Figure 2.
+
+
+Figure 2. NetLogo simulation and bottleneck identification from the Duke University team's agent-based model.
+
+The paper concludes with an excellent discussion of results, list of recommendations, and strengths and weaknesses of the models. Although some recommendations may be difficult to implement (e.g., additional exits and changed doorways), the team also has suggestions for technology and emergency personnel to aid in evacuations.
+
+This was an Outstanding paper overall, but one area for improvement is in the sensitivity analysis. While the paper does perform a sensitivity analysis on some parameters used in NetLogo simulations, it falls short on connecting the analysis back to the key question, "How long will it take to evacuate the Louvre?" A helpful addition would be to report on how the evacuation time changes as the parameters are changed and possibly how additional exits reduce evacuation time.
+
+# Xi'an Jiaotong University:
+
+# "A Systematic Dynamic Route Planning Model"
+
+This paper presents an emergency evacuation model that explores options to evacuate the Louvre while allowing emergency personnel to enter the building as quickly as possible. The team proposes a dynamic framework that accomplishes the required objectives while also considering scenarios, such as the scale or type of visitors. All unfolded subproblems or objectives are articulated in Figure 3, which facilitates an easy-to-follow structure for readers.
+
+Throughout the paper, the team clearly lists the assumptions and defines the mathematical symbols. The models applied are reasonable, with sufficient justifications. Moreover, the team provides bottleneck analysis and sensitivity analysis to support further the soundness of their work. The team provides a graphical depiction and overview of this modeling process and implementation (Figure 4).
+
+The team's comprehensive work involves four major parts.
+
+
+Figure 3. Dissection of the problem as articulated in the paper from Xi'an Jiaotong University.
+
+
+Figure 4. Method towards a solution as articulated in the paper from Xi'an Jiaotong University.
+
+- First, they design multiple human flow-speed models for different locations, including staircases, corridors, and exits. They analyze the influence of individual factors, then apply the Nelson and MacLennan model found in the SFPE Handbook of Fire Protection Engineering [Hurley et al. 2016] to describe the relationship between evacuation speed and flow density, and implement a Stable Evacuation Speed (SES) model to assess the average evacuation speed.
+- Second, they apply graph theory to establish a spatial node graph for the venue. Specifically, the team models the flow activities using graph theory and simplifies the "multi-target to multi-target" problem to a "single-goal to single-goal" scenario by adding two virtual nodes: a "sink point" that represents the external environment or the flow of people leaving the building from the exit, and a "source point" that does not have practical meaning but can reliably represent the remaining number of people. The team then incorporates these into traffic matrix models to carry out numerical calculations.
+- Third, they set the optimization objective so that the exit flow speed at any time for all exits is the maximum that can be achieved, and then dynamically observe the evacuation and readjust the parameters through the use of an iterative process. They adopt the Ford-Fulkerson algorithm
+
+to model the systematic planning of evacuation routes. The designed algorithms are able to handle both general and emergency cases, where a flow matrix and margin matrix can be derived in each iteration by identifying an additional "augmented path" from the source point to the sink point. Finally, the team obtains specific path planning from all traffic matrices.
+
+- In their fourth and final part, they evaluate the adaptability of the proposed framework. The team addresses three types of bottleneck problems (imbalance in channel flow, uneven distribution of exports, and single room export) through quantitative experiments. The adaptive analysis also addresses a wide range of considerations and various types of potential threats.
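For readers unfamiliar with the max-flow technique named in the third step, a minimal Ford-Fulkerson implementation (with BFS augmenting paths, i.e., Edmonds-Karp) might look like the sketch below. The toy graph, room names, and capacities are invented for illustration; they are not the team's actual network.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
    capacity: dict[node][node] -> remaining capacity on each passage."""
    flow = 0
    while True:
        # Breadth-first search for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Collect the path edges and find the bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        # Update residual capacities (reverse edges allow flow cancellation).
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity.setdefault(v, {}).setdefault(u, 0)
            capacity[v][u] += bottleneck
        flow += bottleneck

# Toy graph: a virtual source feeding two rooms that share one exit (sink).
caps = {"S": {"roomA": 3, "roomB": 2},
        "roomA": {"exit": 2}, "roomB": {"exit": 2},
        "exit": {"T": 3}, "T": {}}
print(max_flow(caps, "S", "T"))  # -> 3, limited by the exit's capacity
```

Each iteration corresponds to one "augmented path" and yields updated flow and margin (residual) matrices, as described in the paper.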
+
+Overall, the judges found this team's submission to be complete and the construction of their models to be sound; but the panel also noted several areas that could have been stronger. For example, rather than providing a standard conclusion, the team lists the strengths and limitations of their work, while the policy and procedural recommendations as well as overall conclusions are buried elsewhere in the text. The paper would have been stronger if the team had captured a summary at the end. From a more technical standpoint, in addition to the limitations stated for the model, the judges were concerned that the calculation and storage cost of the method could become expensive and may even be impossible as the number of visitors reaches a certain threshold; the team could have also included an analysis of this complexity.
+
+# Northeastern University in Shenyang, China
+
+This submission by a team from Northeastern University in Shenyang, China is an exemplary demonstration of interdisciplinary modeling complete with a well-organized and constructed write-up of their work.
+
+The paper starts with a concise executive summary that includes the actual quantitative results. Most other submissions neglect to put results in the summary and instead focus on what the team will be doing (techniques, models, etc.). A vital part of any executive summary is to give the reader the actual results of the paper!
+
+Many other teams use multiple models to capture ways to exit the Louvre in an emergency situation. This team includes a diagram of their modeling and thought process to enable the reader to follow more easily their modeling process from start to finish. Shown in Figure 5, this flowchart is a great reference for how the team tackles the problem.
+
+Many other factors make this paper stand out in this year's competition. Specifically, the team does not assume that each of the floors is the same. They account for different layouts, population densities, square footage, and other attributes that need to be modeled differently. For each floor, the team determines the time to evacuate that floor and compares it to a random and naive evacuation to help show the benefit of their model. Comparing a proposed solution to a baseline is a powerful technique to demonstrate the benefits of the model.
+
+
+Figure 5. Flowchart describing the Northeastern University in Shenyang team's methodology to solve the problem.
+
+The team also categorizes disasters into five unique levels of severity, from a smoke alarm all the way to a major explosion or attack. Not all emergencies should be treated the same; the severity factor impacts the time for both the entry of emergency personnel and the evacuation of visitors.
+
+Another way that this paper distinguishes itself is the diversity of recommendations and the methods by which the team introduces them to the reader. While not all recommendations might be feasible for a historic museum like the Louvre, the team offers a myriad of ideas that could help this museum, or any large common area visited by volumes of people on a daily basis, in an emergency evacuation situation. The team uses the easy-to-follow diagram in Figure 6 to address several areas when dealing with a large-scale evacuation of a population with a wide range of physical abilities, together with multilanguage barriers.
+
+The team also discusses how their modeling to evacuate the Louvre could be applied to other structures. Very few other teams took the time to try to apply their techniques to structures other than the Louvre.
+
+This team does an outstanding job of selecting a model, defining a manageable set of variables, modifying and applying the model to fit the scenario, and providing a clear solution to the reader. This team does not get lost in the modeling and forget the problem that they were trying to solve.
+
+
+Figure 6. Ideas presented by the team from Northeastern University in Shenyang, China to reduce overall evacuation time.
+
+Although the judges were impressed by the overall quality of this paper, the panel also noted that it could have been made even stronger with the inclusion of some additional sensitivity analyses.
+
+# Seattle Pacific University:
+
+# "Getting a Move On When the Louvre's Bombed"
+
+The team from Seattle Pacific University creates a network model to determine the placement of exit signs, follows with agent-based modeling for simulation and validation, and complements the agent-based model with a differential equation model on the network capturing the deterministic behavior and providing a theoretical base for their solution.
+
+The team visualizes their work with a heat map of the network based on each location's proximity to an exit, as shown in Figure 7. The team uses the information in order to identify flow and potential bottlenecks in the network.
+
+The team's spectral-graph-theory-based approach and the unique use of the Laplacian set the paper up for success by finding upper bounds on the theoretical conductivity and the potential for bottlenecks.
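As a concrete picture of the spectral idea (a generic sketch, not the team's actual computation): the second-smallest eigenvalue of the graph Laplacian, the algebraic connectivity, is small when a tight bottleneck separates the network. The tiny "barbell" floor plan below is invented for illustration.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the Laplacian L = D - A.
    Small values indicate a bottleneck splitting the graph into two parts."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigenvalues = np.sort(np.linalg.eigvalsh(L))
    return eigenvalues[1]

# Two triangles of rooms joined by a single corridor (edge 2-3): a bottleneck.
barbell = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]:
    barbell[i, j] = barbell[j, i] = 1

# A cycle through all six rooms: no single edge separates the graph as sharply.
cycle = np.zeros((6, 6))
for i in range(6):
    cycle[i][(i + 1) % 6] = cycle[(i + 1) % 6][i] = 1

print(algebraic_connectivity(barbell) < algebraic_connectivity(cycle))  # -> True
```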
+
+The team uses agent-based modeling to simulate the response of the Louvre's emergency personnel, taking into account different arrival times. The team goes a step further to demonstrate how this arrival time depends on the varying population density, as shown in Figure 8. The inclusion of this graph explains why the arrival of emergency personnel is important and how the arrival impacts the larger evacuation scenario.
+
+Lastly, the team uses differential equations to create a differential flow model to determine the flow rate from site to site, given the density of the crowd at both sites. This flow model builds on the adjacency matrix of the network and the agent-based model through the use of a model analogous to a non-Newtonian fluid model. Here their model captures the behavior of the system by solving numerically for the change in density of each room. Considering the density distribution over time for all rooms, the team identifies when rooms are empty, experiencing a bottleneck, or at equilibrium.
+
+
+Figure 7. Seattle Pacific University's heat map of the network model, based on proximity to exits.
+
+
+Figure 8. Arrival time (in seconds) of emergency personnel as a function of density, by the team from Seattle Pacific University.
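The room-to-room density idea can be caricatured with a simple exchange ODE integrated by forward Euler. The two-room layout, rate constant, and linear flow law here are illustrative assumptions, not the team's actual non-Newtonian model.

```python
def simulate_densities(rho, neighbors, rate=0.1, dt=0.1, steps=200):
    """Forward-Euler integration of d(rho_i)/dt = sum_j rate*(rho_j - rho_i)
    over the room-adjacency graph: crowds flow from denser to sparser rooms."""
    rho = list(rho)
    for _ in range(steps):
        change = [0.0] * len(rho)
        for i, js in neighbors.items():
            for j in js:
                change[i] += rate * (rho[j] - rho[i])
        rho = [r + dt * c for r, c in zip(rho, change)]
    return rho

# Three rooms in a line; all of the crowd starts in room 0.
rho = simulate_densities([3.0, 0.0, 0.0], {0: [1], 1: [0, 2], 2: [1]})
# Densities relax toward equilibrium; total number of people is conserved.
print(rho, sum(rho))
```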
+
+The team does an outstanding job of summarizing the results of their simulations by using graphs such as the one in Figure 9. Here they also demonstrate how the type of density (uniform versus partitioned) and number of exits (or blockage of exits) impact the overall evacuation times.
+
+Overall, the paper is very well written; it states the assumptions clearly and nicely folds in existing work. More importantly, the team structures the paper around a single theme—what the evacuation plan means—which makes the work cohesive and extremely relevant.
+
+
+Figure 9. Seattle Pacific University's summary of results.
+
+The paper ends with an appealing counterintuitive conclusion based on their research, that "our methods of identifying bottlenecks indicate that there are situations where a longer path might actually reduce congestion and hence counterintuitively speed up progress."
+
+Their simple and useful recommendations on signage directing visitors toward the exits speak to the relevance of their work, as they are implementable and appropriate for this modern age.
+
+Although the judges were impressed with this paper, they noted that some of the methodologies and the concepts behind the models were not always clear, and the paper could have been even stronger if the team had targeted their explanations to an interdisciplinary audience who may not be familiar with the specific techniques.
+
+# Xi'an Jiaotong Liverpool University: "Escape the Louvre"
+
+While many teams abstract the problem too much in an attempt to cover a more generalized building, the team from Xi'an Jiaotong Liverpool University really focuses on unique characteristics of the Louvre. Also, while most of the papers come to the unsurprising conclusion that stairwells and doorways are bottleneck areas, this paper goes further and looks at the specific Louvre stairwells and analyzes their geometries (rectangular stairs, spiral stairs, double stairs), dimensions, slopes, and carrying capacities.
+
+This paper does an excellent job of exploring sensitivities by considering a variety of cases, including the number of open exits as well as cases where terrorists are present, causing a part of the Louvre to be impassible. The team also makes great use of visualizations to illustrate how they tailor their work specifically to the Louvre, such as is shown in Figure 10.
+
+In considering differing characteristics of the tourists themselves, many teams identify handicapped tourists; but this team goes one (small) step further to note the scarcity of ramps and suggests that more be installed for safety and faster evacuation. Overall, the team's focus on Louvre-specific details is a clear way for the team to demonstrate their ability to translate between the real world and their model.
+
+
+Figure 10. The Louvre's Daru staircase, Xi'an Jiaotong Liverpool University's model of it, and their simulation of flow of people on that staircase.
+
+This team's use of a genetic algorithm (GA) was one of the most novel approaches seen in the final rounds of judging; and as previously noted, the specificity they brought to the problem was also seen as a strength in final judging.
+
+However, even the best papers have room for improvement; in this case, the panel noted that the paper could have been strengthened by clarifying how the components in the GA (genes and chromosomes) aligned with characteristics of the population in the Louvre.
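For readers unfamiliar with the technique, a toy genetic algorithm might encode an evacuation plan as a chromosome whose genes assign each room to one exit. Everything below—the room populations, exit rates, fitness function, and GA parameters—is invented for illustration and is not the team's model.

```python
import random

random.seed(0)

ROOM_POP = [40, 25, 60, 10]        # people per room (hypothetical)
EXIT_RATE = [2.0, 1.5]             # people/second each exit can pass (hypothetical)

def fitness(chromosome):
    """A chromosome assigns each room (gene index) to an exit (gene value).
    Fitness = negative evacuation time = -(slowest exit's load / its rate)."""
    load = [0.0] * len(EXIT_RATE)
    for room, ex in enumerate(chromosome):
        load[ex] += ROOM_POP[room]
    return -max(load[ex] / EXIT_RATE[ex] for ex in range(len(EXIT_RATE)))

def evolve(generations=60, pop_size=30, mutation=0.1):
    pop = [[random.randrange(len(EXIT_RATE)) for _ in ROOM_POP]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ROOM_POP))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mutation:         # mutation: reassign one room
                child[random.randrange(len(ROOM_POP))] = random.randrange(len(EXIT_RATE))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, -fitness(best))  # best room-to-exit assignment and its evacuation time
```

Making the gene/chromosome mapping explicit, as above, is exactly the kind of clarification the judges asked for.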
+
+# Nanjing University: "Analysis of the Optimal Evacuation Plan Based on 2D and 3D Models"
+
+There were several factors that identified this paper from the Nanjing University team as Outstanding, but the main factor was the manner in which it was written. This well-written paper has a cohesiveness and flow that makes it a wonderful example of an interdisciplinary modeling paper. The clear and understandable way in which the team explains how they build their 2D Island-Bridge Model and the figures that they use to illustrate the model are exemplary. Figure 11 clearly illustrates this team's methods to build their model.
+
+In addition to these figures, the explanation of assumptions and how the team accommodates them in their model is coherent and complete. Another aspect of their submission that stood out is consideration of the source/location of the danger and how this affects evacuation time and routes. They also have a realistic discussion of how the "additional" exits could be utilized.
+
+The recommendations from the Nanjing University team are both reasonable and actually implementable, such as installing additional sensors to monitor tourist density and adding a navigation function to the Affluences app to help with emergency situations. These are both recommendations that could be followed by the Louvre staff to assist evacuation efforts.
+
+
+a. Depiction of factors.
+
+
+b. Dimensionality reduction method.
+Figure 11. Nanjing University team's use of figures to explain their model.
+
+While the judges definitely found this paper to be Outstanding, there is always room for improvement. The judges noted that the first two models that this team uses are well constructed and utilized; however, the derived evacuation times appear to be unrealistically short. Additionally, the team develops a third model, a 3-D model; and the judges questioned whether it was a constructive addition to the paper. Teams are encouraged to evaluate whether their models are giving reasonable results and whether the use of an additional model really adds value to the analysis or results.
+
+# University of Electronic Science and Technology of China: "A Model for Determining Evacuation Routes"
+
+The team from the University of Electronic Science and Technology of China uses an ant-colony algorithm to create their Visitor Emergency Evacuation (VEE) model, which they apply to analyze the Louvre's visitor patterns. The team applies sound modeling that allows them to focus on the specifics of certain areas inside the Louvre. They present evacuation results for several different situations.
+
+The team models the daily and weekly variations in the number of visitors by deriving it from the Affluences wait times. They model the variation in visitor density by noting that there are three signature art treasures in the Louvre and assuming that those areas would have three times as many people as other similar spaces. They also model the topology of the Louvre, taking into account the width of passages, which along with the model of the standard person's size, gives rise to the number of people who will fit and be able to flow through the passages and stairways.
+
+The evacuation model starts using ant-colony routing; but then the team modifies this algorithm to take into account the fact that if people see a long queue, they will likely leave and try another exit. The team also uses their model to estimate the width of stairs and other passageways and make the flows proportional to the widths. So the final algorithm has an attraction to follow others, but a repulsion to avoid long queues. The judges were impressed by the creativity and effectiveness of this approach.
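The attraction-repulsion rule described above can be caricatured as a weighted exit choice in which pheromone attracts but queue length repels. The functional form, exponents, and counts below are invented for illustration, not the team's calibrated model.

```python
import random

random.seed(1)

def choose_exit(pheromone, queue_len, alpha=1.0, beta=2.0):
    """Pick an exit with probability proportional to
    pheromone**alpha / (1 + queue)**beta: follow others, but avoid long queues."""
    weights = [p ** alpha / (1 + q) ** beta for p, q in zip(pheromone, queue_len)]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

# Route 500 visitors to two exits; each departure lengthens that exit's queue
# (repulsion) and deposits pheromone on it (attraction).
pheromone, queues = [1.0, 1.0], [0, 0]
for _ in range(500):
    ex = choose_exit(pheromone, queues)
    queues[ex] += 1
    pheromone[ex] += 0.1
# The repulsion term keeps the two queues roughly balanced.
print(queues)
```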
+
+The team also models the entrance of emergency personnel. They assume that different staircases will be used, so as not to create more congestion. They use a Dijkstra routing algorithm for the emergency personnel, but the team does not explain why their existing model was insufficient or why they needed to introduce a whole new model. They do discuss dynamically changing link weights depending on congestion, so the judges thought that perhaps the Dijkstra algorithm might be used to facilitate this. However, the judges should not have to guess the rationale for adding a model; providing the explanation is incumbent on the team.
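The judges' guess about congestion-aware routing can be sketched with a standard Dijkstra shortest-path routine whose edge weights include a congestion penalty. The graph, room names, and penalty function are hypothetical.

```python
import heapq

def dijkstra(graph, start, goal):
    """Standard Dijkstra: graph[u] -> list of (v, weight). Returns (cost, path)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, u, path = heapq.heappop(frontier)
        if u == goal:
            return cost, path
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            if v not in visited:
                heapq.heappush(frontier, (cost + w, v, path + [v]))
    return float("inf"), []

def congested(length, people):
    """Hypothetical penalty: traversal cost grows with current occupancy."""
    return length * (1 + 0.05 * people)

# Emergency personnel entering against the crowd: the short corridor is packed,
# so the effective cheapest route goes around it.
crowd = {("door", "hall"): 60, ("hall", "wing"): 0,
         ("door", "side"): 0, ("side", "wing"): 0}
graph = {"door": [("hall", congested(10, crowd[("door", "hall")])),
                  ("side", congested(25, crowd[("door", "side")]))],
         "hall": [("wing", congested(10, crowd[("hall", "wing")]))],
         "side": [("wing", congested(15, crowd[("side", "wing")]))]}
cost, path = dijkstra(graph, "door", "wing")
print(path)  # -> ['door', 'side', 'wing']: the detour beats the packed corridor
```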
+
+Using these algorithms and the Louvre model, the team calculates optimal evacuation routes, describes the process in detail, and presents the routes in Figure 12, with (orange) arrows indicating the direction of evacuation.
+
+Based on data from the Louvre, they find the maximum number of visitors in the Louvre at any one time is 10,380, for which they calculate that it would take approximately 7 min to evacuate—a value that appears to be too low. They then investigate the effects of opening additional exits and find that doing so saves approximately 1 min in evacuating their maximum-size crowd of 10,380.
+
+They conduct sensitivity analysis on the speed of evacuation as well as on the capacity of the stairs. There is a qualitative discussion of the use of the model on other structures, noting that there may be issues for a tall building with many floors using the same stairwells. The summary gives a good overview of the sensitivity analysis results, and the recommendations are also good; but the judges noted that the team should have taken the opportunity to summarize the results from the rest of the paper.
+
+
+Figure 12. Optimal evacuation route presented by the University of Electronic Science and Technology of China.
+
+# Conclusion and Recommendations for Future Participants
+
+As the judges read through the final papers, they were pleased to see papers that stood out for a variety of reasons, including
+
+- strong expository writing,
+
+- creative and diverse modeling approaches that addressed unique aspects of the problems stemming from architectural issues or human behavior,
+- clever visualizations to help readers digest the results,
- good use of citation practices,
+- ingenious approaches to the sensitivity analyses,
+- attempts to transfer their analyses to other large buildings, and
+- practical recommendations informed by the analyses presented in the paper.
+
+Each of the seven Outstanding papers has its own strengths, and we encourage future teams to learn from these examples. Additionally, each had room for improvement, and many of the imperfections cited were common across the whole field of more than 5,000 submissions for this problem. Therefore, we encourage participants to take note of those areas and keep them in mind as they develop their own solutions.
+
+As always, teams should bring their own strengths to a problem—one of the joys of the ICM is that there is not one single "right" way to model the problem, and the judges are always impressed by the ingenuity that teams bring to the problem.
+
+Also, teams should keep time management in mind; the ICM problems are wide-ranging and involved, and it is important to submit a well-written and complete solution paper.
+
+Lastly, the judges would like to stress the importance of participating in the competition with academic integrity practiced in good faith; in other words, teams are expected to do their own work and make use of proper citations to appropriate references wherever applicable. Sadly, many teams are disqualified for lack of academic integrity, and we would like to see more teams avoid this outcome by making a habit of practicing academic integrity.
+
+# References
+
+Libertini, Jessica, and Troy Siemers. 2019. Exposition. The UMAP Journal of Undergraduate Mathematics and Its Applications 40 (2): 259-265.
+
+Hurley, Morgan J., Daniel T. Gottuk, and John R. Hall, Jr. (eds). 2016. SFPE Handbook of Fire Protection Engineering. 5th ed. New York: Springer.
+
+# About the Authors
+
+Dr. Ralucca Gera is the Associate Provost for Graduate Education and Professor of Mathematics at the Naval Postgraduate School. She is also a researcher in the Center for Cyber Warfare at the Naval Postgraduate School, as well as an associate researcher in the Network Science Center at the United States Military Academy. Her research interests are in graph theory and network science, with applications to the study of the Internet, cyber networks, and natural language processing, sponsored by multiple Dept. of Defense organizations. Dr. Gera is the founder and director of the Academic Certificate in Network Science. She actively participates in network science education of the young generation through teaching short courses for professors and researchers, and organizing workshops for teachers. She has published over 50 journal and conference papers, one book chapter, and one edited book in mathematics and network science.
+
+
+
+Michelle L. Isenhour is an Assistant Professor in the Operations Research Dept. at the Naval Postgraduate School in Monterey, CA. She has an M.S. in Applied Mathematics from Western Michigan University and a Ph.D. in Computational Science and Informatics from George Mason University in Fairfax, VA, where she researched pedestrian and crowd modeling in the Center for Computational Fluid Dynamics under Dr. Rainald Lohner. Her research focuses on microscopic modeling and simulation of pedestrians during emergency scenarios, with a particular emphasis on initial response.
+
+Dr. Jessica M. Libertini holds advanced degrees in both engineering and applied mathematics. She has served as Senior Engineer at General Dynamics, National Research Council Fellow at West Point, Science & Technology Policy Fellow in the Office of the Secretary of Defense, and currently Associate Professor at Virginia Military Institute. She became involved with the MCM/ICM in 2008, first serving as a team advisor, then as a triage judge and commentary writer, and now as a head judge.
+
+
+
+
+
+Eleanor Ollhoff studied at the University of Tennessee and has a background in undergraduate mathematics instruction and pedagogy and pure mathematics, specifically low-dimensional topology and differential geometry. She taught in the Mathematics Dept. at Appalachian State University, the University of Tennessee, and the U.S. Military Academy. She started as an ICM triage judge in 2014, and has been on the final judging panel for three of the past four years.
+
+Dr. Jack Picciuto has over 10 years of experience judging mathematical modeling competitions, including the ICM, MCM™, HiMCM™, and the Moody's Math Challenge. He previously served on the mathematics faculty at the U.S. Military Academy, and since his retirement from the U.S. Army, he has worked as a senior systems engineer and consultant in the private sector.
+
+
+
+
+
+Dr. Troy Siemers has a Ph.D. in Mathematics from the University of Virginia. He has worked at the Virginia Military Institute in the Dept. of Applied Mathematics since 1999 and since 2010 as department head. He has been a triage judge for ICM for several years and a finals judge for two years, and has taught the VMI senior capstone course based on preparing for the ICM contest. He has conducted research with faculty in Economics and Business, Physics, Psychology, Chemistry, and Applied Mathematics.
+
+Csilla Szabo is a Teaching Professor of Mathematics at Skidmore College in Saratoga Springs, NY. She received a bachelor's degree in mathematics from Western New England University in Springfield, MA in 2004 and her Ph.D. in mathematics from Rensselaer Polytechnic Institute in Troy, NY in 2010. Csilla's research interests include mathematical biology and network science. Prior to Skidmore, Csilla was a visiting assistant professor at the U.S. Military Academy and at Bard College. She participated in the MCM/ICM as a student and more recently as a final judge.
+
+
+
+Robert Ulman received his B.S. from Virginia Tech, M.S. from Ohio State University, and Ph.D. from the University of Maryland, all in electrical engineering. He worked as a communication systems engineer at the National Security Agency from 1987 to 2000. Since then, he has been at the Army Research Office, where he worked as the program manager in wireless communications networking. More recently, he has been building a new program in Network Science and Intelligent Agents, engaging European scientists, and facilitating collaboration with U.S. laboratory scientists.
+
+
+
+Rui Wang is a research and data scientist at the New York State Office of Mental Health, with academic background in biostatistics and computer science. She has 17 years of progressive experience of developing and managing analytics deliverables for healthcare programs. Her expertise includes exploratory data analysis, data mining, machine learning, statistical modeling, and forecasting.
\ No newline at end of file
diff --git a/MCM/2019/E/1902029/1902029.md b/MCM/2019/E/1902029/1902029.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf47a02be451e8ff63f8dd05d12ce79eca8125a2
--- /dev/null
+++ b/MCM/2019/E/1902029/1902029.md
@@ -0,0 +1,634 @@
+Team Control Number: 1902029
+
+Problem Chosen: E
+
+# 2019 Mathematical Contest in Modeling (MCM) Summary Sheet
+
+# Land counts! Better Use & Lower Cost
+
+# Summary
+
+Land use change is a mirror of the human-land relationship, and it most directly reflects the impact of human activities on the environment. Estimation of ecosystem service value based on land use/cover change has become a focus of environmental economics research. Our paper selects developing countries, which experience more land use change and cover a wide area, as the research object. We construct a land ecosystem service value evaluation model based on the unit-area value equivalent factor method, realizing a dynamic, comprehensive assessment on a spatial scale of the value of 14 ecosystem types and 11 types of ecological service functions.
+
+We use the model to calculate the ecosystem service value of 14 regions in China and verify the model's validity by comparing the results with expert data. After that, we select China's Yangtze River Delta and the Huangguoshu Natural Scenic Spot as examples to analyze the changes in their ecosystem service value over time. Based on the calculated ecological cost, a cost-benefit analysis model is introduced to study the changes in real economic costs.
+
+In order to give more reference to policy makers, we introduce a multi-objective nonlinear programming model to study the optimization of land use options under different regional development principles. Taking Jiangsu Province and Heilongjiang Province as examples, we study the sensitivity of ESV and GDP to various land use area changes.
+
+We also explore how the model's results change over time. Seasonal variation is analyzed on a monthly scale within a year, and a grey prediction model is established on an annual scale to explore short-term trends.
+
+In general, although further improvements are needed, the evaluation system constructed in this paper provides a relatively comprehensive plan for the spatial and temporal dynamic assessment of ecosystem service value, thus providing a scientific basis for natural asset assessment and ecological compensation.
+
+Key words: ecosystem services; value methods; value equivalence factors; dynamic assessment.
+
+# Contents
+
+# 1 Introduction
+
+1.1 Background
+1.2 Our work
+1.3 Notation
+
+# 2 Assumptions and Justifications
+
+# 3 A model for ecological services valuation
+
+3.1 Basic valuation method
+3.2 Computing VC
+3.3 The Model
+
+3.3.1 Ecosystem classification
+3.3.2 Evaluation framework
+3.3.3 Standard Equivalent and Base Equivalent
+
+3.4 Sensitivity Index
+3.5 Implementation
+
+3.5.1 Calculation and results
+3.5.2 Consistency test
+3.5.3 Evaluation of ecological service value
+
+# 4 The environmental costs of land use projects
+
+4.1 Large Project - Yangtze River Delta
+
+4.1.1 Project Description
+4.1.2 Adjustment of the value of ecological services
+4.1.3 Calculating ESV
+4.1.4 Sensitivity analysis
+4.1.5 Advices
+
+4.2 Small project - Huangguoshu Scenic Area
+
+4.2.1 Project Description
+4.2.2 Adjustment of the value of ecological services
+4.2.3 Calculating ESV
+4.2.4 Sensitivity analysis
+4.2.5 Advices
+
+4.3 A cost benefit analysis of land use development projects
+
+# 5 Land Use Project Plan Assessment
+
+5.1 Multi-objective nonlinear programming model
+5.2 Plan Assessment
+
+5.2.1 Impact of area change
+5.2.2 Policy evaluation
+
+# 6 Change of time
+
+6.1 Seasonal change
+6.2 Annual change
+
+7 Strengths and Weaknesses
+
+7.1 Strengths
+7.2 Weaknesses
+
+8 Appendix
+
+References
+
+# 1 Introduction
+
+# 1.1 Background
+
+With population growth and economic development, the combination of human and natural factors has caused rapid ecological change, depletion of resources, shortage of land, and degradation of the environment. An important reason for these problems is that people do not have a deep understanding of the ecological value of land use.
+
+In 1995, the International Geosphere-Biosphere Programme (IGBP) and the Human Dimensions of Global Environmental Change Programme (HDP) jointly proposed the Land-Use and Land-Cover Change (LUCC) research programme. Since then, the ecological impact of land use change has come to be widely recognized. Land use change is the result of the continuous adjustment of the purpose of land use. Therefore, it is of great significance to study the value of land ecosystem services, to explore the economic benefits of ecosystems from the perspective of value, and to provide a scientific basis for the planning of social development. Among the various measurement methods, ecosystem service value assessment is an effective way to measure the environmental impact of land use.
+
+Ecosystem service functions are the utility provided by ecosystems to meet and sustain human life needs. Costanza et al. (1997) divided ecosystem services into 17 types and estimated them in monetary terms [1]; the 2005 Millennium Ecosystem Assessment report divides them into four categories [2]. On this basis, this paper proposes an evaluation method for the value of ecosystem services and conducts a series of empirical studies.
+
+# 1.2 Our work
+
+First, based on the ecosystem service value accounting model proposed by Costanza, we construct a new ESV accounting model to measure the economic value of ecosystem services, and show how to use ESV, in combination with local GDP and area, to analyze a region's ecosystem service functions. Using the constructed ESV index, we select 14 regions in China, calculate their ESV levels, and compare the results with data measured by experts. The results show that our model is effective for measuring land use projects of different scales.
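In Costanza-style accounting, the basic computation behind such a model is ESV = Σk Ak · VCk over land types (see the notation table: Ak is the area of ecosystem k, VCk its value coefficient per unit area). A minimal sketch follows; the areas and coefficients are invented for illustration, not the paper's calibrated values.

```python
# Minimal ESV accounting sketch: ESV = sum over land types of
# area * value coefficient per unit area.
# Areas (ha) and VC values (yuan/ha/yr) below are invented placeholders.
def esv(areas, vc):
    """areas[k]: area of ecosystem type k; vc[k]: value coefficient of type k."""
    return sum(areas[k] * vc[k] for k in areas)

areas = {"forest": 1200.0, "grassland": 800.0, "cropland": 1500.0, "water": 300.0}
vc = {"forest": 19334.0, "grassland": 6406.7, "cropland": 6114.3, "water": 40676.4}

total = esv(areas, vc)
print(f"ESV = {total:.1f} yuan/yr")
```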
+
+Based on the model, we conducted a case study. We selected China's Yangtze River Delta as a representative of large-scale projects and the Huangguoshu Natural Scenic Area as a representative of small-scale projects, analyzed the changes in their ecosystem service value with land use, and conducted sensitivity tests. Then, taking ecological costs into account, we carry out a cost-benefit analysis of each project, compare its real cost with its economic cost, and propose criteria for project evaluation.
+
+In order to give more reference to policy makers, we have further introduced a multi-objective nonlinear programming model to study the optimization of land use options under different regional development principles. We use Jiangsu Province and Heilongjiang Province as examples to illustrate how policy makers should weigh the relationship between economic development and ecological well-being.
+
+Finally, since our model is calculated on a yearly basis, we incorporate temporal change into the model. To capture seasonal differences in ecosystem service functions over the course of a year, we introduce dynamic equivalents that change with the month. To capture the future trends of different ecosystem services over time, we use the GM(1,1) model to predict short-term development and to project long-term development.
+
+# 1.3 Notation
+
+Table 1: List of Notations
+
+| Symbol | Definition |
| $ESV$ | Ecosystem service value |
| $A_k$ | Area of ecosystem k |
| $VC_k$ | Ecological value coefficient per unit area of ecosystem k |
| $E_{ak}$ | Ecological value correction coefficient of ecosystem k in region a |
| $D$ | Standard equivalent factor of ecosystem service value |
| $CS$ | Coefficient of Sensitivity (sensitivity of ESV to VC) |
| $ESC$ | Ecological service capacity |
| $GDP$ | Gross Domestic Product |
| $I_{ij}$ | Share of the value of land type j among all land types in year i |
+
+# 2 Assumptions and Justifications
+
+- Ecosystem services are useful to humans, and they are scarce. Ecosystem services have become a scarce resource due to the destruction of the ecological environment by human economic development. At the same time, more and more people recognize the important role that ecosystem services play in human survival. Based on this assumption, ecosystem services have utility value.
+- Each unit area of an ecosystem serves as a functional unit providing ecosystem services and products. Natural disasters, bad weather and other factors can impair normal ecological functions and reduce ecological value. Since this cost of destruction is difficult to measure, this paper does not consider the damage to ecology caused by major disasters. This assumption therefore provides a simplified but practical approach to ecosystem service valuation.
+
+- The basic model uses a static assessment method that does not take into account the temporal changes in the ecosystem. In the short term, the ecology is basically in balance and the value of ecological services is stable. In an improved model that considers time, this assumption will no longer be valid. The changes in ecological values within a year and between years will be analyzed separately later in this paper.
+- The ecosystems in the study area are well developed. In the normal evolution of nature, regional ecology is diverse. Based on this assumption, there are enough land types in the study area to provide a basis for our estimation of the value equivalent table.
+
+# 3 A model for ecological services valuation
+
+# 3.1 Basic valuation method
+
+The human socioeconomic system and natural ecosystems co-exist everywhere. To accurately assess the total economic products and services provided by all human activities, a large and complex statistical system has been established to estimate the gross domestic product (GDP). [3]
+
+The ecosystem service value accounting model proposed by Costanza et al. [1] remains the most widely used method of ecosystem value accounting. The evaluation method of this paper is based on this model with partial improvements. The calculation rests on an equivalent factor: if the monetary value of the different ecosystem services provided by a unit of land area can be identified, the total ESV of given ecosystems and regions can be quantified from the land area of each ecosystem. The formula is as follows:
+
+$$
+ESV = \sum_{k} A_{k} \cdot VC_{k} \tag {1}
+$$
+
+where ESV is the value of ecological services; $A_{k}$ is the area of ecosystem k; $VC_{k}$ is the ecological value coefficient per unit area of ecosystem k.
+
+In order to make the ecosystem service value equivalence suitable in different regions and more accurately estimate the value of regional ecosystem services, we introduce the ecological service value equivalent correction coefficient $E_{ak}$ :
+
+$$
+E_{ak} = N_{ak} / N_{k} \tag {2}
+$$
+
+where $N_{ak}$ refers to the eco-environmental quality index of ecosystem type k in region a, and $N_{k}$ represents the annual national average of that index for such ecosystems. (The eco-environmental quality index is defined in the ISO Environmental Quality Manual [4].) Thus the model is adjusted:
+
+$$
+ESV_{a} = \sum_{k} A_{k} \cdot VC_{k} \cdot E_{ak} \tag {3}
+$$
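As a concrete illustration of formulas (1)-(3), the following sketch computes an unadjusted and a regionally adjusted ESV for a few ecosystems. All areas, value coefficients and correction factors here are made-up illustration values, not data from this paper.

```python
# Minimal sketch of formulas (1)-(3). Areas (hm^2), value coefficients
# VC_k (yuan/hm^2) and corrections E_ak are hypothetical examples.

def esv(areas, vc, correction=None):
    """Regional ESV: sum over ecosystems k of A_k * VC_k * E_ak."""
    if correction is None:
        correction = {k: 1.0 for k in areas}   # formula (1): no adjustment
    return sum(areas[k] * vc[k] * correction[k] for k in areas)

areas = {"woodland": 1200.0, "grassland": 800.0, "waters": 150.0}
vc    = {"woodland": 37979.0, "grassland": 12584.0, "waters": 79956.0}
# E_ak = N_ak / N_k, formula (2): regional eco-environmental quality
# relative to the national average for the same ecosystem type.
e_ak  = {"woodland": 1.10, "grassland": 0.95, "waters": 1.00}

print(round(esv(areas, vc)))        # unadjusted ESV, formula (1)
print(round(esv(areas, vc, e_ak)))  # regionally adjusted ESV, formula (3)
```

The correction leaves each term's sign unchanged; it only scales the contribution of each ecosystem by its regional quality ratio.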
+
+# 3.2 Computing VC
+
+Following the natural capital research of Costanza et al. [1], the Equivalent Factor Method distinguishes the different types of ecosystem services, constructs value equivalents for the various service functions of different ecosystem types against a quantifiable standard, and then combines them with the distribution area of each ecosystem to perform the assessment [6].
+
+$$
+VC_{k} = \sum_{i} VC_{ki} \tag {4}
+$$
+
+$VC_{ki}$ denotes the ecological value coefficient per unit area of the i-th service type of ecosystem $k$ .
+
+In this paper, the equivalent factor method is improved along two axes. On the horizontal axis, the classification of land use types in the existing equivalent factor table is enriched. On the vertical axis, a value-classification equivalent factor method based on the service values of the different ecosystem classifications is proposed:
+
+$$
+VC_{k} = \sum_{j} VC_{kj} \tag {5}
+$$
+
+$VC_{kj}$ denotes the ecological value coefficient per unit area of the j-th value type of ecosystem k.
+
+# 3.3 The Model
+
+# 3.3.1 Ecosystem classification
+
+An ecosystem is the natural complex formed by the interaction and interdependence between biomes and their living environment within a certain geographical area. Based on the classification of land use and vegetation types, this study identified six first-level ecosystem types (cultivated land, forest land, grassland, water area, residential construction land, unused land) and 14 secondary ecosystem types, so as to comprehensively cover the major ecosystem types. Marine ecosystems are not included in this study due to the lack of systematic research data on the functions and values of marine ecosystem services.
+
+# 3.3.2 Evaluation framework
+
+Based on the MA's ecosystem service value assessment framework, and integrating the research of Costanza, Turner, de Groot, Dai Junhu, etc., we build the assessment framework shown in figure 1. The main workflow consists of four steps:
+
+
+Figure 1: Evaluation framework
+
+# 1. Determination of the assessment target and the scope of study
+
+According to the MA report, "ecosystem services" is short for ecosystem products and services, referring to all the benefits that humans derive from various ecosystems [7]. Ecosystem services and ecosystem functions do not necessarily present a one-to-one correspondence [1].
+
+# 2. Determination of ecosystem service classification system
+
+MA divides ecosystem services into four categories:
+
+- Support Services (services essential for the production of all other ecosystem services)
+- Supply Services (various products obtained from ecosystems)
+- Regulation Services (various benefits obtained from the regulation of ecosystem processes)
+- Cultural Services (various non-material gains obtained from ecosystems)
+
+# 3. Value assessment of various ecosystem services
+
+Supply services, regulation services, and cultural services often have a relatively direct short-term impact on humans, while support services are the backbone of these three types. Therefore, we do not evaluate support services separately, to avoid double-counting the value of ecosystem services.
+
+# 4. Classification and aggregation of values
+
+Considering the research results of Qing Yang, Gengyuan Liu, etc. [5], and following the principle of non-repetition, we divide the value into three categories; the total value of the ecosystem is the sum of the three.
+
+(a) Direct value represents products or services of an ecosystem that can be directly consumed by humans, including food supply, water supply, and raw material/energy supply.
+(b) Indirect value represents the value of the ecosystem that does not directly enter the production and consumption process but provides the necessary conditions for normal production and consumption, including gas regulation, hydrological regulation, soil conditioning and purification of the environment.
+(c) Existence value represents the indirect services brought about by the very existence of ecosystems, including biodiversity, climate regulation, aesthetic landscapes and cultural education.
+
+# 3.3.3 Standard Equivalent and Base Equivalent
+
+# Standard Equivalent
+
+The standard equivalent (D) is the equivalent factor of the ecosystem service value of a standard unit ecosystem. This paper refers to the calculation method of Xie Gaodi et al. [8], and takes the net profit of grain production per unit area of farmland ecosystem as the standard equivalent. The grain yield value of farmland ecosystems is mainly calculated based on the three main food products of rice, wheat and corn. The formula is as follows:
+
+$$
+D = S_{r} \times F_{r} + S_{w} \times F_{w} + S_{c} \times F_{c} \tag {6}
+$$
+
+$S_{r}$, $S_{w}$ and $S_{c}$ denote the shares of the planted areas of rice, wheat and corn, respectively, in the total planted area of the three crops. $F_{r}$, $F_{w}$ and $F_{c}$ denote the national average net profit per unit area of rice, wheat and corn, respectively.
+
+According to China Statistical Yearbook 2016 [9] and formula (6), the standard equivalent value applicable to China in 2014 is $1827.62 \, \text{yuan/hm}^2$ .
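Formula (6) is a simple area-share-weighted average, sketched below. The shares and net profits here are made-up illustration values; the paper's actual 2014 figure (1827.62 yuan/hm²) comes from the China Statistical Yearbook data.

```python
# Sketch of formula (6): the standard equivalent D as the area-share
# weighted net profit of rice, wheat and corn. The shares and profit
# figures below are hypothetical, not the Yearbook data.
shares  = {"rice": 0.40, "wheat": 0.35, "corn": 0.25}        # S_r, S_w, S_c
profits = {"rice": 2100.0, "wheat": 1500.0, "corn": 1900.0}  # F (yuan/hm^2)

# the three shares are defined relative to the three crops' total area
assert abs(sum(shares.values()) - 1.0) < 1e-9

D = sum(shares[c] * profits[c] for c in shares)
print(D)  # weighted net profit per hm^2 of standard farmland
```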
+
+# Base Equivalent
+
+The base equivalent refers to the value coefficients of the various service functions per unit area of different ecosystem types, and reflects the annual average value level of the various ecosystem service functions of different ecosystems. Based on the research results of Zhang Xingyu et al. [6] and the China Statistical Yearbook 2016 [9], we constructed the base equivalents for the different ecosystem types and value categories, shown in figure 2.
+
+
+Figure 2: The basic equivalents of different ecosystem types and different value categories
+
+# 3.4 Sensitivity Index
+
+The Sensitivity Index (CS) is used to determine the sensitivity of the ESV to VC, i.e., to test whether the ecosystem service value coefficient of each ecosystem is suitable for the ecosystem being assessed. CS is the percentage change of ESV caused by a one-percent change of VC. If $CS > 1$, the ESV is elastic with respect to VC; if $CS < 1$, the ESV is inelastic with respect to VC. The greater the ratio, the more critical the accuracy of VC is to the estimated ESV. CS is calculated as follows:
+
+$$
+CS = \frac {\left(ESV_{j} - ESV_{i}\right) / ESV_{i}}{\left(VC_{jk} - VC_{ik}\right) / VC_{ik}} \tag {7}
+$$
+
+where VC is the amount of ecological service value per unit area of land, i represents the initial state, j represents the adjusted state, and k is the ecosystem type.
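The later sensitivity analyses perturb one ecosystem's coefficient by ±50% and apply formula (7). A sketch of that computation, with hypothetical areas and coefficients:

```python
# Sketch of formula (7): sensitivity index CS from perturbing one
# ecosystem's value coefficient. Areas and VC values are hypothetical.
areas = {"waters": 150.0, "woodland": 1200.0}       # hm^2
vc    = {"waters": 79956.0, "woodland": 37979.0}    # yuan/hm^2

def total_esv(vc_table):
    return sum(areas[k] * vc_table[k] for k in areas)

def cs(k, delta=0.5):
    """Relative change in total ESV per relative change in VC_k."""
    vc_adj = dict(vc)
    vc_adj[k] = vc[k] * (1 + delta)     # e.g. the +50% adjustment
    base = total_esv(vc)
    return ((total_esv(vc_adj) - base) / base) / delta

print(round(cs("waters"), 3))
```

Note that in this linear sketch CS for ecosystem k reduces to k's share of total ESV, which is necessarily below 1; this mirrors why the sensitivity indices reported later in the paper all stay below 1.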
+
+# 3.5 Implementation
+
+In this section, we will use the above model to measure the value of ecological services in 14 regions of China, and compare the calculated results with authoritative data to test the validity of the model. In addition, we will measure and analyze the ecological service capacity of different provinces in two ways.
+
+# 3.5.1 Calculation and results
+
+According to the standard equivalent and base equivalent tables above, the unit-area value table for the different land use types can be calculated, wherein the unit value of each first-level land use type is taken as the average of the values of its secondary land types.
+
+Among the 14 regions analyzed, the value of ecological services varies greatly among provinces.
+
+| Ecosystem | Cultivated Land | Woodland | Grassland | Construction Land | Waters | Unutilized Land | ESV | Cultivated Land Contribution Rate |
| Beijing | 21.24 | 437.48 | 30.77 | -67.23 | 12.29 | 0.00 | 434.55 | 4.89% |
| Hebei | 640.11 | 2719.87 | 1001.92 | -423.46 | 133.85 | 2.17 | 4074.46 | 15.71% |
| IM | 908.84 | 13737.21 | 21409.66 | -356.60 | 333.02 | 203.39 | 36235.51 | 2.51% |
| Liaoning | 488.35 | 3321.25 | 391.41 | -298.32 | 149.79 | 0.21 | 4052.69 | 12.05% |
| Jilin | 686.54 | 5235.87 | 245.12 | -227.38 | 113.66 | 10.78 | 6064.59 | 11.32% |
| Heilongjiang | 1555.99 | 12905.30 | 733.92 | -339.69 | 342.12 | 25.63 | 15223.26 | 10.22% |
| Jiangsu | 448.74 | 151.91 | 14.11 | -445.87 | 468.73 | 1.09 | 638.72 | 70.26% |
| Anhui | 576.01 | 2213.79 | 26.48 | -384.77 | 285.67 | 0.04 | 2717.23 | 21.20% |
| Jiangxi | 302.58 | 6105.75 | 99.98 | -227.58 | 197.66 | 0.07 | 6478.45 | 4.67% |
| Shandong | 746.77 | 877.69 | 159.43 | -556.85 | 254.26 | 3.10 | 1484.40 | 50.31% |
| Henan | 796.25 | 2044.45 | 234.13 | -522.78 | 159.05 | 0.28 | 2711.39 | 29.37% |
| Hubei | 514.92 | 5083.33 | 101.77 | -314.62 | 323.73 | 0.11 | 5709.25 | 9.02% |
| Hunan | 407.28 | 7221.26 | 172.41 | -317.40 | 238.43 | 0.02 | 7721.99 | 5.27% |
| Sichuan | 660.97 | 13101.49 | 4435.31 | -371.46 | 162.66 | 12.49 | 18001.45 | 3.67% |
+
+Figure 3: The ESV of 14 provinces in China (billion yuan)
+
+Among them, the Inner Mongolia Autonomous Region, Heilongjiang Province, and Sichuan Province have higher ecological service values, with ecological value measured in currency exceeding 100 billion yuan, mainly because the areas of cultivated land, woodland and grassland in these three provinces are relatively large.
+
+In Jiangsu Province, although the cultivated land area is at a medium level among the 14 provinces and municipalities, the scarcity of forest land and grassland resources gives Jiangsu a cultivated-land contribution rate to ESV of more than $70\%$. According to the collected data, both the forest land area and the grassland area of Jiangsu Province were among the lowest of the 14 provinces analyzed.
+
+# 3.5.2 Consistency test
+
+The article "Costanza model based on the evaluation of the ecological service value of China's major grain-producing areas" uses the Costanza model to measure the ecological service value of the same 14 provinces. We compare our calculation results with these authoritative data in the scatter plot in figure 4.
+
+Among the provinces we selected, the area varies from 167,000 square kilometers to 1.18 million square kilometers. The figure shows that the trends of the ESV levels calculated by the two methods are consistent, indicating that our model is suitable for both small and large areas. Therefore, it can be concluded that our model can effectively and objectively assess an ecosystem's service value.
+
+
+Figure 4: Scatter plot comparing the results of two calculations
+
+# 3.5.3 Evaluation of ecological service value
+
+As shown in 3.5.1, the value of ecological services differs considerably among provinces, mainly because of the large differences in land area between them. In order to evaluate the service capacity of an ecosystem more objectively, this paper proposes two ways to eliminate the impact of total area on the ecological value assessment.
+
+1. Dividing the value of the ecological services of each province by the area of the province gives the value of ecological services per unit area.
+
+$$
+ESC_{i} = ESV_{i} / A_{i} \tag {8}
+$$
+
+2. Dividing the value of the ecological services of each province by the province's GDP gives the value of ecological services per unit of GDP.
+
+$$
+ESC_{i} = ESV_{i} / GDP_{i} \tag {9}
+$$
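The two normalizations (8)-(9) can be sketched together. The ESV, area and GDP figures below are hypothetical illustration values (ESV and GDP in billion yuan, area in thousand km²), not the paper's provincial data.

```python
# Sketch of formulas (8)-(9): ecological service capacity per unit
# area and per unit GDP. All numbers are hypothetical placeholders.
provinces = {
    # name: (ESV, area, GDP)
    "A": (640.0, 103.0, 7000.0),     # small ESV, large economy
    "B": (15200.0, 473.0, 1500.0),   # large ESV, small economy
}

for name, (esv_val, area, gdp) in provinces.items():
    esc_area = esv_val / area   # formula (8)
    esc_gdp  = esv_val / gdp    # formula (9)
    print(name, round(esc_area, 2), round(esc_gdp, 2))
```

The two indices can rank the same provinces very differently, which is exactly the contrast drawn between the "By Area" and "By GDP" analyses below.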
+
+# By Area
+
+The ecological service value per unit area of Jiangsu Province and Shandong Province is less than 1 million $yuan/km^2$, indicating weak ecological service capacity. In future development, the two provinces should place greater emphasis on the environmental cost of project construction and promote the sustainable development of the region. The ecological service value per unit area of Jiangxi Province, Hunan Province and Sichuan Province is higher than 3.5 million $yuan/km^2$, indicating strong ecological service capacity; such regions are suitable for habitation and conducive to ecological balance and sustainable development.
+
+Figure 5: Ecological service capacity calculated by unit area
+
+Figure 6: Ecological service capability classification
+
+# By GDP
+
+The Inner Mongolia Autonomous Region and Heilongjiang Province have the highest ecological service capacity, with values of 2.00 and 0.99 respectively, but their corresponding GDP is relatively low.
+
+Figure 7: Ecological service capacity calculated by unit GDP
+
+The lowest ecological service capacities are found in Jiangsu Province, Beijing Municipality and Shandong Province, with accounting values of 0.01, 0.02 and 0.02 respectively. Among them, the GDP levels of Shandong Province and Jiangsu Province are very high, and their economic development is at the forefront of the country. In the process of economic construction, the ecology is inevitably damaged. In future development, the two provinces should weigh economic value against ecological value, restoring the service functions of the ecosystem and improving the environment while maintaining a certain level of economic development.
+
+Beijing's ecological service capacity and GDP are both at a relatively low level. A possible reason is that Beijing, as the political and cultural center of China, has certain peculiarities: in resource allocation, more consideration is given to politics, law and cultural education, so investment in ecology and economic development is relatively insufficient.
+
+# 4 The environmental costs of land use projects
+
+# 4.1 Large Project - Yangtze River Delta
+
+# 4.1.1 Project Description
+
+The Yangtze River Delta is located on the eastern coast of the Chinese mainland and has diverse surface cover. The expansion of urban construction land and rapid urbanization lead to rapid changes in land types [11]. It is therefore important to study the environmental costs brought about by land use change here.
+
+We obtained land use data of the region from 2010 to 2015 from the Ministry of Natural Resources website. (See Appendix) In the past five years, the area of forest land, grassland, waters and cultivated land in the Yangtze River Delta has been decreasing, and construction land has increased significantly.
+
+# 4.1.2 Adjustment of the value of ecological services
+
+In combination with local geography and existing land types, we have made the following adjustments.
+
+Table 2: Ecosystem Service Value(yuan/hm²) of Land Use Type in Yangtze River Delta
+
+| Woodland | Grassland | Farmland | Wetland | Waters | Unutilized land |
| 37979.23 | 12584.42 | 12000.82 | 108001.3 | 79956.43 | 734.44 |
+
+# 4.1.3 Calculating ESV
+
+We then calculate the ESV of the Yangtze River Delta from 2010 to 2015 (see Appendix). Over the five years, the ESV of the Yangtze River Delta decreased year by year, from 560.3 billion yuan in 2010 to 542.7 billion yuan in 2015. Based on these results, we carry out the following analysis.
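The yearly calculation multiplies Table 2's value coefficients by each year's land-use areas and compares totals. The area figures below are hypothetical placeholders (the paper's real areas are in its appendix), so the resulting decline does not reproduce the paper's 17.6 billion yuan figure.

```python
# Sketch of the Section 4.1.3 calculation: yearly ESV from Table 2's
# value coefficients (yuan/hm^2) and land-use areas (hm^2). The areas
# below are hypothetical illustration values.
vc = {"woodland": 37979.23, "grassland": 12584.42, "farmland": 12000.82,
      "wetland": 108001.30, "waters": 79956.43, "unutilized": 734.44}

area_2010 = {"woodland": 9.0e6, "grassland": 1.2e6, "farmland": 1.4e7,
             "wetland": 1.1e6, "waters": 1.6e6, "unutilized": 2.0e5}
area_2015 = {"woodland": 8.6e6, "grassland": 1.1e6, "farmland": 1.35e7,
             "wetland": 1.0e6, "waters": 1.5e6, "unutilized": 2.0e5}

def esv(area):
    return sum(area[k] * vc[k] for k in vc)

decline = esv(area_2010) - esv(area_2015)
print(f"ESV decline 2010-2015: {decline / 1e9:.1f} billion yuan")
```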
+
+# Analysis by value
+
+The Yangtze River Delta has a complex ecosystem and a large population density. We take into account the direct value, indirect value and existence value of all ecosystems in the calculation, and obtain the changes of the three types of value over time (see Appendix). According to the results, all three values show a downward trend, and the reductions in indirect value and existence value are proportionally greater.
+
+We can see from figure 8 that although the expansion of urban construction reduces all three types of value simultaneously, the indirect value and the existence value decline at a faster rate as the environment degrades. In real-world decision-making, however, land managers often consider only the decline in direct value, ignoring the more serious changes in indirect and existence value.
+
+# Analysis by various ecosystems
+
+Using the ratio of the various ecosystem ESV in 2015 and 2010, the radar map is drawn in figure 9.
+
+The ESV of the aquatic ecosystem is the highest, followed by that of the grassland ecosystem. Over the five years, the ESV of the waters and woodland ecosystems declined the most. The reason may be that the waters and forests of the Yangtze River Delta are vast in area and have strong ecological conservation functions, which are of great significance for regulating the ecology of the region. As a result, the dramatic reduction in waters and woodland has led to a significant decline in ecosystem services across the region.
+
+# 4.1.4 Sensitivity analysis
+
+Each land use type's ecological value coefficient is adjusted by $50\%$ to analyze the change in ecosystem service value and its sensitivity to the value coefficient. The calculation results are shown in the appendix. According to the results, the sensitivity index of the waters is about 0.59, followed by forest land with a sensitivity index of 0.40; the rest are between 0.1 and 0.4. Taken together, the sensitivity index of ESV to VC is less than 1, indicating that our model is valid.
+
+Figure 8: Rate of Decline of Three Values
+
+Figure 9: The Ratio of ESV of Various Ecosystems
+
+# 4.1.5 Suggestions
+
+From the perspective of the type of value
+
+It is necessary to adopt a more scientific approach to quantify the indirect value and existence value lost in urban expansion and to incorporate them into the assessment system for land use projects.
+
+From the perspective of the type of ecosystem
+
+Land use planners should increase the density of artificial ditches, while paying attention to the protection of existing water areas and strengthening pollution control.
+
+# 4.2 Small project - Huangguoshu Scenic Area
+
+# 4.2.1 Project Description
+
+The Huangguoshu Scenic Area is located in Anshun City, Guizhou Province. From 2009 to 2012, the Huangguoshu Scenic Area was reconstructed, which changed the service value of its ecosystem within a small geographical scope. The ArcGIS 9.2 software was used to export and sort the land use dynamic data of the Huangguoshu Scenic Area from 2009 to 2012 (see Appendix).
+
+# 4.2.2 Adjustment of the value of ecological services
+
+The ecological service value of the construction land in the Huangguoshu Scenic Area is small, so the value of its ecosystem service function is not estimated. The changes in land use types affect the ecosystem only within a small geographical area and have little impact on its climate-regulation and biodiversity-maintenance functions, so the values of climate regulation and biodiversity services are not considered. Based on the above adjustments, the ecological value equivalents of each land use type in the Huangguoshu Scenic Area are obtained (see figure 10).
+
+| Land Use Type | Cultivated land | Garden plot | Woodland | Grassland | Waters | Swamp and tidal flats | Other land |
| Ecological Value per Unit Area (yuan/(hm²·a)) | 3613.65 | 7606.45 | 11426.68 | 3786.23 | 24040.47 | 32794.82 | 219.64 |
+
+Figure 10: Ecosystem service value per unit area of different land types in Huangguoshu
+
+# 4.2.3 Calculating ESV
+
+Using the adjusted ecological value equivalent table and the land use area data for each year, the value of the ecosystem service functions of the Huangguoshu Scenic Area from 2009 to 2012 is calculated, giving the ecological cost of the scenic area reconstruction project (see Appendix).
+
+It can be seen that the ecological service value of the forest land in the Huangguoshu Scenic Area is the largest, accounting for nearly $70\%$ of the total, followed by cultivated land, waters and garden plots. The reconstruction project caused the total ecological service value to drop by 323,300 yuan over four years, which is the total ecological cost of the project.
+
+# 4.2.4 Sensitivity analysis
+
+The sensitivity indices for the Huangguoshu project are calculated (see Appendix). The sensitivity index of every land use type is less than 1; from high to low, they are forest land, cultivated land, pasture, waters, garden plot, swamp and tidal flat, and other land. This indicates that the ecological service value is inelastic to changes in the ecosystem service value coefficients, so the results of this study are credible.
+
+# 4.2.5 Suggestions
+
+The ecological cost of the Huangguoshu scenic area reconstruction project calculated by our model is about 323,300 yuan. Considering that forest land contributes the most to the value of ecosystem services in scenic spots, Huangguoshu Scenic Area should pay attention to the protection of forest land, pasture and other ecological land in future development and improve the overall function of regional land ecosystem services.
+
+# 4.3 A cost benefit analysis of land use development projects
+
+From the perspective of cost-benefit analysis, we study the level of benefits before and after considering ecological costs.
+
+If ecological costs are not considered, then
+
+$$
+\text{Total cost} = \text{construction cost} + \text{operating cost} \tag {10}
+$$
+
+where construction costs are paid in one lump sum at an early stage, and operating costs occur annually.[12]
+
+If ecological costs are considered, then
+
+$$
+\text{Total cost} = \text{construction cost} + \text{operating cost} + \text{ecological cost} \tag {11}
+$$
+
+where the ecological cost is the ESV lost in each year. The total benefit of the land use project is
+
+$$
+\text{Total benefit} = \text{economic benefit} + \text{social benefit} + \text{ecological benefit} \tag {12}
+$$
+
+As shown in figures 11 and 12:
+
+- If the ecological cost is not considered, as time changes, at $T_{1}$ , the total cost and the total benefit curve intersect at point A. In the view of policy makers, all investments are compensated at this time, and the project becomes profitable after $T_{1}$ .
+- If the ecological cost is considered, the slope and the intercept of the total cost curve both increase, so the break-even point moves later in time.
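With linear cost and benefit curves, the break-even comparison can be computed directly. All coefficients below are hypothetical illustration values (including a one-off restoration cost added when ecology is considered); the sketch only demonstrates the shift of the break-even point, not the paper's actual curves.

```python
# Sketch of the break-even comparison in figures 11-12, assuming
# linear curves. All coefficients are hypothetical examples.
construction = 500.0   # one-off construction cost at t = 0
operating    = 40.0    # operating cost per year
ecological   = 25.0    # ESV loss per year (hypothetical)
restoration  = 80.0    # one-off ecological restoration cost (hypothetical)
benefit_rate = 120.0   # total benefit accrued per year

def break_even(extra_fixed=0.0, extra_rate=0.0):
    """Time T where cumulative benefit first equals cumulative cost."""
    fixed = construction + extra_fixed
    rate = operating + extra_rate
    return fixed / (benefit_rate - rate)  # solve benefit_rate*T = fixed + rate*T

t1 = break_even()                                              # cost ignores ecology
t2 = break_even(extra_fixed=restoration, extra_rate=ecological)  # cost includes ecology
print(round(t1, 2), round(t2, 2))
```

Raising both the intercept (`extra_fixed`) and the slope (`extra_rate`) of the cost line necessarily pushes the intersection point to a later time, which is the qualitative point the two figures make.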
+
+# 5 Land Use Project Plan Assessment
+
+In this section, a multi-objective nonlinear programming model is used to analyze land use scenarios under different priority objectives. Based on the evaluation results in Section 3.5, two typical provinces were selected to evaluate their existing land use policies.
+
+# 5.1 Multi-objective nonlinear programming model
+
+Land use is the way in which humans occupy and use land [13].
+
+Basically, there are four land use principles: (1) Natural growth. (2) Economic development. (3) Ecological protection. (4) ECO development. Assuming that:
+
+1. Land use projects mainly change the areas of the ecosystem types and the types of ecosystems present.
+2. The land use plan of a given area is not adjusted in the short term, and land use is highly efficient. The ecosystem types affected by different projects change in a consistent direction, that is, all land use projects in the area are assumed to influence each ecosystem type in the same direction (an increase or a decrease in area).
+
+Figure 11: Cost-benefit curve (a)
+
+Figure 12: Cost-benefit curve (b)
+
+3. Targets are selected for the short term, considering only the value change before and after the project.
+
+The area of each ecosystem type after the implementation of the land use project can be expressed as $A_{k} + \nabla A_{k}$. By analogy with the ESV calculation, we propose an accounting method for the economic value (GDP):
+
+$$
+ESV = \sum_{k} A_{k} \cdot LVC_{k} \tag {13}
+$$
+
+$$
+GDP = \sum_{k} A_{k} \cdot NVC_{k} \tag {14}
+$$
+
+where $LVC_{k}$ denotes the ecological value coefficient per unit area of ecosystem type k (the same as $VC_{k}$ above), and $NVC_{k}$ denotes the economic value coefficient per unit area of ecosystem type k.
+
+Under the ecological protection principle, the model can be constructed as follows:
+
+min $P_{1}\cdot d_{1}^{+} + P_{2}\cdot d_{2}^{-}$
+
+s.t. $\sum_{k} \nabla A_{k} = 0$
+
+$$
+A_{k} + \nabla A_{k} \geq MinNeed_{k}
+$$
+
+$$
+\begin{array}{l} \sum_{k} \nabla A_{k} \cdot I (\nabla A_{k} > 0) \cdot (PCC_{k} + POC_{k}) + \sum_{k} \nabla A_{k} \cdot I (\nabla A_{k} < 0) \cdot ERC_{k} \\ \leq \min \{Budget, Resource\} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \sum_{k} \nabla A_{k} \cdot I (\nabla A_{k} > 0) \cdot PI_{k} \geq \sum_{k} \nabla A_{k} \cdot I (\nabla A_{k} > 0) \cdot \left(PCC_{k} + POC_{k}\right) \\ \sum_{k} (A_{k} + \nabla A_{k}) \cdot LVC_{k} + d_{1}^{-} - d_{1}^{+} = MinNeed_{ESV} \\ \sum_{k} (A_{k} + \nabla A_{k}) \cdot NVC_{k} + d_{2}^{-} - d_{2}^{+} = MinNeed_{GDP} \\ \end{array}
+$$
+
+where $I(\nabla A_k > 0)$ and $I(\nabla A_k < 0)$ are the indicator functions of $\nabla A_k$, and $d_i^+$ and $d_i^-$ are the positive and negative deviation variables.
+
+Among the constraints, the first formula states that the total land area is constant: the increased and decreased land areas cancel each other out. The second states that the area of each land type after the project is implemented is not less than the minimum need for that land. The capital and resource input of a land use project in a region can be divided into three parts (project construction investment, ecological restoration investment and project operation input), which motivates the third formula: the sum of construction and operating costs for each increased land type plus the ecological restoration cost of each reduced land type is limited by the budget and the available resources. The fourth formula states that the total project income from increased land is no less than the total construction and operating costs. The last two formulas state that ESV and GDP meet their minimum requirements.
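A tiny brute-force sketch of this goal program for two ecosystem types is given below: type 1 gains area `dA`, type 2 loses it, so the total-area constraint holds and the indicator terms have fixed signs. Every coefficient is a hypothetical illustration value, the objective penalizes shortfalls below both goals (a slight simplification of the deviation objective above), and a real implementation would use a goal-programming or MIP solver instead of a grid search.

```python
# Brute-force sketch of the Section 5.1 goal program for two ecosystem
# types: type 1 gains area dA, type 2 loses dA. All numbers hypothetical.
A   = [100.0, 200.0]       # current areas
LVC = [30.0, 5.0]          # ecological value coefficients
NVC = [10.0, 25.0]         # economic value coefficients
PCC_POC, ERC, PI = 8.0, 3.0, 12.0  # build+run cost, restoration cost, income
budget = 900.0
min_area = [50.0, 120.0]           # MinNeed_k
min_esv, min_gdp = 5000.0, 5600.0  # MinNeed_ESV, MinNeed_GDP
P1, P2 = 10.0, 1.0                 # ecological goal has priority

best = None
for step in range(0, 801):                 # search dA in [0, 80]
    dA = step * 0.1
    a1, a2 = A[0] + dA, A[1] - dA
    if a1 < min_area[0] or a2 < min_area[1]:
        continue                           # minimum-area constraint
    if dA * (PCC_POC + ERC) > budget:
        continue                           # budget/resource constraint
    if dA * PI < dA * PCC_POC:
        continue                           # income must cover costs
    esv = a1 * LVC[0] + a2 * LVC[1]
    gdp = a1 * NVC[0] + a2 * NVC[1]
    d1_minus = max(0.0, min_esv - esv)     # ESV shortfall
    d2_minus = max(0.0, min_gdp - gdp)     # GDP shortfall
    score = P1 * d1_minus + P2 * d2_minus
    if best is None or score < best[0]:
        best = (score, dA, esv, gdp)

print(best)  # (weighted shortfall, chosen dA, resulting ESV, resulting GDP)
```

In this toy instance the optimizer trades GDP for ESV until the ecological goal is exactly met, because the ecological priority weight `P1` dominates.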
+
+# 5.2 Plan Assessment
+
+# 5.2.1 Impact of area change
+
+According to the previous evaluation results, we take Heilongjiang Province and Jiangsu Province as study subjects. Through Monte Carlo simulation with $N = 200$ trials, the results are as follows:
+
+# For Heilongjiang
+
+($A_1$ denotes forest land and $A_2$ denotes residential construction land.) The impact of forest area change on ESV and GDP is moderate. When the forest area increases by more than $2.1\%$, the forest-area elasticity of ecological service value approaches 0. At that point the forest area approaches saturation: even a significant further increase in forest area contributes little additional ecological value.
+
+# For Jiangsu
+
+($A_2$ denotes residential construction land and $A_3$ denotes cultivated land.) The figure shows that the percentage change of cultivated land area has a great impact on both ecological and economic value, and the marginal ecological value and marginal economic cost brought by an increase in area show a decreasing trend. The change in residential construction land area has a greater impact on GDP and ESV within a certain range; when the percentage change exceeds $0.8\%$ or falls below $-2.4\%$, the impact on GDP is no longer obvious.
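The Monte Carlo experiment described above can be sketched as follows: sample random percentage changes in one land type's area and record the induced relative changes in ESV and GDP. The areas and value coefficients are hypothetical, and because this sketch keeps the valuation linear, the estimated elasticity is constant rather than saturating as in the paper's figures.

```python
import random

# Sketch of the Section 5.2.1 Monte Carlo sweep. All coefficients are
# hypothetical illustration values (construction land has a negative LVC,
# mirroring the negative construction-land values in figure 3).
random.seed(0)
A   = {"forest": 2.0e7, "construction": 3.0e6}      # areas (hm^2)
LVC = {"forest": 38000.0, "construction": -4000.0}  # ecological value coeffs
NVC = {"forest": 2000.0, "construction": 90000.0}   # economic value coeffs

def totals(area):
    esv = sum(area[k] * LVC[k] for k in area)
    gdp = sum(area[k] * NVC[k] for k in area)
    return esv, gdp

base_esv, base_gdp = totals(A)
samples = []
for _ in range(200):                   # N = 200 trials, as in the paper
    pct = random.uniform(-0.03, 0.03)  # forest area change of +/- 3%
    trial = dict(A)
    trial["forest"] = A["forest"] * (1 + pct)
    esv, gdp = totals(trial)
    samples.append((pct, esv / base_esv - 1, gdp / base_gdp - 1))

# crude elasticity estimate: mean ratio of relative ESV change to pct
elast = sum(d_esv / pct for pct, d_esv, _ in samples if pct) / len(samples)
print(round(elast, 3))
```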
+
+# 5.2.2 Policy evaluation
+
+- Heilongjiang Province began to implement its ecological construction strategy in 2001 [14]. We compare and analyze the value of ecosystem service functions before and after the implementation of the land use policy in Heilongjiang Province. The results show that the various measures of the policy are basically reasonable and can take environmental benefits into account while developing the economy, though the efficiency of land use should be emphasized more in land use policies.
+
+Figure 13: The impact of area change in Heilongjiang
+
+Figure 14: The impact of area change in Jiangsu
+
+- Under the dual role of economic development and land resource management policies, Jiangsu Province has strengthened the protection of agricultural land. However, the results of data analysis show that the economic aggregate of Jiangsu Province has expanded rapidly and there is a problem of insufficient applicability of current planning.
+
+# 6 Change of time
+
+# 6.1 Seasonal change
+
+During the year, due to changes in climate, temperature and other factors, the value of $VC_{k}$ also changes. According to the published statistical documents [15][16], we sort the amount of ecosystem value by month. The changes in the value of ecological services for the various ecosystems from January to December are shown in Figure 15.
+
+
+Figure 15: Ecosystem value change chart during the year
+
+Among them, the annual service value of residential construction land, desert, bare land and glacial snow is extremely small, while the service values of the other ecosystems follow broadly the same trend: they peak in July, decrease gradually toward the adjacent months on both sides, and reach their lowest points in January and December.
+
+The reason can be analyzed as follows. From winter to summer, the temperature rises, rainfall gradually increases, and the growth of vegetation accelerates; the biomass accumulation rate of the ecosystem reaches its maximum, and the functions of the various ecosystems gradually strengthen. Therefore, the value of the ecological services the ecosystems provide gradually increases.
+
+# 6.2 Annual change
+
+According to the data of different ecosystems in China in 2004, 2008, 2012 and 2016 [17][18][19][20], we use grey forecasting to predict short-term future data and to estimate long-term changes. Because the units of the annual values change, we use an importance measure for the proportion of each ecosystem in the total ecosystem value in each year. The importance is calculated according to the following formula:
+
+$$
+I _ {i j} = V _ {i j} / V _ {i} \tag {15}
+$$
+
+Where the importance level $I_{ij}$ indicates the proportion of the value of ecosystem type $j$ among all ecosystem types in the $i$-th year, $V_{ij}$ represents the value of ecosystem type $j$ in the $i$-th year, and $V_i$ represents the sum of the values of all ecosystem types in the $i$-th year. Then, grey prediction and normalization are performed for the different ecosystem types to obtain the importance level table.
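A minimal grey GM(1,1) forecaster, together with the normalization of the importance formula, can be sketched as follows. This is a generic implementation of the standard GM(1,1) recipe, not the authors' exact code, and the sample series is made up for illustration.

```python
import math

def importance(values):
    # I_ij = V_ij / V_i: the share of each ecosystem type in the yearly total.
    total = sum(values)
    return [v / total for v in values]

def gm11(x0):
    """Fit a grey GM(1,1) model to the series x0 and return a predictor.

    The whitened equation dx1/dt + a*x1 = b is fitted on the cumulative
    series x1 via least squares over the adjacent means z1.
    """
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    z1 = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    m = n - 1
    s_zz = sum(z * z for z in z1)
    s_z = sum(z1)
    s_zy = sum(z1[i] * x0[i + 1] for i in range(m))
    s_y = sum(x0[1:])
    # Solve the 2x2 normal equations for (a, b) in x0[k] = -a*z1[k-1] + b.
    a = (s_z * s_y - m * s_zy) / (m * s_zz - s_z * s_z)
    b = (s_y + a * s_z) / m

    def predict(k):
        # k = 0 reproduces the first observation; k >= 1 gives fitted/forecast values.
        if k == 0:
            return x0[0]
        c = x0[0] - b / a
        return c * math.exp(-a * k) - c * math.exp(-a * (k - 1))

    return predict

forecast = gm11([100.0, 110.0, 121.0, 133.1])  # roughly geometric growth
```

On near-exponential data like this, the fitted values track the observations closely, and `forecast(4)` extrapolates one step beyond the series.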
+
+
+Figure 16: Table of different ecosystem importance levels
+
+# The impact of short-term interannual changes on the model
+
+With the development of science and technology and more scientific planting, the value of cultivated land ecosystems has been enhanced, while the values of the other land types show no significant change.
+
+# The impact of long-term interannual changes on the model
+
+The value of cultivated land ecosystems continues to increase in the short term; ultimately, because of the resource constraints of cultivated land ecosystems, the value of the cultivated land area will not keep increasing so significantly. Also, although the ecological value of the residential construction land ecosystem is minimal or even negative in the current study, in the future, with the development of the green economy and the implementation of environmental protection, its value will increase by a large margin.
+
+# 7 Strengths and Weaknesses
+
+# 7.1 Strengths
+
+1. Introducing the ecological service value equivalent correction coefficient. The base equivalent table of ecological value adopts the national average level; when the ecological value of a particular area is analyzed, the base equivalent table can be adjusted to local conditions to make the final result more accurate.
+2. Classification of ecological service values. In the ecological value assessment, the value of ecological services is divided into three categories, so the scale, structure and change trend of the various values can be analyzed more intuitively.
+3. Analysis of ecological service capabilities. Dividing the value of regional ecological services by regional area and GDP allows a more comprehensive evaluation of the stability of the ecosystem, and a more intuitive discussion of the relationship between ecological protection and economic development.
+4. Introducing the concept of a value-saturated area. As marginal utility declines and the area of a certain land type reaches a certain level, further growth has almost zero impact on ecological value. This provides a useful reference for regional land use planning.
+
+# 7.2 Weaknesses
+
+1. The base equivalent has limited timeliness. The base value is estimated from willingness-to-pay questionnaire surveys, so the resulting value equivalent table can only evaluate ecological value over a short period.
+2. There is a problem of double counting. For example, there is overlap between the values of biodiversity and of aesthetic landscape, culture and education, which inflates the final evaluation value.
+3. Part of the value is difficult to quantify, or the accuracy after quantification may be low. For example, the value of functions such as genetic inheritance and human disease regulation is difficult to quantify, which biases the final evaluation results.
+
+# 8 Appendix
+
+| Year | Woodland | Grassland | Waters | Farmland | Construction land | Unused land |
+| --- | --- | --- | --- | --- | --- | --- |
+| 2010 | 599.29 | 14.60 | 419.45 | 676.74 | 358.62 | 29.71 |
+| 2011 | 598.42 | 14.41 | 416.89 | 675.71 | 364.52 | 29.98 |
+| 2012 | 597.69 | 14.11 | 414.42 | 675.23 | 369.36 | 30.09 |
+| 2013 | 596.81 | 13.89 | 411.71 | 674.82 | 374.36 | 30.37 |
+| 2014 | 595.87 | 13.73 | 409.32 | 673.91 | 379.67 | 30.48 |
+| 2015 | 595.09 | 13.61 | 407.28 | 674.33 | 383.33 | 30.49 |
+| 2010-2015 Area Change | -4.20 | -0.99 | -12.17 | -2.41 | 24.71 | 0.79 |
+| 2010-2015 Change Rate | -0.70% | -6.80% | -2.90% | -0.36% | 6.89% | 2.65% |
+
+Figure 17: Land Use Changes of Yangtze River Delta,2010-2015
+
+| Year | Woodland | Grassland | Waters | Farmland | Construction land | Unutilized land | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 2010 | 2276.07 | 18.37 | 3354.46 | 812.81 | -860.68 | 2.16 | 5603.20 |
+| 2011 | 2272.75 | 18.13 | 3334.04 | 811.57 | -874.84 | 2.18 | 5563.84 |
+| 2012 | 2269.99 | 17.75 | 3314.26 | 811.00 | -886.46 | 2.19 | 5528.74 |
+| 2013 | 2266.65 | 17.47 | 3292.56 | 810.51 | -898.46 | 2.21 | 5490.95 |
+| 2014 | 2263.05 | 17.27 | 3273.47 | 809.41 | -911.21 | 2.22 | 5454.23 |
+| 2015 | 2260.11 | 17.12 | 3257.16 | 809.92 | -920.00 | 2.22 | 5426.55 |
+| 2010-2015 ESV Change | -15.95 | -1.25 | -97.30 | -2.89 | -59.31 | 0.05 | -176.65 |
+| 2010-2015 Change Rate | -0.70% | -6.8% | -2.90% | -0.36% | 6.89% | 2.65% | 3.15% |
+
+Figure 18: Changes of Ecosystem Service Value of Yangtze River Delta,2010-2015
+
+| Year | Direct Value / Million Yuan | Indirect Value / Million Yuan | Existing Value / Million Yuan |
+| --- | --- | --- | --- |
+| 2010 | 390.026 | 3668.560 | 1545.788 |
+| 2011 | 388.763 | 3643.175 | 1534.065 |
+| 2012 | 386.977 | 3619.002 | 1523.916 |
+| 2013 | 385.021 | 3593.827 | 1513.046 |
+| 2014 | 383.847 | 3570.250 | 1502.070 |
+| 2015 | 383.314 | 3550.045 | 1493.998 |
+| 2010-2015 Value Change | -6.713 | -118.515 | -51.790 |
+| 2010-2015 Change Rate | -1.72% | -3.23% | -3.35% |
+
+Figure 19: Changes of Three Values of Yangtze River Delta,2010-2015
+
+| Year | Woodland | Grassland | Waters | Farmland | Construction land | Unused land |
+| --- | --- | --- | --- | --- | --- | --- |
+| 2010 | 0.40621 | 0.00328 | 0.59867 | 0.14506 | 0.15438 | 0.00039 |
+| 2011 | 0.40621 | 0.00323 | 0.60867 | 0.14335 | 0.15456 | 0.00038 |
+| 2012 | 0.43462 | 0.00328 | 0.60008 | 0.14335 | 0.15444 | 0.00040 |
+| 2013 | 0.40621 | 0.00328 | 0.59867 | 0.14665 | 0.15438 | 0.00039 |
+| 2014 | 0.40126 | 0.00325 | 0.59867 | 0.14506 | 0.15454 | 0.00038 |
+| 2015 | 0.43462 | 0.00322 | 0.59867 | 0.14501 | 0.15234 | 0.00039 |
+
+Figure 20: Coefficient of Sensitivity of Yangtze River Delta,2010-2015
+
+| Land Use Type | Amount of Change | Land Use Dynamics |
+| --- | --- | --- |
+| Cultivated Land | 32.81 | 0.93 |
+| Garden Plot | -2.99 | -0.70 |
+| Woodland | -21.54 | -0.37 |
+| Grassland | -43.86 | -1.83 |
+| Construction Land | 33.88 | 4.69 |
+| Waters | -0.18 | -0.09 |
+| Swamp and Tidal Flats | 0.00 | 0.00 |
+| Other Land | 1.88 | 0.15 |
+
+Figure 21: The dynamic change of land use in Huangguoshu from 2009 to 2012
+
+| Land Use Type | ESV 2009 | ESV 2010 | ESV 2011 | ESV 2012 | Ecological Cost of the Project |
+| --- | --- | --- | --- | --- | --- |
+| Cultivated Land | 1268.91 | 1274.84 | 1282.53 | 1280.77 | -11.89 |
+| Garden Plot | 323.77 | 323.53 | 322.80 | 321.49 | 2.27 |
+| Woodland | 6734.23 | 6713.13 | 6717.61 | 6709.62 | 24.61 |
+| Grassland | 950.63 | 899.62 | 890.28 | 889.02 | 16.61 |
+| Swamp and Tidal Flats | 79.69 | 79.69 | 80.12 | 79.69 | 0.00 |
+| Waters | 502.76 | 502.78 | 467.37 | 502.33 | 0.43 |
+| Other Land | 28.23 | 28.27 | 28.37 | 28.27 | -0.04 |
+| Total | 9843.22 | 9821.85 | 9789.07 | 9811.19 | 32.03 |
+
+Figure 22: The total ESV of different land types in Huangguoshu
+
+| Land Use Type | CS 2009 | CS 2010 | CS 2011 | CS 2012 | ... |
+| --- | --- | --- | --- | --- | --- |
+| Cultivated Land | 0.1285 | 0.1294 | 0.1306 | 0.1301 | ... |
+| Garden Plot | 0.0328 | 0.0328 | 0.0329 | 0.0327 | ... |
+| Woodland | 0.6820 | 0.6813 | 0.6840 | 0.6817 | ... |
+| Grassland | 0.0917 | 0.0913 | 0.0907 | 0.0903 | ... |
+| Swamp and Tidal Flats | 0.0081 | 0.0081 | 0.0082 | 0.0081 | ... |
+| Waters | 0.0511 | 0.0512 | 0.0477 | 0.0520 | ... |
+| Other Land | 0.0029 | 0.0029 | 0.0029 | 0.0029 | ... |
+
+Figure 23: CS of ecosystem services value of different land types in Huangguoshu
+
+# References
+
+[1] Costanza R, d'Arge R, de Groot R, et al. The value of the world's ecosystem services and natural capital. Nature, 1997, 387: 235-260
+[2] Millennium Ecosystem Assessment (MEA). Ecosystems and HumanWell-Being: Synthesis[M]. Washington DC: Island Press, 2005:1-13
+[3] Xie G D, Zhang C X, Zhen L. et al. Dynamic changes in the value of China's ecosystem services[J]. Ecosystem Services. 2017, 26:146-154.
+[4] ISO Environmental Quality Manual
+[5] Yang, Q., Liu, G., Casazza, M., Campbell, E., Giannettia, B., Brown, M., December 2018. Development of a new framework for non-monetary accounting on ecosystem services valuation. Ecosystem Services 34A, 37-54.
+[6] XIE Gao-di, ZHEN Lin, LU Chun-xia, et al. Expert knowledge based valuation method of ecosystem services in China. Journal of Natural Resources, 2008, 23(5): 911-919.
+[7] Norgaard R B. Ecosystem services: From eye-opening metaphor to complexity blinder. Energy and Resources Group, University of California, Berkeley, United States
+[8] XIE Gao-di, LU Chun-xia, LENG Yun-fa, et al. Ecological assets valuation of the Tibetan Plateau. Journal of Natural Resources, 2003, 18(2): 189-196.
+[9] National Bureau of Statistics. China Statistical Yearbook 2016 [EB/OL].2016.
+[10] National Bureau of Statistics. China Statistical Yearbook 2017 [EB/OL].2017.
+[11] Liu Guilin, Zhang Luocheng, Zhang Qian. The Impact of Spatial and Temporal Changes of Land Use on the Value of Ecosystem Services in the Yangtze River Delta[J]. Chinese Journal of Ecology, 2014, 34(12):3311-3319
+[12] Yin Qi. Economic Analysis of Land Use Planning [D]. Zhejiang University, 2006.
+[13] Wang Xiulan, Bao Yuhai. Discussion on Research Methods of Land Use Dynamic Change[J].Progress in Geography,1999,18(1):81-87.
+[14] Ran Shenghong, Lv Changhe, Jia Kejing, et al. Environmental Impact Assessment of Land Use Change in China Based on Ecological Service Value[J]. Environmental Science,2006,27(10): 2139-2144.
+[15] National Bureau of Statistics of the People's Republic of China. Chinese Statistical Yearbook. Beijing: China Statistics Press, 2011.
+[16] State Forestry Administration of the People's Republic of China. China Forestry Statistical Yearbook. Beijing: China Forestry Publishing House, 2010.
+
+[17] WANG Zong-ming, ZHANG Shu-qing, ZHANG Bai. Effects of land use change on the ecosystem service value of Sanjiang Plain[J]. China Environmental Science, 2004, 24(1): 0-0.
+[18] Xu Xu, Li Xiaobing, Fu Na, et al. Application of Ecosystem Service Value Accounting in Strategic Environmental Assessment of Land Use Planning: A Case Study of Beijing[J]. Resources Science, 2008, 30(9): 1382-1388.
+[19] Ouyang Zhiyun, Wang Xiaoke. Preliminary Study on the Service Function and Ecological Economic Value of Terrestrial Ecosystem in China[J]. Chinese Journal of Ecology, 1999, 19(5): 607-613.
+[20] ZHANG Xing-yi, HUANG Xian-jin, ZHAO Xiao-feng. Accounting of ecosystem service value of land use change in coastal areas of Jiangsu Province[J]. Soil and Water Conservation Research, 2015, 22(1): 252-256.
\ No newline at end of file
diff --git a/MCM/2019/E/1902054/1902054.md b/MCM/2019/E/1902054/1902054.md
new file mode 100644
index 0000000000000000000000000000000000000000..c598913f054015f212aaaf2dece2fe1fe502ac30
--- /dev/null
+++ b/MCM/2019/E/1902054/1902054.md
@@ -0,0 +1,302 @@
+# 2019
+
+# MCM/ICM
+
+# Summary Sheet
+
+
+We build a mathematical framework to measure the value of ecological services in an area intended for a land use project. We use a composite coefficient set to indicate the status quo of the land before the project, considering various types of services in different media such as air, water and land. Then, we use a logistic growth model to predict the impact of the project on the value of the land.
+
+The model was applied to three different cases: Kubuqi Desert greening, steel production by China Baowu Steel Group, and the panda habitat Wolong National Nature Reserve. The results show that Kubuqi Desert greening turns the initially negative value of the local land positive, and the model also informs planners of the "tipping point" for the creation and maximization of value. The example of China Baowu Steel Group shows that pollution worsens as more factories are built. Wolong National Nature Reserve does not have a significant impact on the original environment regardless of its area, which agrees with the purpose of a nature reserve. This model suits various land types with modification of the coefficients; however, determining these coefficients accurately can be complicated.
+
+# What is the Cost of Environmental Degradation?
+
+# Abstract
+
+We build a mathematical framework to measure the value of ecological services in an area. We use a composite coefficient set to indicate the status quo of the landform before the project, considering categories of provisioning services, regulating services, cultural services, supporting services in different mediums like air, water and land. Then, we use a logistic growth model to represent the impact from implementing the project on the landform, which is exponential change limited by a carrying capacity.
+
+We then apply the model to three cases, Wolong National Nature Reserve, China Baowu Steel Group, and Kubuqi Desert greening, representing different types of land use project: improvement, pollution, and remedy, respectively. Finally, we discuss the implications of this model for project planners and suggest future research directions.
+
+Keywords: ecological services, valuation, logistic growth model
+
+# 1. Introduction
+
+Ecological services are the various kinds of benefits humans receive from the natural environment and a properly-functioning ecosystem, such as the purification of air and water; nutrient cycling; resources like timber, fuel and fiber. However, natural ecosystems worldwide are under enormous pressure from human activities, especially from many land use projects. Therefore, it is important to remind people that the loss of these services will leave us worse off.(IUCN, 2005)
+
+# 2. Assumptions
+
+- The valuation model measures ecological services in monetary terms, so that project planners can use this model efficiently in a traditional cost-benefit analysis
+- We define an ecological service as a benefit humans receive freely from the natural environment and a properly-functioning ecosystem
+- We define valuation as estimating the current worth of something
+- We define a land use project as the total of arrangements, activities, and inputs that people undertake in a certain land cover type, which can be categorized as
+
+residential land use, commercial land use, industrial land use, agricultural land use, recreational land use, transport space, public land use or open space, etc.
+
+# 3. Defining the economic cost of land use projects
+
+We define the true economic cost of land use projects to include and account for their impacts on ecosystem services. The impacts on environmental segments include but are not limited to:
+
+- Surface water and groundwater pollution caused by detrimental contaminants dumping, water wastage, ineffective water recycling systems, etc.
+- Soil pollution caused by the industrial activities of the projects, which mainly produce poisonous man-made waste. Waste generated by nature itself, such as rotten plants and dead flora and fauna, only increases the fertility of the soil, but the chemical waste released by industrial land use projects, mining activities and other land-use activities can greatly harm the soil and should therefore be taken into account.
+- Air pollution caused by industrial sources of emissions and air quality degradation due to human activities of sensitive land use, etc.
+
+# 4. The model
+
+To quantify the value of a land use project, framed as "value" (V) and measured in monetary terms, we consider the original ecological value (E) and the influence of the implementation of the project (I). Also, we assume the values are uniform across the area of the project, so all of the above values are functions of the area (A).
+
+$$
+V (A) = E (A) + I (A) \tag {1}
+$$
+
+To define the original ecological value $E$, we use a coefficient set, each coefficient indicating the monetary value per unit area that the chosen land should bring, which can be positive or negative.
+
+$$
+E (A) = \left(\alpha_ {1} + \alpha_ {2} + \dots\right) \cdot A _ {0} = \sum_ {i} ^ {n} \alpha_ {i} A _ {0} \tag {2}
+$$
+
+where
+
+- coefficient $\alpha_{i}$ is determined by corresponding metric for assessing ecological services (see section 5 for details).
+- $n$ is the total number of coefficients.
+- $A_0$ is the area of land planned to use.
+
+To define the influence of the implementation on the project value (I), we consider the accumulation of impact in an ecological system. Since an ecosystem is a complicated, interrelated system, an initial impact will lead to an exponentially higher consequential impact. However, since the amount of ecological services cannot be infinitely high or infinitely low, the environment puts a limit on the exponential growth. Drawing inspiration from Pierre-François Verhulst's population dynamics (Cramer, 2003), we formulate I as a logistic growth function,
+
+$$
+\frac {d I}{d A} = C _ {0} I \left(1 - \frac {I}{K}\right) \tag {3}
+$$
+
+We can solve the logistic equation by separation of variables,
+
+$$
+\frac {d I}{I \left(1 - \frac {I}{K}\right)} = C _ {0} d A \tag {4}
+$$
+
+$$
+\left(\frac {1}{I} + \frac {1}{K - I}\right) d I = C _ {0} \, d A \tag {5}
+$$
+
+Then, integrate both sides,
+
+$$
+\int \left(\frac {1}{I} + \frac {1}{K - I}\right) d I = \int C _ {0} \, d A \tag {6}
+$$
+
+$$
+\ln I - \ln (K - I) = C _ {0} A + D \tag {7}
+$$
+
+$$
+\frac {K - I}{I} = e ^ {- C _ {0} A - D} = D ^ {\prime} e ^ {- C _ {0} A} \tag {8}
+$$
+
+where $D^{\prime} = e^{-D}$ is a constant of integration. Rearranging, and taking $D^{\prime} = K$ as in the case studies below, we get the analytical solution for $I$,
+
+$$
+I (A) = \frac {K}{K e ^ {- C _ {0} A} + 1} \tag {9}
+$$
+
+where
+
+- $C_0$ represents the impact of the implementation of the project on the ecosystem. The magnitude of $C_0$ sets the maximum per-area rate of change of the value, reflecting the extent of the ecological change the project will cause.
+- $K$ is the carrying capacity, which is the maximum ecological value change a particular project can cause. A negative $K$ shows the project is polluting the ecosystem; a positive $K$ indicates the project is improving the ecosystem, and when $K = 0$ the project is not affecting the ecological services of the land.
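As a sanity check on the closed form, one can integrate equation (3) numerically and compare against equation (9). The RK4 integrator below is our own helper, started from the initial value $I(0) = K/(K+1)$ implied by the closed form, with illustrative parameter values in the spirit of the case studies.

```python
import math

def I_closed(A, K, C0):
    # Equation (9): I(A) = K / (K * exp(-C0 * A) + 1).
    return K / (K * math.exp(-C0 * A) + 1.0)

def I_rk4(A_end, K, C0, h=0.1):
    # Classical Runge-Kutta integration of dI/dA = C0 * I * (1 - I/K).
    f = lambda I: C0 * I * (1.0 - I / K)
    I = K / (K + 1.0)  # value of the closed form at A = 0
    for _ in range(round(A_end / h)):
        k1 = f(I)
        k2 = f(I + 0.5 * h * k1)
        k3 = f(I + 0.5 * h * k2)
        k4 = f(I + h * k3)
        I += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return I

K, C0 = 5000.0, 0.002  # illustrative values only
```

The numerical and analytical curves agree closely, and the closed form visibly saturates at the carrying capacity $K$ for large areas.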
+
+Therefore, we have the entire model: the value of the project as a function of area, with coefficient set $\alpha$ and parameters $C_0$ and $K$,
+
+$$
+V (A) = \sum_ {i} ^ {n} \alpha_ {i} A _ {0} + \frac {K}{K e ^ {- C _ {0} A} + 1} \tag {10}
+$$
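Equation (10) can then be evaluated directly. The coefficient set below is a made-up placeholder (a degraded landform with negative $\alpha$ values being remedied by a project with $K > 0$), chosen only to illustrate the shape of $V(A)$.

```python
import math

def V(A, alphas, A0, K, C0):
    # Equation (10): original ecological value (2) plus logistic impact (9).
    E = sum(alphas) * A0
    I = K / (K * math.exp(-C0 * A) + 1.0)
    return E + I

# Placeholder coefficients for illustration only.
alphas = [-0.5, -0.3]
A0, K, C0 = 600.0, 5000.0, 0.002
```

With these numbers, $V$ starts negative, rises monotonically with the implemented area, and levels off at $E + K$.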
+
+# 5. Metrics for assessing ecological services value (coefficient set $\alpha$ )
+
+To approximate the coefficients attributed to various ecological services, we first define the term "ecological services" in more precise categories. The categories below are similar to the Millennium Ecosystem Assessment (MA), a major UN-sponsored effort to analyze the impact of human actions on ecosystems and human well-being.(Board, 2005)
+
+- Provisioning Services
+
+A provisioning service is any type of benefit to people that can be extracted from nature. Along with food, other types of provisioning services include drinking water, timber, wood fuel, natural gas, oils, plants that can be made into clothes and other materials, and medicinal benefits.
+
+- Regulating Services
+
+A regulating service is the benefit provided by ecosystem processes that moderate natural phenomena. Regulating services include pollination, decomposition, water purification, erosion and flood control, and carbon storage and climate regulation.
+
+- Cultural Services
+
+A cultural service is a non-material benefit that contributes to the development and cultural advancement of people, e.g. the building of knowledge and the spreading of ideas; creativity born from interactions with nature; and recreation.
+
+- Supporting Services
+
+The most fundamental natural processes that allow the Earth to sustain basic life forms, such as photosynthesis, nutrient cycling, the creation of soils, and the water cycle. These processes are the basis for provisional, regulating, and cultural services.
+
+# 6. Categorize types of land use project
+
+With the above model, we are able to distinguish different situations of land use projects based on the specifics of the coefficients; each combination of variables represents a type of land use project.
+
+# 1. positive coefficient set $(+\mathrm{E}(\mathrm{A}))$
+
+This indicates the original ecosystem of the land provides ecological services to people. This may be direct, like a supply of drinking water or timber, or indirect, like stabilizing the climate in the area.
+
+
+Figure 1: Positive initial system $(+\mathrm{E}(\mathrm{A}))$
+
+with negative $K$ (-I(A))
+
+This is the most typical type of land use project for commercial use, as the project is built for profit. The effect of the project will have different consequences for different stakeholders. An example may be a steel production factory near a river: if the investors are only interested in making money, they will probably avoid treating harmful emissions and polluted water, which will greatly jeopardize the original ecological services such as the drinking water source, clean air, and the biodiversity of the region.
+
+with neutral $K$ $(\mathrm{I}(\mathrm{A}) = 0)$
+
+This type of land use project usually falls under open space or recreational land use, for example, a national park trying to preserve the natural view.
+
+As the project is not for private use, it may maintain the status quo of the ecosystem, hence the value of the land will not change after implementing the project.
+
+with positive $K(+\mathrm{I}(\mathrm{A}))$
+
+This type of land use takes place when there is sufficient government funding or donations, as these projects try to enhance the ecosystem, requiring large inputs of personnel and funding with nothing monetary in return. An example is the establishment of Wolong National Nature Reserve in Sichuan, southwest China. To protect the endangered giant pandas, the China Conservation and Research Center for the Giant Panda was established at Wolong through the efforts of both the World Wildlife Fund (WWF) and the Chinese government, with funding primarily granted by the government. This reserve not only protects the giant pandas, but is also home to 4000 different species.(Liu et al., 2001)
+
+# 2. negative coefficient set $(-E(A))$
+
+This indicates the original ecosystem of the land presents ecological deficiencies. For example, this may be an area of soil already polluted and needs cleaning to be reused, or this is a natural landform that can hardly be used. In this case, the value of the land is initially negative.
+
+
+Figure 2: Negative initial system (-E(A))
+
+with positive $K(+\mathrm{I}(\mathrm{A}))$
+
+This type of land use is generally a kind of remedy or improvement. They are usually performed under the pressure of survival or a great altruism. An example is desert greening, a process of man-made reclamation of deserts for ecological reasons. Thar Desert in India remains dry for much of the year and is prone to soil erosion. High speed winds blow soil from the desert, depositing some on neighboring fertile lands, and causing shifting sand dunes within the desert, which buries fences and blocks roads and railway tracks. A permanent solution to this problem of shifting sand dunes can be provided by planting appropriate species on the dunes to prevent further shifting and planting windbreaks and shelter belts.(Berdell, 2011)
+
+with neutral $K$ $(\mathrm{I}(\mathrm{A}) = 0)$
+
+Few projects fall into this category, since such a project would have no effect on the already negative ecological situation of the land. One possibility is the building of residential or commercial facilities in the city when the magnitude of the ecological deficiency is low.
+
+with negative $K$ (-I(A))
+
+This type of project should be closely monitored by the local authority, as such projects can cause serious events detrimental to the local environment and people. The famous Great Smog of London is an example of this category. London had suffered from poor air quality since the 13th century, and it worsened in the 1600s. However, the energy released by the industrial revolution lured people into establishing more and more factories, eventually leading to this notorious event, which had the most significant effect on subsequent regulation.(Brimblecombe, 1976)
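The six sign combinations discussed in this section can be summarized mechanically. The helper below is simply a restatement of the taxonomy, with the short example labels being our own paraphrases.

```python
def classify_project(E_sign, K):
    # Map the sign of the original value E(A) and of the carrying capacity K
    # to the project type described in the text.
    kinds = {
        (1, -1): "profit-driven use degrading a healthy ecosystem (e.g. a polluting factory)",
        (1, 0): "preservation of a healthy ecosystem (e.g. a national park)",
        (1, 1): "enhancement of a healthy ecosystem (e.g. a funded nature reserve)",
        (-1, 1): "remedy of a degraded ecosystem (e.g. desert greening)",
        (-1, 0): "reuse of low-value land with little ecological effect",
        (-1, -1): "further degradation of an already damaged ecosystem (e.g. the Great Smog)",
    }
    k_sign = (K > 0) - (K < 0)
    return kinds[(E_sign, k_sign)]
```

For instance, a degraded landform with a positive carrying capacity maps to the remedy category of the Kubuqi case below.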
+
+# 7. Case studies
+
+In this section, we discuss three cases in detail: Kubuqi desert greening; steel production of China Baowu Steel Group; and Sichuan Wolong National Nature Reserve.
+
+# 7.1 case 1: Desert greening in Kubuqi desert, China
+
+Kubuqi Desert, located in Inner Mongolia, spans around 18,600 sq km of golden sand. Centuries of grazing had denuded the land of all vegetation, and the region's 740,000 people were wallowing in isolated poverty. In 1988, the Chinese firm Elion Resources Group partnered with local people and the Beijing government to combat desertification. Almost three decades later, one third of Kubuqi has been greened. Special plants have been grown to grip the shifting sands and to prevent the dunes encroaching on farms and villages. (Campbell and BaoTou, 2017)
+
+
+Figure (a): Satellite image of Kubuqi desert before greening in 2000
+
+
+Figure (b): Satellite image of Kubuqi desert after greening in 2016
+Figure 3: Comparison of Kubuqi desert landform between year 2000 and 2016 (Yu, 2017)
+
+To evaluate the value of this particular land use project, we first have to find the relevant coefficients in order to apply the model. From the quote we know the area of the desert to be $18600km^2$, one third of which is around $6000km^2$. Before the project starts, we estimate the coefficient sum to be $-0.8$ for the natural condition of a desert, the carrying capacity $K$ to be $5000$, and the $C_0$ value to be $0.002$. Therefore, we have the following equation and plot,
+
+$$
+V (A) = - 480 + \frac {5000}{5000 e ^ {- 0.002 A} + 1} \tag {11}
+$$
+
+
+Figure 4: Applying the model to the desert greening case
+
+From Figure 4 above we can see that the project turns the value of the land neutral $(V = 0)$ when the implemented area is $2091~km^2$, and brings the value to a steady maximum after the area reaches $4500~km^2$. As this project actually takes around $6000~km^2$, well above the saturation amount, it will greatly improve the local ecosystem and boost the value of the ecological services there.
+
+There is more to consider besides the ecological change itself. Turning a land of quicksand into stable, arable soil covered with plants will significantly reduce the frequency of sandstorms, while at the same time preventing further desertification, enhancing biodiversity, and bringing agricultural opportunities. To earn profits from the greening, people in Kubuqi are encouraged to grow licorice, which does not require much water and can be sold for large sums for use in Traditional Chinese Medicine (TCM). (Campbell and BaoTou, 2017) However, at the same time, we have to be aware of the potential risk behind this grand project: since resources (nutrient elements, rainfall, etc.) in the desert are extremely limited, trying to profit by growing crops may deplete the ecosystem even more, thereby compromising the effect of the project.
+
+Therefore, to understand the value of the project more comprehensively, we have to also consider the cost of the project, which we assume to be an initial investment plus a fixed cost per unit area,
+
+$$
+C (A) = - 800 - 0.5 A \tag {12}
+$$
+
+Also, the economic income generated from the new arable land is set to be
+
+$$
+B (A) = 0.2 A \tag {13}
+$$
+
+To put all factors together, we get the following equation and plot,
+
+$$
+\mathrm {Net} (A) = - 480 + \frac {5000}{5000 e ^ {- 0.002 A} + 1} - 800 - 0.5 A + 0.2 A \tag {14}
+$$
+
+
+Figure 5: Cost benefit plot of Kubuqi greening project
+
+From Figure 5, we can see that the project loses money until its area expands to $4285 \, \text{km}^2$, and it reaches maximum gains at $5980 \, \text{km}^2$; after that the project's value slowly slides down because of the limited water supply and nutrient elements. Finally, if the project expands too much, it runs the danger of depleting resources, which would decrease the value of the land.
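With the stated constants, equation (14) can be checked numerically. The bisection and grid-search helpers below are our own, and they land close to the break-even and maximum-gain areas quoted above.

```python
import math

def net(A):
    # Equation (14): ecological value (11) plus cost (12) and income (13).
    return (-480.0 + 5000.0 / (5000.0 * math.exp(-0.002 * A) + 1.0)
            - 800.0 - 0.5 * A + 0.2 * A)

def bisect_root(f, lo, hi, tol=1e-6):
    # Standard bisection on an interval where f changes sign.
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

break_even = bisect_root(net, 3000.0, 5000.0)   # area where the project stops losing money
peak_area = max(range(0, 10001), key=net)       # coarse search for the area of maximum gains
```

The computed break-even area is close to the $4285 \, \text{km}^2$ read off the figure, and the interior maximum sits near $5980 \, \text{km}^2$ before the linear cost term pulls the net value back down.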
+
+To visualize the cost-benefit ratio, we can use the following graph to see the clear increase in the value of the land: the much higher ecological value comes with the creation of indirect economic value from agriculture, the solar panel industry, and even tourism.
+
+
+Figure 6: Cost-benefit analysis of desert greening of $6000km^2$ in Kubuqi, China
+
+Based on the cost-benefit stacked column chart above, it is clear that this particular project of area $6000km^2$ in the Kubuqi desert should be implemented: it will improve not only the ecological terrain but, more importantly, the economic opportunities in the region, thereby raising the residents' standard of living in the process.
+
+# 7.2 case 2: China Baowu Steel Group
+
+Steel, a necessary material in many areas from military use to building structures, has played an important role since the Industrial Revolution. The steel industry has thus been seen as a key to economic prosperity: large steel companies generate employment, export earnings and tax revenues. However, beyond these social benefits, the steel industry also has a significant impact on the environment.
+
+The process of making steel starts with the production of iron. Iron is made from iron ore through a reduction reaction with carbon monoxide at extremely high temperature; the carbon monoxide used here is made by heating carbon with oxygen. The side products of this reaction, carbon dioxide, sulfides and nitrogen oxides, account for the major air emissions in the ironmaking process. Methods for manufacturing steel from iron include basic oxygen steelmaking (BOS) and the electric arc furnace (EAF). Like ironmaking, both methods require extremely high temperatures and thus consume a significant amount of energy, which is usually supplied by burning coal. A significant amount of water is also involved in the cooling process.
+
+Thus, given the huge volume of steel production, the various side products involved and the characteristics of the production process itself, the steel industry has brought serious environmental problems to the table. The costs to ecosystem services must not be ignored.
+
+
+Figure 7: Air pollution in Shanghai
+
+China Baowu Steel Group is selected as an example for analyzing environmental costs using the model above. According to the World Steel Association, China Baowu Steel Group ranked second worldwide in 2017 in steel production by volume, with 65.4 million tonnes, and is the largest steel producer in China. It employs 130,401 people and has annual revenues of around $21.5 billion. Located in Shanghai, a port city in eastern China, it takes up around 60 square kilometers, including the various departments of the production process.
+
+As mentioned above, a steel factory has a huge impact on the surrounding environment, so $C_0$ (the measure of this impact) is chosen to be 0.001. With the selected K value, our model for this particular steel factory becomes:
+
+$$
+V(A) = E(A) + I(A) = 5000 - \frac{3000}{1 + 3000e^{-0.001A}} \tag{15}
+$$
+
+
+Figure 8: Applying the model to the steel production case
+
+As shown in Figure 8, the total value of this project decreases with increasing land-use area, which agrees with reality. With a significantly increasing steel-producing area, Shanghai became an extremely polluted city with a severe air pollution problem. The Chinese government therefore published many steel production acts to regulate steel producers, one step of which was to reduce the production area by shutting down many factories.
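The decreasing behavior seen in Figure 8 can be confirmed by sampling Equation (15) directly; a minimal check in Python (the grid spacing is arbitrary):

```python
import math

def v(a):
    """Total value of the steel project as a function of land-use area a (Eq. 15)."""
    return 5000 - 3000 / (1 + 3000 * math.exp(-0.001 * a))

# Sample on a coarse grid and confirm the value is strictly decreasing in area
areas = range(0, 20001, 1000)
values = [v(a) for a in areas]
assert all(x > y for x, y in zip(values, values[1:]))
print(round(values[0]), round(values[-1]))  # falls from about 4999 toward 2000
```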
+
+# 7.3 Case 3: Wolong National Nature Reserve in Sichuan, China
+
+Located in the southwest of Wenchuan County, Sichuan Province, central China, Wolong National Nature Reserve is the third largest nationwide nature reserve in China. As the largest protected area in Sichuan Province, with the most complex natural conditions and the largest number of rare animal and plant species, Wolong Nature Reserve covers an area of over 200,000 hectares, mainly targeting the protection of the natural ecosystem of alpine forest regions and rare animal species such as the giant panda. The nature reserve experiments with an innovative semi-wild stocking mode in order to realize the goal of returning artificially propagated pandas to nature. The entire nature reserve is constructed based on local natural principles, following the original fold-and-thrust belt landscape, and thus forms a relatively complete alpine forest ecosystem that is home to over 4,000 species.
+
+
+Figure 9: Predictions of giant panda distribution in Wolong Nature Reserve before and after livestock application (Liu, 2017)
+
+It is clear from the above that Wolong National Nature Reserve has a positive impact on the surrounding environment and ecosystem services. For example, according to a prediction study of giant panda distribution in Wolong Nature Reserve before and after the semi-wild stocking mode was applied, conducted in May 2017 (Figure 9), the density of giant panda distribution undergoes a significant increase under Wolong's experimental semi-wild stocking mode, which makes a great contribution to maintaining the biodiversity of the ecosystem.
+
+In order to evaluate the true economic cost of Wolong National Nature Reserve, it is crucial to consider the impacts on the natural conditions of the landscape surrounding the protected area. According to Liu Peng and Jiang Shiwei's analysis of the impacts of nature reserves, a strictly regulated nature reserve can provide highly valuable ecosystem services and benefits in terms of carbon storage and recreation. In practice, Wolong Nature Reserve not only protects the balance of ecosystems by maintaining and increasing species diversity, but also improves the climate of the surroundings. To estimate the value of this land use project, we must determine the coefficients of the above model. The area of Wolong Nature Reserve is estimated to be $2,000km^2$, with a positive K value representing the enhancement of ecosystem services. We estimate the coefficient sum to be 1 for Wolong Nature Reserve, and the carrying capacity K to be 410. Therefore, we have,
+
+$$
+V(A) = E(A) + I(A) = 410 + \frac{2000}{1 + 2000e^{1 \cdot A}} \tag{16}
+$$
+
+
+Figure 10: Applying the model to the panda nature reserve case
+
+which depicts a positive and appropriate value for the true economic cost of Wolong Nature Reserve.
+
+# 8. Discussion and Conclusion
+
+From the analysis and results of the case studies, we can conclude that this model can effectively estimate the value of ecological services from different types of land use projects. By setting the coefficients appropriately and applying the model to various land use projects, the true economic costs of these projects can be understood more accurately, with detailed and appropriate consideration of ecosystem services.
+
+This model provides a way to easily view the threshold of value creation and maximization. Despite the strengths the model shows in the case studies, it also has limitations. To accurately represent the value of the project in specific cases, the coefficients in the model have to be chosen carefully to best illustrate the type and impact of the project. Also, this model relies on the assumption that the value and cost generated across the area are uniform, which is not necessarily true in reality. To improve the model, we suggest dividing the area into sections by land type and using different coefficients for each, so that the model becomes a summation over many small sections.
+
+# References
+
+Nicol-André Berdellé. Rethinking landscapes. July 2011.
+Millennium Assessment Board. Millennium ecosystem assessment. Washington, DC: New Island, 13, 2005.
+Peter Brimblecombe. Attitudes and responses towards air pollution in medieval England. Journal of the Air Pollution Control Association, 26(10):941-945, 1976.
+Charlie Campbell and Baotou. China's greening of the vast Kubuqi desert is a model for land restoration projects everywhere. TIME, July 2017.
+James S. Cramer. The origins and development of the logit model. Logit models from economics and other fields, pages 149-158, 2003.
+IUCN. How much is an ecosystem worth?: assessing the economic value of conservation. World Bank, 2005.
+Jianguo Liu. Divergent responses of sympatric species to livestock encroachment at fine spatiotemporal scales. Biological Conservation, May 2017.
+Jianguo Liu, Marc Linderman, Zhiyun Ouyang, Li An, Jian Yang, and Hemin Zhang. Ecological degradation in protected areas: the case of Wolong Nature Reserve for giant pandas. Science, 292(5514):98-101, 2001.
+Yu Yu. UNEP: Kubuqi, our desert, our home. Phoenix Business, June 2017.
\ No newline at end of file
diff --git a/MCM/2019/E/1902917/1902917.md b/MCM/2019/E/1902917/1902917.md
new file mode 100644
index 0000000000000000000000000000000000000000..336121f349d582ca435ee7d1df67dd3f11c9eb5d
--- /dev/null
+++ b/MCM/2019/E/1902917/1902917.md
@@ -0,0 +1,629 @@
+# 2019
+
+# MCM/ICM Summary Sheet
+
+# Take Environmental Effect into Consideration: Cost Benefit Analysis on Land
+
+# Use Project
+
+Ecosystem services should be included in the cost benefit analysis of land use development projects in order to assess their true values comprehensively. We establish a short run model and a long run model of ecosystem service evaluation, and incorporate the environmental cost into the cost benefit analysis. The models are applied to land development projects of varying sizes for examination and analysis.
+
+In the short run model, we adopt the replacement cost method to measure the current impact of the land development project on the environment. After obtaining the monetary value of the environmental cost, we add it to the cost benefit analysis.
+
+In the long run model, we extend the meaning of environmental cost from the economic value of the short run replacement cost to three aspects: the impact on the quality of human life, the impact on the sustainable development of future generations, and the impact on the whole natural ecosystem. Since it is difficult to measure long run environmental cost in monetary units, we select five corresponding environmental indicators and use the Analytic Hierarchy Process (AHP) to analyze the cost and benefit of land development projects.
+
+The short run model is then applied to the construction project of a small paper mill in China, and we find that the project is infeasible once environmental cost is added. Likewise, the long run model is applied to the electric power development project in the Tennessee Valley in the United States. This time we find that the total benefit of the construction project of thermal power stations is still greater than the total cost even after adjusting and optimizing the model, which indicates that the construction of the Tennessee thermal power stations has large real value under comprehensive evaluation. This also confirms the validity of our model.
+
+We analyze the sensitivity of the model, taking Tennessee as an example to demonstrate the results of changing the degree of emphasis on economy and environment. Finally, we evaluate the model and give some suggestions for improvement.
+
+# Contents
+
+1 Introduction
+
+1.1 Background
+1.2 Our Work
+
+2 Fundamental Assumptions
+3 Symbol Description
+4 Ecological Services Valuation Model
+
+4.1 Basic Model in Short-term Perspective
+
+4.1.1 Cost Analysis
+4.1.2 Benefit Analysis
+
+4.2 Improved Model in Long-term Perspective
+
+4.2.1 Cost Analysis
+4.2.2 Benefit Analysis
+
+4.3 Cost benefit analysis
+
+5 Case Studies
+
+5.1 Short run empirical analysis: a story of a paper mill
+
+5.1.1 Our assumptions
+5.1.2 Analysis by a short run model
+
+5.2 Long run empirical analysis: the electric power development in the Tennessee Valley
+
+5.2.1 Background of the electric power development in the Tennessee Valley
+5.2.2 Selection of project
+5.2.3 Analysis by a long run model
+5.2.4 Initial matrix optimization
+
+6 Sensitivity Analysis
+7 Implications on Land Use Project Planners and Managers
+8 Evaluation of the Model
+
+8.1 Strengths
+8.2 Weaknesses and improvements
+
+9 Conclusion
+
+References
+
+# 1 Introduction
+
+# 1.1 Background
+
+Ecosystem services are the conditions and processes through which natural ecosystems and the species that make them up, sustain and fulfil human life, which provide a series of goods and services that people perceive to be important for production, protection and maintenance and so on.[1] Therefore, ecosystem services play an important role in maintaining the coordinated development of social economy and ecological environment. However, with the rapid development of urbanization, economic development and population growth have brought great pressure on the ecological environment. In particular, changes in land use caused by human activities affect ecosystems, having the most immediate negative impact on ecosystem services.
+
+However, most economic theories on which land use decisions depend ignore the impact of land use projects on ecosystem services because of those services' externalities. As a result, the environmental problems that stem from this neglect of ecosystem services have become more and more serious, and the maintenance of ecosystem services has gradually become a hot topic in the world today.
+
+There are various methods to evaluate ecological services, but most of them only assess the dynamic changes of ecological services without considering future impacts such as the damage caused by environmental degradation. In addition, these models are usually applied to analyze the ecological services within an area, but rarely to analyze the influence of a specific project. For example, the InVEST model can simulate the dynamic changes of carbon reserves, crop yield, habitat quality and other ecosystem services of the terrestrial ecosystem according to spatio-temporal changes in the land use map, but it cannot evaluate ecological services in the long run.
+
+So an ecological services model is required that can evaluate environmental cost on different time scales and can be applied to assess the influence of a specific project.
+
+# 1.2 Our Work
+
+We are asked to build an ecological services model to evaluate the environmental cost so as to measure the true economic cost of land use development projects, and then to take the environmental cost into the cost benefit analysis to assess land use projects of varying sizes and on different time scales.
+
+In order to solve the problem, we did the following work:
+
+- We give several fundamental assumptions to simplify the model and define symbols as different indexes.
+
+- In the short run, we use the replacement cost method to measure the monetary value of environmental costs in the current period according to the virtual treatment costs of solid, liquid and gas pollutants; build the short run cost benefit analysis model by substituting the monetary value of environmental cost that we have defined before into the cost-benefit analysis of land development projects.
+- In the long run, we consider the long run environmental cost of environmental degradation from three perspectives of life quality, sustainable development and ecosystem; build the long run cost and benefit analysis model by ranking and scoring each cost and benefit with the Analytic Hierarchy Process (AHP).
+- Apply the short run model to the cost benefit analysis of a construction project of a small paper mill.
+- Apply the long run model to the cost benefit analysis of the large project of Tennessee thermal power stations development and evaluate the validity of the model based on the reality.
+- Conduct the sensitivity analysis of weights setting in AHP; evaluate the model and give some suggestions for improvement.
+
+# 2 Fundamental Assumptions
+
+- Whether the model is long run or short run depends on whether environmental cost is measured in the long run or the short run. The short run model only measures the cost of environmental damage in the current period, while the long run model also considers the cost of damage to sustainable development and the ecosystem after environmental degradation over time.
+- The construction time of the project is disregarded.
+- Project builders maximize profits and disregard the positive environmental benefits brought by land use projects.
+- Project evaluators attach equal importance to environmental cost and economic cost.
+- People can make an assessment of the importance of various aspects of the environment in pairs.
+
+# 3 Symbol Description
+
+Table 1: Notations
+
+| Symbols | Definition | Unit |
| --- | --- | --- |
| \(C_S\) | Short run cost | \(\$\) |
| \(C_L\) | Long run cost | \(\$\) |
| \(PC\) | Economic cost | \(\$\) |
| \(EC\) | Environmental cost | \(\$\) |
| \(P\) | The amount of pollution generated by businesses | |
| \(LP\) | The amount of pollution that can be absorbed by the self-purification capability of nature | |
| \(LP_S\) | Self-purification capability of soil | \(\mathrm{m}^2\) |
| \(LP_W\) | Self-purification capability of water | \(\mathrm{m}^3\) |
| \(LP_A\) | Self-purification capability of atmosphere | \(\mathrm{m}^3\) |
| \(P_S\) | The amount of pollution of soil | \(\mathrm{m}^2\) |
| \(P_W\) | The amount of pollution of water | \(\mathrm{m}^3\) |
| \(P_A\) | The amount of pollution in atmosphere | \(\mathrm{m}^3\) |
| \(C_{RS}\) | Replacement/restoration cost of solid waste pollution per square meter | \(\$/\mathrm{m}^2\) |
| \(C_{RW}\) | Replacement/restoration cost of water pollution per cubic meter | \(\$/\mathrm{m}^3\) |
| \(C_{RA}\) | Replacement/restoration cost of air pollution per cubic meter | \(\$/\mathrm{m}^3\) |
+
+Here we define the main parameters; the specific values of those parameters will be given later.
+
+# 4 Ecological Services Valuation Model
+
+We build an ecological services valuation model especially considering the environmental cost to perform cost benefit analyses of land use development projects on different time scales.
+
+# 4.1 Basic Model in Short-term Perspective
+
+In general, we establish the basic model by incorporating the environmental cost into the cost benefit analysis, provisionally without the influence of time.
+
+# 4.1.1 Cost Analysis
+
+In our model, the cost of a land use development project includes economic cost and environmental cost, which is:
+
+$$
+C_S = PC + EC
+$$
+
+Without considering the impact or the changes of ecosystem services, the benefit cost analysis of land use development projects in most cases only takes economic cost into account, which is usually considered to be the construction cost $PC$.
+
+Now we build an ecological services valuation model to evaluate the environmental cost of land use development projects. There are many methods to measure environmental cost in previous studies. In this model, we adopt the replacement cost method to assess the capacity of ecosystem services, measuring the monetary cost of reverting the ecosystem to its original standard after it is polluted. The replacement cost is calculated by multiplying the amount of pollution discharged by the construction project beyond the environmental carrying capacity by the treatment cost per unit of pollution. In addition, we divide the environmental pollution caused by land use projects into solid waste pollution, water pollution and air pollution, and calculate the replacement cost of each kind of pollution respectively. The environmental cost is:
+
+$$
+EC = f(P - LP) = C_{RS} \left(P_S - LP_S\right) + C_{RW} \left(P_W - LP_W\right) + C_{RA} \left(P_A - LP_A\right)
+$$
+
+Therefore, the total cost of a land use development project is:
+
+$$
+C_S = PC + C_{RS} \left(P_S - LP_S\right) + C_{RW} \left(P_W - LP_W\right) + C_{RA} \left(P_A - LP_A\right)
+$$
+
+Under the above premise, this general cost analysis can also simply be considered the short run cost analysis.
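The short run cost formula can be sketched in code. The numbers below are hypothetical, and we add one assumption not stated explicitly in the text: when discharge stays within nature's self-purification capacity, the replacement cost of that pollution type is zero.

```python
def environmental_cost(p, lp, c_r):
    """Replacement cost of one pollution type: the discharge beyond nature's
    self-purification capacity LP, priced at the per-unit restoration cost C_R.
    Assumed zero when discharge stays within capacity."""
    return max(p - lp, 0) * c_r

def total_short_run_cost(pc, pollution):
    """C_S = PC + EC, where pollution is a list of (P, LP, C_R) triples
    for solid waste, water and air."""
    return pc + sum(environmental_cost(p, lp, c) for p, lp, c in pollution)

# Hypothetical figures: construction cost 1000 and three pollution types
cs = total_short_run_cost(1000, [(10, 4, 2.0), (5, 5, 3.0), (8, 2, 1.0)])
print(cs)  # 1000 + 6*2.0 + 0 + 6*1.0 = 1018.0
```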
+
+To demonstrate our analysis of the short run cost, we use the diagram in Figure 1 to show the relationships among its factors.
+
+
+Figure 1: Schematic Diagram of Short Run Cost
+
+# 4.1.2 Benefit Analysis
+
+We divide the benefit of a land use development project into economic benefit EB and social benefit SB. Then we have:
+
+$$
+B_S = EB + SB
+$$
+
+We use the aggregated present discounted values of the project's profits in its duration to measure its economic benefit. The formula of economic benefit is:
+
+$$
+EB = \sum_{t=0}^{T} \frac{B_t}{(1 + r)^t}
+$$
+
+Where:
+
+$T$ the duration of the project
+
+$r$ the social discount rate
+
+$B_t$ the profit of the project in year $t$
+
+Likewise, we suppose that social benefit can be divided into two aspects. One is $\mathrm{B}_{21}$ , the monetary value of the increase of employment rate because of the land use project. The other is $\mathrm{B}_{22}$ , the monetary value of the improvement of city image resulted from the project.
+
+So, the total benefit of a land use development project is:
+
+$$
+B_S = \sum_{t=0}^{T} \frac{B_t}{(1 + r)^t} + B_{21} + B_{22}
+$$
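The benefit formula can be sketched as follows; the profit stream, discount rate and the values of $B_{21}$ and $B_{22}$ in the example are hypothetical.

```python
def present_value(profits, r):
    """EB: discounted sum of yearly profits B_t at social discount rate r."""
    return sum(b / (1 + r) ** t for t, b in enumerate(profits))

def total_benefit(profits, r, b21, b22):
    """B_S = EB + B21 + B22 (employment and city-image benefits)."""
    return present_value(profits, r) + b21 + b22

# Hypothetical two-year project: profit 100 now and 110 next year, r = 10%
eb = present_value([100, 110], 0.1)          # 100 + 110/1.1 = 200
bs = total_benefit([100, 110], 0.1, 20, 10)  # 200 + 20 + 10 = 230
```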
+
+Similarly, we can consider this benefit analysis as the short run benefit analysis, and the relationships among its factors can be viewed in Figure 2.
+
+
+Figure 2: Schematic Diagram of Short Run Benefit
+
+# 4.2 Improved Model in Long-term Perspective
+
+Now we already have a model that does not consider the time scale, so we will improve it by taking the influence of time into account.
+
+# 4.2.1 Cost Analysis
+
+Based on the theory of the economic analysis of environmental pollution cost (Xu, 1995)[2], and considering the impact of environmental degradation in the long run, the concept of environmental pollution cost expands in three respects compared with the original health-economic understanding of pollution cost, which was merely the economic measurement of the short run replacement cost. It expands from the narrow perspective of economic value to extensive attention to humans' quality of life, the sustainable development of future generations and the whole natural ecosystem.
+
+Therefore, environmental pollution cost can be divided into three categories. These three categories of costs can be defined as follows:
+
+- LWC the pollution cost in respect of humans' quality of life, which reflects the impact of pollutants on the influential elements of human quality of life.
+- SDC pollution cost in respect of sustainable development, which reflects the impact of the destruction of environmental elements caused by pollution on the sustainable development ability of future generations. These environmental elements include water, climate, soil and creatures.
+- ELC pollution cost in respect of ecology, which reflects the destruction of biodiversity caused by pollution.
+
+Therefore, in the long-term perspective, the environmental pollution cost $C_L$ actually equals the sum of the three categories of environmental pollution costs:
+
+$$
+C_L = \mathrm{LWC} + \mathrm{SDC} + \mathrm{ELC}
+$$
+
+As these environmental costs are difficult to measure in monetary terms in the long run, we have to use another method to evaluate them. Next, we introduce the Analytic Hierarchy Process (AHP): we measure the long run environmental cost in the alternatives layer at the bottom, then measure the environmental cost and economic cost comprehensively in the criteria layer in the middle, and finally obtain the overall cost evaluation in the goal layer at the top.
+
+# 1. Alternatives layer
+
+According to the mathematical models in economics, ecology, and the environment (Hritonenko, Yatsenko, 2006)[3], we evaluate the long run environmental cost of a land development project from the three categories of environmental pollution costs, using the variation of five indexes: biodiversity, air quality, soil quality, water quality, and humans' feeling of environmental health.
+
+# (1) Biodiversity
+
+The Simpson diversity index (SDI) is used to measure biodiversity.
+
+$$
+\mathrm{SDI} = 1 - \frac{1}{N(N - 1)} \sum_{i} N_i \left(N_i - 1\right)
+$$
+
+Where:
+
+$N_i$ the number of entities belonging to the $i$-th type
+
+$N$ the total number of entities
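A minimal implementation of the SDI formula; returning 0 for communities with fewer than two entities is our own convention, since the formula is undefined there.

```python
from collections import Counter

def simpson_diversity(entities):
    """SDI = 1 - sum_i N_i (N_i - 1) / (N (N - 1))."""
    counts = Counter(entities)
    n = sum(counts.values())
    if n < 2:
        return 0.0  # convention: too few entities to measure diversity
    return 1 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Two species with two individuals each: SDI = 1 - (2 + 2) / 12 = 2/3
print(simpson_diversity(["panda", "panda", "takin", "takin"]))
```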
+
+# (2) Air quality
+
+We use the air quality index (AQI) to measure the quality of the air. The AQI is determined by the concentrations of five pollutants in the air: ground-level $\mathrm{O}_3$, particulate matter, CO, $\mathrm{SO}_2$ and $\mathrm{NO}_2$, labeled with numbers from 1 to 5. $P_i$ represents the air quality sub-index for pollutant $i$:
+
+$$
+P_i = \frac{C - C_l}{C_h - C_l} \left(P_h - P_l\right) + P_l
+$$
+
+Where:
+
+$C$ the pollutant concentration
+
+$C_l$ the breakpoint concentration just below $C$
+
+$C_h$ the breakpoint concentration just above $C$
+
+$P_l$ the index breakpoint corresponding to $C_l$
+
+$P_h$ the index breakpoint corresponding to $C_h$
+
+$$
+\mathrm{AQI} = \max \{ P_i \mid 1 \le i \le 5 \}
+$$
+
+Finally, we take the maximum of $P_i$ over the five pollutants to obtain the AQI.
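The sub-index interpolation and the max rule can be sketched together. The breakpoint values in the example are hypothetical, not taken from any official AQI table.

```python
def sub_index(c, c_lo, c_hi, p_lo, p_hi):
    """P_i: linear interpolation of concentration C between breakpoints
    (C_l, C_h) onto the index breakpoints (P_l, P_h)."""
    return (c - c_lo) / (c_hi - c_lo) * (p_hi - p_lo) + p_lo

def aqi(sub_indices):
    """Overall AQI is the worst (largest) sub-index among the pollutants."""
    return max(sub_indices)

# Hypothetical bracket: concentration 60 in [50, 150] maps into index [100, 200]
p = sub_index(60, 50, 150, 100, 200)  # 110.0
overall = aqi([42, p, 80, 55, 30])    # the worst pollutant dominates
```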
+
+# (3) Soil quality
+
+As the Soil Quality Index (SQI) is too complicated for our model, we adopt the Normalized Difference Vegetation Index (NDVI) to measure soil quality, represented by the impact of human activities on vegetation cover and plant productivity. $R$ and $NIR$ stand for the spectral reflectance measurements acquired in the red (visible) and near-infrared regions[4]; they separate vegetation from water and soil. The NDVI is:
+
+$$
+\mathrm{NDVI} = \frac{NIR - R}{NIR + R}
+$$
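As a one-line sketch of the NDVI formula (the reflectance values passed in below are illustrative only):

```python
def ndvi(nir, red):
    """NDVI from near-infrared and red reflectance; ranges from -1 to 1,
    with dense vegetation near 1 and bare soil or water near or below 0."""
    return (nir - red) / (nir + red)

# Illustrative reflectances: healthy vegetation reflects NIR strongly
print(ndvi(0.5, 0.1))  # 0.4 / 0.6, about 0.667
```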
+
+# (4) Water quality
+
+Here we use the Ross Water Quality Index (WQI) to measure water quality. It selects four of the 12 routinely monitored parameters as evaluation parameters, grades each of them with a score, and gives the four parameters different weight coefficients. The WQI is the weighted mean of these four parameters:
+
+$$
+\mathrm{WQI} = \frac{\sum \text{grading score}}{\sum \text{weight coefficient}}
+$$
+
+# (5) Environmental Health Indicator (EHI)
+
+Due to environmental pollution, people may feel uncomfortable from inhaling excessive waste gas, eating crops with excessive heavy metal content, drinking polluted water and so on, which lowers humans' quality of life. Hence, we use the Environmental Health Indicator (EHI) as the indicator of the human body's feeling of environmental health. It is a quantitative, dimensionless index used to quantify human health or human feelings related to the environment, with a value varying from 0 to 100: a higher value means a better environment or that humans feel more comfortable.
+
+The Environmental Performance Index (EPI) database offers reference values of the above indicators for national projects, and local governments' websites offer the required data for local projects.
+
+Now we build a matrix of indexes to represent the relative importance of SDI, AQI, NDVI, WQI and EHI. These weights may change with different emphases of comparison: different projects greatly affect one or several indexes due to different construction processes, and the attention that residents near the project pay to the environmental health of these aspects is also crucial to our consideration.
+
+In our model, the selection of indexes in matrix will be discussed. And the sensitivity analysis will be carried out in the next section.
+
+We establish the initial Comparative Matrix
+
+$$
+\mathbf{J} = \left(a_{ij}\right)_{5 \times 5}
+$$
+
+The value of $a_{ij}$ indicates the relative importance of index $i$ to index $j$ .
+
+Now we will assign a reasonable value to $\mathbf{J}$. We propose an initial estimation method: the comparative importance of two elements is estimated according to the ratio of the numbers of literature results on biodiversity, air quality, soil quality, water quality and environmental health. Searching these keywords on Google Scholar, we obtained 5,540,000 results for "biodiversity", 3,180,000 for "air quality", 5,130,000 for "soil quality", 2,200,000 for "water quality" and 6,410,000 for "environmental health indicator". So we estimate the relative importance of each index in the matrix:
+
+$$
+\mathbf{J}_0 = \left[ \begin{array}{ccccc} 1 & 5/3 & 1 & 5/2 & 5/6 \\ 3/5 & 1 & 3/5 & 3/2 & 1/2 \\ 1 & 5/3 & 1 & 5/2 & 5/6 \\ 2/5 & 2/3 & 2/5 & 1 & 1/3 \\ 6/5 & 2 & 6/5 & 3 & 1 \end{array} \right]
+$$
+
+Obviously, $\mathrm{J_0}$ is a consistent matrix, and its principal eigenvalue is 5. The normalized eigenvector is $(\frac{5}{21},\frac{1}{7},\frac{5}{21},\frac{2}{21},\frac{2}{7})$. So $\mathrm{W_1 = (\frac{5}{21},\frac{1}{7},\frac{5}{21},\frac{2}{21},\frac{2}{7})}$ is taken as the weight vector among biodiversity, air quality, soil quality, water quality and the environmental health indicator.
+
+However, in the general sense, we need a consistency test of the comparative matrix. Therefore, consistency index (CI) and random consistency index (RI) are introduced:
+
+$$
+\mathrm{CI} = \frac{\lambda_{\max} - n}{n - 1}
+$$
+
+where $n$ is the order of the comparative matrix and $\lambda_{\max}$ is its principal eigenvalue. RI can be obtained from the standard lookup table.
+
+Then calculate the consistency ratio:
+
+$$
+\mathrm{CR} = \frac{\mathrm{CI}}{\mathrm{RI}}
+$$
+
+If $\mathrm{CR} < 0.1$, the degree of inconsistency is considered to be within the acceptable range, so the matrix passes the consistency test. If it does not pass the test, we modify $\mathbf{J}$; the specific modification varies with the size and geographical location of the construction project.
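The whole consistency check can be reproduced exactly with rational arithmetic. The sketch below encodes $\mathbf{J}_0$, recovers the weight vector from a normalized column (valid because the matrix is consistent), and confirms that the principal eigenvalue is 5, so $\mathrm{CI} = \mathrm{CR} = 0$. The value $\mathrm{RI} = 1.12$ for a $5 \times 5$ matrix is taken from Saaty's standard table.

```python
from fractions import Fraction as F

# Initial comparative matrix J0 built from the literature-count ratios
J0 = [
    [F(1),    F(5, 3), F(1),    F(5, 2), F(5, 6)],
    [F(3, 5), F(1),    F(3, 5), F(3, 2), F(1, 2)],
    [F(1),    F(5, 3), F(1),    F(5, 2), F(5, 6)],
    [F(2, 5), F(2, 3), F(2, 5), F(1),    F(1, 3)],
    [F(6, 5), F(2),    F(6, 5), F(3),    F(1)],
]
n = len(J0)

# For a perfectly consistent matrix, any normalized column is the priority vector
col_sum = sum(row[0] for row in J0)
w = [row[0] / col_sum for row in J0]

# Principal eigenvalue estimate: average of (J0 w)_i / w_i
Jw = [sum(J0[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Jw[i] / w[i] for i in range(n)) / n

CI = (lam_max - n) / (n - 1)
RI = F(112, 100)  # Saaty's random consistency index for n = 5
CR = CI / RI
# w = (5/21, 1/7, 5/21, 2/21, 2/7); lam_max = 5; CI = CR = 0, test passed
```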
+
+# 2. Criteria layer
+
+Similarly, we examine the weights of environmental cost and economic cost respectively in the total cost. Since environmental cost is taken into account, we believe that environmental cost is as important as economic cost for the examiner. Therefore, we have the weight vector $\mathbf{W}_2$ between economic cost and environmental cost:
+
+$$
+\mathbf{W}_2 = \left(\frac{1}{2}, \frac{1}{2}\right)^T
+$$
+
+# 3. Goal layer
+
+Now we calculate the combination weight vector:
+
+$$
+\mathbf{W}_3 = (1, 0, 0, 0, 0, 0)^T, \quad \mathbf{W}_4 = \left(0, \mathbf{W}_1^T\right)^T, \quad \mathbf{W}_5 = \left(\mathbf{W}_3, \mathbf{W}_4\right)
+$$
+
+$$
+\mathbf{W} = \mathbf{W}_5 \mathbf{W}_2
+$$
+
+$\mathbf{W}$ is the combination weight vector of the cost of each element. The weight vector obtained from the initial matrix is $\mathbf{W}_0 = \left( \frac{1}{2}, \frac{5}{42}, \frac{1}{14}, \frac{5}{42}, \frac{1}{21}, \frac{1}{7} \right)^T$.
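The stated $\mathbf{W}_0$ can be reproduced by stacking $\mathbf{W}_3$ and $\mathbf{W}_4$ as the two columns of $\mathbf{W}_5$ and multiplying by the criteria weights $\mathbf{W}_2$, as this short check shows:

```python
from fractions import Fraction as F

W1 = [F(5, 21), F(1, 7), F(5, 21), F(2, 21), F(2, 7)]  # index weights
W2 = [F(1, 2), F(1, 2)]                                # (economic, environmental)
W3 = [F(1), F(0), F(0), F(0), F(0), F(0)]              # economic cost column
W4 = [F(0)] + W1                                       # environmental cost column

# W5 is the 6x2 matrix with columns W3 and W4; W = W5 * W2
W5 = list(zip(W3, W4))
W = [a * W2[0] + b * W2[1] for a, b in W5]
# W equals (1/2, 5/42, 1/14, 5/42, 1/21, 1/7), matching W0 in the text
```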
+
+# 4. Ranking standards evaluation of indexes
+
+Since we need to make an absolute comparison between cost and benefit, we give the ranking standards of these environmental indexes in Table 2:
+
+Table 2: Ranking standards and scores of the changes in value of different indexes
+
+| Ranking (Score) | Decrease of biodiversity | Reduction of air quality | Reduction of soil quality | Reduction of water quality | Impact on human health | Capital input |
| --- | --- | --- | --- | --- | --- | --- |
| 1 (9) | above 0.5 | above 0.5 | above 0.5 | above 0.5 | above 50 | above 2AC |
| 2 (7) | 0.3~0.5 | 0.3~0.5 | 0.3~0.5 | 0.3~0.5 | 30~50 | 1.5AC~2AC |
| 3 (5) | 0.2~0.3 | 0.15~0.3 | 0.2~0.3 | 0.15~0.3 | 20~30 | AC~1.5AC |
| 4 (3) | 0.1~0.2 | 0.05~0.15 | 0.1~0.2 | 0.05~0.15 | 10~20 | 0.7AC~AC |
| 5 (1) | under 0.1 | under 0.05 | under 0.1 | under 0.05 | under 10 | under 0.7AC |
+
+When measuring the economic cost, we select the average economic cost (AC) of the industry which the project belongs to as the benchmark to determine the scoring standard. We evaluate the project's rank according to the ratio of the project cost to AC.
+
+Thus, we obtain rank vectors of the five environmental cost indexes and economic cost index. And the value of total cost can be obtained from inner product of the rank vectors and combination weight vector.
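Given the combination weight vector and a rank-score vector, the total cost score is a plain inner product; the rank scores in the example below are hypothetical.

```python
def weighted_total(scores, weights):
    """Inner product of the rank-score vector and the combination weight vector."""
    return sum(s * w for s, w in zip(scores, weights))

# Combination weights W0 from the initial matrix, as floats
W0 = [1/2, 5/42, 1/14, 5/42, 1/21, 1/7]

# Hypothetical rank scores: economic cost first, then the five environmental indexes
total_cost_score = weighted_total([5, 9, 7, 5, 3, 1], W0)  # 104/21, about 4.95
```

Because the weights sum to 1, a project scoring 5 on every index gets a weighted total of exactly 5.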
+
+To demonstrate our model in the long-term perspective, we give Figure 3 to illustrate our AHP process.
+
+
+Figure 3: Schematic Diagram of Long Run Cost
+
+# 4.2.2 Benefit Analysis
+
Our long run benefit analysis is the same as the short run benefit analysis above. So the total benefit of a land use development project is:
+
+$$
B_s = \sum_{t=0}^{T} \frac{B_t}{(1 + r)^t} + B_{21} + B_{22}
+$$
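A minimal sketch of this discounted-benefit calculation (the function name and the illustrative inputs are ours, not project data):

```python
def total_benefit(annual_benefits, r, b21=0.0, b22=0.0):
    """Discount the stream of annual benefits B_t (t = 0..T) at rate r,
    then add the long-run benefit terms B_21 and B_22."""
    pv = sum(b / (1 + r) ** t for t, b in enumerate(annual_benefits))
    return pv + b21 + b22

# Illustrative numbers only: three years of a constant benefit of 100 at r = 0.
print(total_benefit([100, 100, 100], r=0.0))  # 300.0
```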
+
+Then we will evaluate the long run benefit of a land use development project's rank and score in the same process as the long run cost analysis.
+
+# 4.3 Cost benefit analysis
+
After obtaining the total benefit of the land development project measured by monetary value, we perform the cost benefit analysis by ranking the benefit with the same process used for the economic cost above, so that cost and benefit become comparable: the score is assigned by applying the ranking standards to the ratio of the project's total benefit (Bs) to the industry's average economic cost (AC).
+
+Finally, we perform the cost benefit analysis by comparing the weighted score of total cost and the weighted score of total benefit, so that the feasibility of the land use development project can be evaluated.
+
+# 5 Case Studies
+
+Since we are required to perform a cost benefit analysis of land use development projects of varying sizes and time, we will combine normative analysis and empirical analysis with our model to do cost benefit analyses of two projects: one is a short run model application to a small community-based project, the other is a long run model application to a large national project.
+
+# 5.1 Short run empirical analysis: a story of a paper mill
+
+A private entrepreneur in a small town in China who is attracted by the large demand for paper in the town's schools wants to invest in a small paper mill nearby. The private entrepreneur submits the construction program to the land planning bureau. Now the land planning bureau's officer should measure the cost and benefit of the project to decide whether the project should be granted.
+
+# 5.1.1 Our assumptions
+
- Due to lack of funds, the private entrepreneur plans to use simple papermaking equipment with a 20-year service life.
- The annual paper demand of the town is 10 tons, and the market price of paper is 100,000 yuan per ton.
- The private entrepreneur does not plan to expand production and chooses to retire when the equipment ages.
- The paper mill will not decrease the market price, so it brings no additional benefit to society.
- The discount rate is equal to the rate of inflation.
+
+# 5.1.2 Analysis by a short run model
+
After consulting the data, we learn that the construction cost of a small paper mill is about 2 million yuan and the average production cost of paper is about 70,000 yuan per ton. We can calculate that the economic benefit is 6 million yuan. Since the paper mill will not decrease the market price, it can be considered to have no other benefit.
+
+The officer is preparing to analyze the problem by using a short run model.
+
First, calculate the environmental cost. A small paper mill's pollution is mainly water pollution, and the emission of COD accounts for more than $74\%$ of the total emission [5]. We consider that the small paper mill plans to discharge 800 tons of sewage annually, and we assume that the environment can purify 100 tons of sewage through its self-purification capability.
+
+We only take the replacement cost of COD in water pollution into account (the unit treatment cost of COD is 800 yuan per ton).
+
+$$
C_{RW} \times (P_W - LP_W) \times 20 = 800 \times (800 - 100) \times 20 = 11.2 \text{ million yuan}
+$$
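The arithmetic above, together with the 6-million-yuan economic benefit computed earlier in this section, can be checked in a few lines (all figures in yuan, taken from the case description):

```python
# Replacement cost of COD over the 20-year service life (yuan).
c_rw = 800             # unit treatment cost of COD, yuan per ton
p_w, lp_w = 800, 100   # annual sewage discharged / self-purified, tons
years = 20
replacement_cost = c_rw * (p_w - lp_w) * years
print(replacement_cost)             # 11200000 (11.2 million yuan)

# Economic benefit: (price - production cost) * annual demand * years.
benefit = (100_000 - 70_000) * 10 * years
print(benefit)                      # 6000000 (6 million yuan)
print(replacement_cost > benefit)   # True: cost exceeds benefit
```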
+
It should be pointed out that the advanced paper mills of developed countries in Europe and America discharge only about 10 cubic meters of water per ton of paper produced, and large and medium-sized paper mills need about 20 cubic meters. However, there are a large number of small nonstandard paper mills in China, which need about 1 million cubic meters per ton of paper [6].
+
We can see that the cost of the private paper mill is far greater than its benefit when only the replacement cost of COD is taken into account. The addition of the short run environmental cost alone is enough to make us reject the construction project, not to mention the long run environmental cost.
+
Therefore, the officer in the land planning bureau must reject the private entrepreneur's request.
+
+# 5.2 Long run empirical analysis: the electric power development in the Tennessee Valley
+
# 5.2.1 Background of the electric power development in the Tennessee Valley
+
The Tennessee Valley is in the southeast of the United States. Its main stem, the Tennessee River, is the major river in the southeast and the fifth longest river in the United States. During the Great Depression, the Tennessee Valley was a typical representative of rural depression and poverty. As part of the New Deal, President Franklin Delano Roosevelt signed the Tennessee Valley Authority Act in 1933, creating the Tennessee Valley Authority (TVA), which began to develop the Tennessee Valley comprehensively. TVA's electric power development has passed through hydropower, thermal power, nuclear power and new energy generation. With the development of electric power, the economy and society of the Tennessee Valley developed rapidly as well.
+
+# 5.2.2 Selection of project
+
Among the construction projects of hydropower plants, thermal power plants and nuclear power plants in the process of electric power development, we choose to analyze the construction projects of the thermal power plants in the Tennessee Valley from 1940 to 1965. The reasons are as follows:
+
+- The thermal power projects have a wider range of construction, so that the projects can be used for our research on large-scale land development projects.
- Most of the thermal power projects in Tennessee were constructed between 1940 and 1965 [7]. So they have had a dominant position in the electric power development process for a long time, with the electricity-generating capacity once accounting for more than $80\%$ nationwide.
+- According to the data provided by the North American Electric Reliability Corporation[8], the average service life of thermal power stations is 50 years, and the long run economic benefit is estimable.
- The construction of large thermal power plants produces abundant electric power for the benefit of society, but it also requires a great deal of primary energy and water resources and the building of tall building groups. While providing local residents with a number of employment opportunities, the plants also discharge large amounts of waste gas, waste water and waste residue, which necessarily has a certain impact on the environment. Therefore, it is necessary to incorporate environmental cost into the cost benefit analysis of this project.
+
+# 5.2.3 Analysis by a long run model
+
+# 1. Data sources
+
- The data of Tennessee ecological indicators can be obtained from the official website of the Tennessee government [9], and the environmental cost can be obtained from the changes of those indicators.
- According to the work of Pankratz and Wilson (1988) [10], we obtained the estimated capital cost of constructing a 2 × 200,000 kW thermal power plant. On this basis, the average construction cost of thermal power stations can be calculated, converted into 1982 dollar prices.
- According to the article of Sun (2010) [11], we obtain the overall construction capital of the Tennessee thermal power stations (measured in constant 1982 dollars; the specific data are shown in the appendix, as are the data mentioned below), the unit cost and unit income of power generation, and the total annual power generation, from which we can get the economic cost and economic benefit of the construction project of the Tennessee thermal power stations. In addition, the total industry revenue increase and electricity price reduction in the Tennessee Valley can also be obtained, so that we can calculate the total social benefit generated by the construction of the thermal power stations.
+
+# 2. Calculation
+
+# (1) Cost:
+
After a series of calculations in our long run model on the data above, the variation values of the different indexes are obtained. According to Table 2, we assign scores to the variation values of the different indexes.
+
Table 3: The ranks and scores of the changes of different indexes according to Table 2
+
+| Cost Index | Variation Value | Ranking (Score) |
| Decrease of biodiversity | 0.2 | 3(5) |
| Reduction of air quality | 0.4 | 2(7) |
| Reduction of soil quality | 0.1 | 5(1) |
| Reduction of water quality | 0.1 | 4(3) |
| Impact on human health | 20 | 3(5) |
| Capital input | $4,120 million | 3(5) |
+
The average cost (AC) is $3,000 million based on the preceding calculation.
+
$\therefore p = (5, 5, 7, 1, 3, 5)$, where $p$ is the rank vector (its components ordered as capital input, biodiversity, air, soil, water and health, matching the order of $w_0$).
+
Applying the weight vector $w_0$ corresponding to $\mathbf{J}_0$ in the long run model, we obtain the cost score $G_1$:
+
$$
G_1 = p w_0 = \frac{32}{7}
$$
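The inner product can be verified directly (component order follows $w_0$: capital input first, then the five environmental indexes):

```python
from fractions import Fraction as F

# Scores: capital, biodiversity, air, soil, water, health (from Table 3).
p = [5, 5, 7, 1, 3, 5]
# Combination weight vector W0 from the initial matrix.
w0 = [F(1, 2), F(5, 42), F(1, 14), F(5, 42), F(1, 21), F(1, 7)]

g1 = sum(pi * wi for pi, wi in zip(p, w0))
print(g1)  # 32/7
```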
+
+# (2) Benefit:
+
a. The economic benefit of the project of the Tennessee thermal power stations: (electricity price − generating cost) × total power generation = \$456 million.
b. The monetary value of the increase in the employment rate: it can be estimated from the increase in the proportion of the manufacturing industry and the income of the whole manufacturing industry. Resident income increased by about \$6,815 million.
c. Other social benefit: because of the low electricity price policy, TVA's electricity price fell from 2 cents per kW·h to 1 cent per kW·h. The thermal power plants benefited power users by \$10 million from 1940 to 1965 [12].
+
+Therefore, the total benefit of the project of Tennessee thermal power stations is:
+
$456 + 6,815 + 10 = \$7,281$ million. The rank is 1, so the score is 9, denoted as $G_{2}$.
+
+(3) Cost benefit analysis
+
Because $G_{2} > G_{1}$, the construction project of the Tennessee thermal power stations still has social value according to the cost benefit analysis from a long-term perspective, even when environmental cost is incorporated into the total cost.
+
+# 5.2.4 Initial matrix optimization
+
As mentioned in the model above, the weights of the various costs vary from model to model. Now we will optimize the comparative matrix. Considering the characteristics of the thermal power station project (the main emissions of the stations are dust particles and other pollutants produced by burning coal), and according to a public opinion survey around the stations and expert evaluation, we increase the weight of air pollution and modify the comparative matrix as follows:
+
+$$
+\mathbf {J} _ {1} = \left[ \begin{array}{c c c c c} 1 & 1 / 3 & 4 / 5 & 1 / 2 & 1 / 2 \\ 3 & 1 & 2 & 2 & 1 \\ 5 / 4 & 1 / 2 & 1 & 2 / 3 & 1 / 2 \\ 2 & 1 / 2 & 3 / 2 & 1 & 2 / 3 \\ 2 & 1 & 2 & 3 / 2 & 1 \end{array} \right]
+$$
+
After the consistency test, the matrix passes and meets the consistency requirement; its largest eigenvalue is 5.04, and the normalized eigenvector is (0.108, 0.306, 0.135, 0.186, 0.266).
+
We define $\nu_{1}$ as the weight vector, so $\nu_{1} = (0.108, 0.306, 0.135, 0.186, 0.266)$.
+
+Thus, the combination weight vector $v_{2} = (0.5, 0.054, 0.153, 0.067, 0.093, 0.133)$ .
+
+Therefore, by the same method, the cost score is 4.852, which is less than the benefit score 9. The construction project of thermal power stations is still feasible.
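The eigenvalue, normalized weight vector and consistency ratio reported above can be reproduced with NumPy; this is a sketch of the same computation as the MATLAB code in the appendix (RI = 1.12 for n = 5 from the standard random-index table):

```python
import numpy as np

J1 = np.array([
    [1,   1/3, 4/5, 1/2, 1/2],
    [3,   1,   2,   2,   1  ],
    [5/4, 1/2, 1,   2/3, 1/2],
    [2,   1/2, 3/2, 1,   2/3],
    [2,   1,   2,   3/2, 1  ],
])

vals, vecs = np.linalg.eig(J1)
i = np.argmax(vals.real)        # index of the principal eigenvalue
lam = vals[i].real              # largest eigenvalue, ~5.04
w = np.abs(vecs[:, i].real)
w /= w.sum()                    # normalized weight vector

n = J1.shape[0]
ci = (lam - n) / (n - 1)        # consistency index
cr = ci / 1.12                  # consistency ratio, RI = 1.12 for n = 5
print(round(lam, 2), np.round(w, 3), round(cr, 3))
```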
+
+# 6 Sensitivity Analysis
+
+When we calculate the economic cost and the environmental cost, we assume that the decision maker considers that the environmental cost is as important as the economic cost, so we get the primary weight vector of the criteria layer $w_{2} = (1 / 2, 1 / 2)^{T}$ .
+
In fact, this is not accurate, because different decision makers have different priorities with respect to their preferences. Benevolent governments may care more about environmental protection, while entrepreneurs may consider the environment a minor factor affecting cost. The following is a sensitivity analysis of the primary indicators in the criteria layer (economic cost and environmental cost).
+
Suppose the weight of the economic indicator changes from $1/2$ to $a$, where $0 < a < 1$; the weight of the environmental cost then becomes $1 - a$.
+
+Taking Tennessee thermal power stations project as an example and repeating the evaluation process of AHP, we will obtain the new combination weight vector:
+
+$$
v = \left(a,\ 0.108(1-a),\ 0.306(1-a),\ 0.135(1-a),\ 0.186(1-a),\ 0.266(1-a)\right)^T
+$$
+
+So, the score of cost is $0.295a + 4.705$ .
+
Furthermore, because $0 < a < 1$, the cost score satisfies $0.295a + 4.705 < 5 < 9$.
+
Now we evaluate the model before optimization by the same method and obtain a cost score of $0.858a + 4.142$. Figure 4 illustrates how the score varies with the weight.
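Both cost-score lines follow from the same inner product, with the economic weight $a$ and the environmental weights scaled by $1 - a$ (environmental scores ordered biodiversity, air, soil, water, health):

```python
env_scores = [5, 7, 1, 3, 5]                    # bio, air, soil, water, health
w_before = [5/21, 1/7, 5/21, 2/21, 2/7]         # environmental part of W0, renormalized
w_after  = [0.108, 0.306, 0.135, 0.186, 0.266]  # from the optimized matrix J1

def cost_score(a, env_w, capital_score=5):
    """Weighted cost score: economic weight a, environmental weights (1 - a) * env_w."""
    return capital_score * a + (1 - a) * sum(s * w for s, w in zip(env_scores, env_w))

# Intercept and slope of the adjusted model's line:
print(round(cost_score(0, w_after), 3),
      round(cost_score(1, w_after) - cost_score(0, w_after), 3))
```

Evaluated this way, the adjusted model gives $0.295a + 4.705$ and the original model gives $29/7 + (6/7)a \approx 4.143 + 0.857a$, matching the lines in the text up to rounding.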
+
+
+Figure 4: Sensitivity analysis of weights of the indicators
+
We can see that both scores converge to 5 as $a$ grows, but the model is more stable after adjusting the weight of air quality.
+
+Therefore, for the Tennessee project, even when environmental factors are taken into account, the benefit always far exceeds the cost. In other words, the decision to build a thermal power station will always be made regardless of the emphasis on economic cost or on environmental cost.
+
In reality, China's small paper mills are gradually being shut down, while the development of Tennessee thermal power in the United States brought great positive value historically, which shows that our model is effective to some extent.
+
+# 7 Implications on Land Use Project Planners and Managers
+
- Planners should take the long run environmental impacts into account, including the current environmental costs and the losses caused by environmental degradation, and then include them in the cost benefit analysis to comprehensively evaluate whether a project should be carried out. However, since businessmen are self-interested and pursue profit maximization, they may have no incentive to bear environmental costs. Therefore, the government should take this externality into full consideration when making decisions, and correct the negative environmental externalities brought by land development by means of regulation or a Pigovian tax. That is, the government internalizes the environmental costs into the real costs of land construction.
+- When measuring environmental costs, it is necessary to conduct a survey of nearby residents on the importance of each indicator and set an initial matrix according to residents' wishes. After that, the next step is to measure the relative importance of various environmental indicators and calculate the whole environmental costs through the model.
+- If the environmental pollution and ecological damage caused by some land use project is too large, the total cost may be greater than the total benefit of the project when the environmental cost is included. Thus, the land use project originally approved will be rejected.
+- In order to reduce the cost of environmental pollution and degradation, land project planners should choose the green construction plan as far as possible.
+
+# 8 Evaluation of the Model
+
# 8.1 Strengths
+
+- Comprehensive consideration of environmental costs. It not only includes the current economy-health cost, but also considers the environmental cost in the sense of sustainable development and ecology in the long run.
+- In the short-term model, the replacement cost is used to measure the current environmental cost, so the environmental cost is conveniently and succinctly included in the cost benefit analysis of land use projects.
- In the long-term optimization model, forced monetization is somewhat far-fetched due to the lack of monetization approaches for environmental costs caused by environmental degradation. Therefore, we converted the cost-benefit analysis into the analytic hierarchy process (AHP) and reasonably selected the corresponding environmental indicators for scoring and evaluation.
+
+- In the long-term optimization model, the selection of initial matrix is flexible in practical sense. The relative proportion of each index can be determined by the project developer according to the public survey. From the perspective of humanistic care, this cost measurement is close to the will of citizens.
+
# 8.2 Weaknesses and improvements
+
+- The selection of comparison matrix is relatively rough. In the case of insufficient information, our method is to select the quantity of academic keywords as the basis of people's attention. If feasible, the initial matrix should be selected according to the poll results.
+- In the evaluation of economic costs and benefits, the basis for classification is the average cost of the industry. This scoring method needs to be evaluated, especially when the economic benefits of enterprises are much greater than the economic costs, which cannot reflect the gap perfectly.
+- We focus our analysis on cost and pay less attention to benefit. We did not consider the environmental benefits of the project, which are important features of some green public projects. As future work, we will improve the environmental benefit part.
+- The long-term and short-term time division is relatively vague, without clear boundaries.
- Due to limited space and time, we only made a short-term analysis of the paper mill case and a long-term analysis of the Tennessee thermal power plant case. In fact, we could also do a long-term analysis of the former and a short-term analysis of the latter.
+
+# 9 Conclusion
+
An ecosystem services assessment model is presented. Our methodology accounts for environmental cost both in the short run and in the long run, and both are included in the cost benefit analysis of land use projects. We apply the model to small and large projects separately. The results show that in some cases the actual value of a land use project is negative when environmental costs are considered. We therefore suggest that planners choose green construction plans as far as possible.
+
+# References
+
+[1] Chee, Y., 2004. An ecological perspective on the valuation of ecosystem services. Biological Conservation 120, 549-565.
+[2] Songlin Xu, Economic Analysis of Environmental Pollution[J]. Econometrics Technique, 1995(07): 25-38.
+[3] Hritonenko N, Yatsenko Y. Mathematical Modeling in Economics, Ecology, and the Environment. Dordrecht: Kluwer Academic Publishers, 2006.
+[4] Normalized Difference Vegetation Index (NDVI). https://earthobservatory.nasa.gov/features/MeasuringVegetation/measuring_vegetation_2.php Published Aug 30, 2000 "Measuring Vegetation". NASA Earth Observatory.
+[5] Yihong Zhou. Further Discussion into Environmental Pollution Cost Analysis. China's Environmental Protection Industry, 2002,01: 20-23.
+[6] Yaodong Liu. Current situation Analysis of paper mill pollution[J]. The southern farm machinery, 2015, 46(11): 79-80.
[7] William U. Chandler, The Myth of TVA: Conservation and Development in the Tennessee Valley, Ballinger Publishing Company, Cambridge, Mass., 1984, pp. 230-231.
[8] North American Electric Reliability Corporation. https://www.nerc.com/Pages/default.asp
[9] Tennessee Government. https://www.tn.gov/
[10] Pankratz, R. E. and Wilson, B. L., 1988. "Prediction Power Cost And its Role in Esp Economics".
[11] Qianjin Sun, 2010. "Electric Power Development in the Tennessee River Valley of the United States (1933-1983)".
[12] David E. Lilienthal, TVA: Democracy on the March, Harper & Brothers Publishers, New York, 1944, pp. 39-40.
+
+# Appendices
+
+This is the map of the power station:
+
+
May 2005 map of TVA sites (dams, nuclear and fossil plants)
+
+These are the cost tables:
Table 3-2: Construction costs of TVA thermal power plants
+
| Thermal power plant | Year completed | Installed capacity (10,000 kW) | Construction cost (USD/kW, current dollars) | Construction cost (USD/kW, constant 1982 dollars) |
| Watts Bar | 1942 | 24.0 | 92 | 556 |
| Johnsonville | 1951 | 79.4 | 145 | 525 |
| Widows Creek | 1952 | 85.5 | 181 | 648 |
| Shawnee | 1953 | 175.0 | 145 | 512 |
| Kingston | 1954 | 172.3 | 154 | 536 |
| Colbert | 1955 | 84.6 | 126 | 428 |
| John Sevier | 1955 | 84.7 | 139 | 473 |
| Gallatin | 1956 | 60.0 | 126 | 386 |
| Johnsonville (expansion) | 1958 | 51.9 | 123 | 386 |
| Allen | 1959 | 99.0 | — | — |
| Gallatin (expansion) | 1959 | 65.5 | — | — |
+
Chapter 4: Achievements and problems of TVA's electric power development
+
| Year | Power revenue | Total revenue | Rental and other income | Net power income | Operating expenses |
| 1957 | 234,871,850 | 235,732,976 | 861,126 | 58,143,669 | 177,589,307 |
| 1956 | 220,902,537 | 221,642,216 | 739,679 | 53,859,167 | 167,741,206 |
| 1955 | 187,361,354 | 188,162,989 | 801,636 | 47,513,278 | 140,262,324 |
| 1954 | 133,319,876 | 133,947,808 | 627,932 | 28,140,957 | 105,127,412 |
| 1953 | 104,285,187 | 104,877,969 | 592,682 | 18,626,714 | 85,583,760 |
| 1952 | 94,466,655 | 85,004,390 | 537,735 | 25,096,349 | 69,165,041 |
| 1951 | 69,826,533 | 70,329,580 | 503,047 | | 43,612,785 |
| 1950 | 57,259,339 | 57,786,111 | 526,772 | 26,068,212 | 30,780,157 |
| 1949 | 57,618,811 | 58,030,515 | 411,704 | 20,944,415 | 36,551,727 |
| 1948 | 48,434,877 | 48,769,524 | 334,647 | 16,617,811 | 31,593,076 |
| 1947 | 43,810,572 | 44,144,090 | 333,518 | 21,248,377 | 22,305,341 |
+
+| 1966 | 321,580,000 | 326,804,000 | 2,215,000 | 47,889,000 | 270,221,000 |
| 1965 | 291,116,000 | 296,031,000 | 1,947,000 | 54,977,000 | 233,803,000 |
| 1964 | 281,703,000 | 286,398,000 | 1,930,000 | 58,183,000 | 224,260,000 |
| 1963 | 264,421,000 | 268,766,000 | 1,794,000 | 55,103,000 | 213,661,000 |
| 1962 | 248,192,000 | 252,098,000 | 1,641,000 | 56,180,000 | 197,411,000 |
| 1961 | 244,607,000 | 248,338,000 | | 51,644,000 | 198,789,000 |
| 1960 | 240,650,000 | 242,385,000 | 1,734,663 | 51,075,000 | 192,826,000 |
| 1959 | 236,197,581 | 237,540,179 | 1,342,598 | 50,829,938 | 186,710,241 |
| 1958 | 232,217,189 | 233,548,166 | 1,330,977 | 54,981,946 | 178,566,220 |
+
# These are the MATLAB codes:

```matlab
function [x1] = cal(A)
% AHP weight calculation with consistency test for judgment matrix A.
[n, n] = size(A);
[v, d] = eig(A);                 % eigenvalues and eigenvectors of the matrix
eigenvalue = diag(d);            % eigenvalues as a vector
lamda = max(eigenvalue);         % largest eigenvalue
for i = 1:length(A)              % index of the largest eigenvalue
    if lamda == eigenvalue(i)
        break;
    end
end
d_lamda = v(:, i);               % eigenvector of the largest eigenvalue
CI = (lamda - n) / (n - 1);      % consistency index
RI = [0 0 0.58 0.90 1.12 1.24 1.32 1.41 1.45 1.49 1.52 1.54 1.56 1.58 1.59];
CR = CI / RI(n);                 % consistency ratio
if CR < 0.10
    CR_Result = 'pass';
else
    CR_Result = 'fail';
end
w = v(:, i) / sum(v(:, i));      % normalized weight vector
w = w';
disp('Weight vector report of the judgment matrix:');
disp('Consistency index:');       disp(num2str(CI));
disp('Consistency ratio:');       disp(num2str(CR));
disp('Consistency test result:'); disp(CR_Result);
disp('Largest eigenvalue:');      disp(num2str(lamda));
disp('Weight vector:');           disp(num2str(w));
x1 = num2str(w);
end
```

```matlab
% Sensitivity plot of the cost score as a function of the weight a.
a = 0:0.1:1;
y1 = 0.295 * a + 4.705;          % after adjustment
y2 = 0.858 * a + 4.142;          % before adjustment
plot(a, y1, a, y2);
ylim([3 6])
grid on
title('sensitivity')
xlabel('proportion a')
ylabel('score of cost')
gtext('after adjustment')
gtext('before adjustment')
```
diff --git a/MCM/2019/E/1903455/1903455.md b/MCM/2019/E/1903455/1903455.md
Team Control Number: 1903455

Problem Chosen: E

2019 MCM/ICM Summary Sheet
+
+# Ecosystem services matters! Sustainability is necessary
+
Ecosystem services (ES) are the conditions and processes through which natural ecosystems, and the species that make them up, sustain and fulfil human life (Daily, 1997). However, whenever humans alter an ecosystem, we potentially limit or remove ecosystem services. Although the impact of any single project may seem negligible relative to the biosphere's total functioning potential, cumulatively these projects directly affect biodiversity and cause environmental degradation. In order to understand the true economic costs of land use projects and propose a sustainable development factor, we establish an Ecological Services Valuation Model.
+
To begin with, in order to measure the impact of ES numerically, we introduce the MA classification method to build the ecosystem services index (ESI). Eleven indicators are first selected from four aspects, and then integrated into the ESI using the entropy weight method (EWM) and the coefficient of variation method (CVM). In addition, hierarchical clustering analysis (HCA) is applied to divide the ecosystem service intensity of projects into three categories: weak, moderate and strong. As a result, we find that the private enterprise relocation project and the house construction project belong to the weak category, the factory construction project is moderate, and the national pipeline engineering project has strong ecosystem service intensity.
+
Next, to calculate the true economic cost of land use projects, an Ecological Services Valuation Model is established. First, we analyze the impact of the original cost on ES when the cost of ES is not taken into account. Then we analyze the benefit and cost of land use projects, on the basis of which we find that the cost of ES has an important influence on the life cycle of a project. We also use a support vector machine (SVM) to forecast the benefit-cost ratio of land use; the results show that accounting for the cost of ES yields a longer project life cycle and higher efficiency. For the private enterprise relocation project, without considering ES, we calculate that it reaches the maximum benefit-cost ratio in 2013 and predict that it will stop making profits in 2033. When ES is taken into account, the benefit-cost ratio of land use projects will gradually increase.
+
At last, to explore the impact of sustainable development measures on ecosystem benefits, we divide the 11 three-level indicators into two categories: sustainable development indicators and unsustainable development indicators. After that, we introduce the sustainable development factor $\theta$ to describe the impact of different ecosystem service measures on the final development status of projects. Eventually, the values of $\theta$ for the four projects of different sizes are $\mathbf{P}_{\mathrm{A}} = 87.31$, $\mathbf{P}_{\mathrm{B}} = 88.4$, $\mathbf{P}_{\mathrm{C}} = 101.4$, $\mathbf{P}_{\mathrm{D}} = 11.98$; it is evident that $\theta$ increases as the size of the project increases.
+
To conclude, we first construct the ecosystem service system, establish the Ecological Services Valuation Model, and conduct a cost benefit analysis of four projects of varying sizes. Furthermore, we introduce the sustainable development factor to evaluate the development of land use projects and propose our suggestions for developing land use projects.
+
+Key words: Ecosystem services, ESVM, SVM, sustainable development factor
+
# Content

1. Introduction

1.1 Background
1.2 Our work

2. Assumptions and Justification
3. Notations

4. Ecological Services Analysis

4.1 Ecological Services System (ESS)
4.2 Intensity evaluation

5. Ecosystem Service Valuation Model

5.1 Cost-benefit analysis
5.2 Prediction of benefit-cost ratio with SVM regression method
5.3 Result analysis

6. Sustainable development analysis

6.1 Analysis of ecosystem services measures
6.2 Influence of sustainability on benefit-cost ratio

7. Sensitivity Analysis
8. Advice on project planning and management

9. Conclusion

9.1 Strengths
9.2 Weaknesses

References
+
+# 1. Introduction
+
+# 1.1 Background
+
The ecosystem is the foundation of human survival and development. It not only provides space for human survival, but also provides the various resources needed for human development, and absorbs the waste generated by human production and life. Land resources are the main component of natural resources, and their position in production makes land development and utilization a major undertaking. However, most land use projects do not consider the impact on ecosystem services. Although these activities may seem inconsequential to the total capacity of the biosphere, they cumulatively and directly affect biodiversity and lead to environmental degradation. Therefore, how to evaluate the environmental cost of land use development projects and determine the real valuation of projects is worth our consideration.
+
+# 1.2 Our work
+
For the sake of understanding the real economic costs of land use projects in terms of ecosystem services, we are required to establish an ecological services valuation model which determines a project's quality. By selecting appropriate evaluation indicators, we assign weights and combine the lower-level indicators into a comprehensive index. Subsequently, the established model is applied to various projects to test its applicability, and modifications are proposed to improve it.
+
+We will proceed as follows to tackle these problems:
+
+- First, create the ecosystem service system. We use the entropy weight method to determine the weights of 11 third-level indicators and apply the coefficient of variation method to determine the weight of four second-level indicators. Eventually, obtain the calculation equation of the ecosystem service index. After that, through the hierarchical clustering method, the land use projects are divided into three categories.
+- Second, construct an ecological services valuation model. We conduct an benefit cost analysis of land use projects. Considering the importance of ecosystem service costs to the development of land use projects, we make a comparative analysis on the benefit-cost ratio of whether the cost of ecosystem services is considered in land use projects.
- Finally, we divide the 11 three-level indicators into sustainable development indicators and unsustainable development indicators. At the same time, the sustainable development factor $\theta$ is introduced to describe the long-term impact of adopting sustainable development measures on project development.
+
+The whole modeling process can be shown as follows:
+
+
+Figure.1 Framework of ESVM
+
+# 2. Assumptions and Justification
+
+To simplify the problem and make it convenient for us to simulate real-life conditions, we make the following basic assumptions, each of which is properly justified.
+
+- We assume that the cost and benefits of land use projects are not affected by regional policies and other factors that are not related to ecology and the project itself.
+- We assume that the relative importance of various indicators doesn't change over time.
+- We assume all data we obtain are trustworthy since all of sources are reliable. Thus, we are confident that our metrics can reflect the accurate condition.
+
+# 3. Notations
+
+We list the symbols and notations used in this paper in Table 1.
+
+Table 1 Notations
+
+| Symbols | Definition |
| PSI | Provisioning services index |
| RSI | Regulating services index |
| BSI | Biotope services index |
| CSI | Cultural services index |
| ESVM | Ecological services valuation model |
| ESS | Ecological services system |
| ESI | Ecosystem service index |
| BCAM | Benefit-cost analysis method |
| BCR | Benefit-cost ratio |
| ESC | Ecosystem services cost |
| SDF | Sustainable development factor |
+
+# 4. Ecological Services Analysis
+
In this section, in order to measure the impact of ecosystem services, we build the ecosystem services index (ESI). First, 11 three-level indicators are selected from four aspects, and then they are integrated into the ESI using the entropy weight method (EWM) and the coefficient of variation method (CVM). In addition, hierarchical clustering analysis (HCA) is applied to divide the ecosystem service intensity of projects into three categories: weak, moderate and strong. Eventually, we use four projects of varying sizes to verify our evaluation system.
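As a sketch of the entropy weight step described above (the indicator matrix here is random placeholder data, not the projects' actual indicator values):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows are samples, columns are indicators.
    Indicators whose values vary more across samples get larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                      # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)    # entropy of each indicator
    d = 1 - E                                  # degree of divergence
    return d / d.sum()                         # normalized weights

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(4, 11))           # 4 projects x 11 indicators
w = entropy_weights(X)
print(w.round(3), w.sum())                     # weights sum to 1
```

A constant indicator carries no information and receives weight zero, which is the defining property of the method.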
+
+# 4.1 Ecological Services System(ESS)
+
+
+Figure.2 Ecological Services System.
+
+According to the Millennium Ecosystem Assessment (MA)[1], whose classification framework combines ecosystem services with human well-being and is relatively systematic, the ecosystem service is divided into 4 service types: Provisioning services, Regulating services, Biotope services and Cultural services. Summarizing the generally accepted classification result[2], referring to the existing research[3], and combining the characteristics, structure and ecological processes of the ecosystem, we subdivide the ecosystem service into 11 functional types.
+
+# 4.1.1 Indicator description
+
+# (1) Provisioning services
+
+a) Food production $X_{1}$ (ton per year). Food is a main product of the ecosystem, and its status in agricultural production makes its development and utilization a significant part of agricultural engineering design. Therefore, we introduce food production to reflect the development and utilization of the ecosystem.
+b) Raw material production $X_{2}$ (ton per year). Taking into account changes in ecosystem services and assessing the environmental costs of land-use development projects, we have introduced the concept of raw material production including the production of wood, rubber, lacquer and rosin.
+c) Water supply $X_{3}$ (ha per year). Water supply is an important part of ecosystem services and has a significant impact on mitigating the negative consequences of land use change caused by river pollution and improper wastewater treatment. We hence introduce the annual water supply to describe its contribution to ecosystem services.
+
+# (2) Regulating services
+
+a) Gas regulation $X_{4}$ (million dollars per year). Gas regulation is mainly used to improve poor air quality, thus indirectly having a positive impact on climate change. The main measure we consider is converting waste gas into usable gas, and we express its value as $X_{4}$.
+b) Solid regulation $X_{5}$ (million dollars per year). Solid regulation mainly refers to a series of treatments of the waste residues generated by human production activities, converting them into solid materials that people can use, which has an important impact on improving environmental quality and biodiversity.
+c) Water regulation $X_{6}$ (million dollars per year). Water regulation includes purification and regulation, and water purification primarily helps to filter out and break down organic waste into inland waters and coastal and marine ecosystems to obtain available water. Water regulation reduces the impact of land cover changes on runoff, floods and aquifers.
+
+# (3) Biotope services
+
+a) Soil conservation $X_{7}$ (ha per year). Quality soil protected by natural vegetation and litter maintains fertility, prevents dangerous landslides, protects coasts and riverbanks, and prevents silting. Afforestation and grass planting protect the soil, achieve sustainable food production, and gradually improve soil productivity and the ecological situation.
+b) Pest control $X_{8}$ (ha per year). In order to reduce or prevent the harmful effects of pathogenic microorganisms and pests on crops, people and livestock, certain prevention and control measures are adopted artificially.
+
+c) Biodiversity protection $X_{9}$ (ha per year). Genetic diversity, species diversity and ecosystem diversity are important components of biodiversity. We should not only focus on the protection of the wild populations of the species involved, but also protect their habitats, maintain the balance of the food chain and improve the ecological environment.
+
+# (4) Cultural services
+
+a) Recreation & tourism $X_{10}$ (million dollars per year). Under the condition of not destroying the natural environment, people use biological resources in different ways to carry out recreational activities, which is called ecotourism.
+b) Performance & art $X_{11}$ (times per year). The performing arts usually include dance, music, drama, folk art, acrobatics, magic and so on. Therefore, we take the number of performances per year as one of the indicators of cultural services.
+
+# 4.1.2 Weight of indicators
+
+# (1) Entropy weight method
+
+In this section, with the evaluation indicators defined above, we further determine the weights of these eleven indicators, resulting in the combination of primary indicators. We first use the entropy weight method (EWM) to eliminate the incommensurability caused by inconsistent data dimensions: based on the attribute type of each original indicator, we apply the standard 0-1 transformation and the given optimal interval method for non-dimensionalization and normalization. This makes it convenient to judge the merits of the evaluation indicators directly from their numerical values and facilitates multi-attribute decision-making.
+
+The 11 indicators $X_{1}, X_{2}, X_{3}, \ldots, X_{11}$, where $X_{i} = \{x_{i1}, x_{i2}, \ldots, x_{in}\}$, describe the impact of ecosystem services and assess the environmental costs of land-use development projects. For a cost-type index, ecosystem services are proportional to the value of the index, while for an efficiency-type index, ecosystem services decrease as the value of the index increases. Thus, we have
+
+$$
+\left\{ \begin{array}{l} y_{ij} = \frac{x_{ij} - \min\left(x_{i}\right)}{\max\left(x_{i}\right) - \min\left(x_{i}\right)} \\ y_{ij} = \frac{\max\left(x_{i}\right) - x_{ij}}{\max\left(x_{i}\right) - \min\left(x_{i}\right)} \end{array} \right. \quad j = 1, 2, \dots, n \tag {1}
+$$
+
+where $y_{ij}$ is the standardized value of each evaluation indicator of each size, $\max(x_i)$ and $\min(x_i)$ are the maximum and minimum value of the evaluation indicator $X_i$ .
+
+After data standardization, we can use $y_{ij}$ instead of $x_{ij}$ to describe ecosystem services, and then we have
+
+$$
+q_{ij} = \frac{y_{ij}}{\sum_{j=1}^{n} y_{ij}} \tag {2}
+$$
+
+According to the concept of self-information and entropy in information theory, the information entropy $e_{i}$ of each evaluation index can be calculated, and thus
+
+$$
+e_{i} = -\frac{1}{\ln(n)} \sum_{j=1}^{n} q_{ij} \ln\left(q_{ij}\right) \tag {3}
+$$
+
+Based on the information entropy, we further calculate the weight of each evaluation indicator defined before:
+
+$$
+w _ {i} = \frac {1 - e _ {i}}{k - \sum_ {i} e _ {i}} \quad i = 1, 2, \dots , k \tag {4}
+$$
+
+Furthermore, four comprehensive evaluation indicators of provisioning services, regulating services, biotope services and cultural services are obtained, abbreviated in this article as $PSI$, $RSI$, $BSI$ and $CSI$. Based on the calculated weights, we have
+
+$$
+\left\{ \begin{array}{l} P S I _ {j} = w _ {1} y _ {1 j} + w _ {2} y _ {2 j} + w _ {3} y _ {3 j} \\ R S I _ {j} = w _ {4} y _ {4 j} + w _ {5} y _ {5 j} + w _ {6} y _ {6 j} \\ B S I _ {j} = w _ {7} y _ {7 j} + w _ {8} y _ {8 j} + w _ {9} y _ {9 j} \\ C S I _ {j} = w _ {1 0} y _ {1 0 j} + w _ {1 1} y _ {1 1 j} \end{array} \right. \tag {5}
+$$
+
+where $PSI_{j}, RSI_{j}, BSI_{j}$ and $CSI_{j}$ represent the secondary indicators of the size of $j$ .
+
+The weights of these indexes are thus determined by EWM, and the expressions of the indexes are fully described.
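+The EWM pipeline of equations (1)-(4) can be sketched as follows. This is a minimal illustration on hypothetical benefit-type data (three indicators over four sizes), not the paper's 11-indicator dataset:
+
```python
import numpy as np

# Hypothetical observation matrix: rows are indicators, columns are the n
# project sizes. A real run would use k = 11 rows of measured data.
X = np.array([
    [120.0, 340.0, 210.0, 500.0],   # e.g. food production (ton/year)
    [ 30.0,  80.0,  60.0,  90.0],   # e.g. raw material production
    [ 15.0,  12.0,  40.0,  25.0],   # e.g. water supply
])

def entropy_weights(X):
    # Eq. (1), benefit-type branch: standard 0-1 transformation per indicator.
    lo, hi = X.min(axis=1, keepdims=True), X.max(axis=1, keepdims=True)
    y = (X - lo) / (hi - lo)
    # Eq. (2): proportion of each size within one indicator.
    q = y / y.sum(axis=1, keepdims=True)
    # Eq. (3): information entropy, treating 0*ln(0) as 0.
    n = X.shape[1]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(q * np.log(q), axis=1) / np.log(n)
    # Eq. (4): entropy weights over the k indicators.
    k = X.shape[0]
    return (1 - e) / (k - e.sum())

w = entropy_weights(X)
```
+
+By construction the returned weights are non-negative and sum to 1, one entry per indicator.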
+
+# (2)Coefficient of variation method
+
+After representing 11 indicators as four comprehensive variables, we need to further aggregate the four indicators into a comprehensive indicator to directly assess ecosystem services, laying the foundation for rational and effective use of natural resources and protection of the ecological environment.
+
+The coefficient of variation method (CVM) determines indicator weights directly from the information contained in each indicator. Considering the differences between the units and the means of the four comprehensive indicators, the standard deviation alone cannot be used to compare their degrees of variation; instead, the ratio of the standard deviation to the mean is used. The coefficient for each index can be expressed as
+
+$$
+C.V_{i} = \frac{\sigma_{i}}{\bar{x}_{i}} \quad i = 1, 2, 3, 4 \tag {6}
+$$
+
+where $C.V_{i}$ is the coefficient of variation (also known as the relative standard deviation) of $PSI$, $RSI$, $BSI$ and $CSI$, $\sigma_{i}$ is the standard deviation of index $i$, and $\overline{x}_{i}$ is the mean of index $i$. After that, we can calculate the weights of the four comprehensive indexes:
+
+$$
+W _ {i} = \frac {C . V _ {i}}{\sum_ {i = 1} ^ {n} C . V _ {i}} \quad i = 1, 2, 3, 4 \tag {7}
+$$
+
+Subsequently, on the basis of those calculated weights, we can derive the comprehensive ecosystem service indicator, abbreviated as $ESI$:
+
+$$
+ESI = \left(W_{1} \times PSI + W_{2} \times RSI + W_{3} \times BSI + W_{4} \times CSI\right) \times 100 \tag {8}
+$$
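+Equations (6)-(8) combine into a short computation. The secondary-index values below are hypothetical placeholders, not the paper's data:
+
```python
import numpy as np

# Hypothetical secondary-index matrix: rows PSI, RSI, BSI, CSI over n sizes.
indices = np.array([
    [0.21, 0.35, 0.48, 0.62],   # PSI
    [0.18, 0.40, 0.55, 0.70],   # RSI
    [0.10, 0.22, 0.30, 0.41],   # BSI
    [0.25, 0.33, 0.52, 0.66],   # CSI
])

# Eq. (6): coefficient of variation of each comprehensive index.
cv = indices.std(axis=1) / indices.mean(axis=1)
# Eq. (7): normalize the coefficients into weights W_1..W_4.
W = cv / cv.sum()
# Eq. (8): comprehensive ecosystem service index per size, scaled by 100.
esi = 100 * W @ indices
```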
+
+Since the specific values of those indicators are given in Table 2, we can calculate the $ESI$ of our selected projects. As can be seen from Table 2 below, the weights of the 11 evaluation indicators do not differ greatly, generally lying around 0.1; Recreation & tourism has the largest weight of 0.1293. Among the four comprehensive indicators, cultural services ranks first with a weight of 0.2862 in our final criterion, followed by regulating services, while provisioning services ranks third and biotope services last.
+
+Table 2 Weight values of the indicators
+
+| Indicators(I) | Indicators(II) | Weights | Indicators(III) | Weights |
| Intensity | Provisioning services | 0.2591 | Food production | 0.0705 |
| | | | Raw material production | 0.1038 |
| | | | Water supply | 0.1063 |
| | Regulating services | 0.2647 | Gas regulation | 0.1157 |
| | | | Solid regulation | 0.0743 |
| | | | Water regulation | 0.1089 |
| | Biotope services | 0.1900 | Soil conservation | 0.1189 |
| | | | Pest control | 0.0348 |
| | | | Biodiversity protection | 0.0438 |
| | Cultural services | 0.2862 | Recreation & tourism | 0.1293 |
| | | | Performance & art | 0.0937 |
+
+# 4.2 Intensity evaluation
+
+We first divide the ecosystem services of land use projects into three levels through hierarchical clustering. Then, we use our ecosystem services system to assess the ecosystem services of four land use project examples of different sizes to verify the effectiveness of our evaluation system.
+
+# 4.2.1 Hierarchical clustering
+
+Combined with the comprehensive ecosystem index we established before, we import the project data of different sizes and calculate their index values. Euclidean distance is used as the similarity measure to divide the ecosystem intensity into weak, moderate and strong, and hierarchical clustering is used to cluster the projects of ten sizes.
+
+A hierarchical clustering algorithm builds a branching tree from the bottom up, with leaves as individual documents and the centers (clusters) as the roots[4]. The strategy of agglomerative clustering is to first treat each object as an atomic cluster, and then merge these atoms layer by layer until certain termination conditions are met. The hierarchical clustering algorithm has a computational complexity of $O(n^{2})$, which is suitable for the classification of small data sets[5].
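+A sketch of this clustering step, assuming SciPy is available and using hypothetical ESI values for ten project sizes (Ward linkage on Euclidean distance, tree cut at three clusters):
+
```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical ESI values for the ten project sizes mentioned in the text.
esi = np.array([12.0, 18.5, 25.0, 33.0, 45.5, 55.0, 62.0, 71.0, 84.0, 92.5])

# Agglomerative clustering on Euclidean distance; cut the tree at 3 clusters.
Z = linkage(esi.reshape(-1, 1), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

# Rank clusters by mean ESI so cluster ids map to weak < moderate < strong.
order = np.argsort([esi[labels == c].mean() for c in np.unique(labels)])
names = {np.unique(labels)[i]: n
         for i, n in zip(order, ["weak", "moderate", "strong"])}
levels = [names[c] for c in labels]
```
+
+The label-to-name mapping is needed because `fcluster` assigns arbitrary cluster ids.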
+
+After calculating the $ESI$ of our selected projects, we apply the hierarchical clustering algorithm to classify these projects into three groups: weak, moderate and strong. The higher the value, the stronger the ecosystem service intensity. The results of clustering are shown as follows.
+
+After calculating the value of $PSI, RSI, BSI, CSI$ and $ESI$ , we can evaluate the ecosystem service intensity of specific projects. As shown in Figure.4, we assign different colors to the different ecosystem service intensity levels of weak, moderate and strong, so as to intuitively display the ecosystem service intensity.
+
+
+Figure.3 Classification standards of varying ecosystem service index, which is classified as weak, moderate and strong.
+
+As shown in Figure.3, the classification standards of the four combined indicators and the comprehensive metric vary slightly. Taking the comprehensive index of ecosystem services as an example: when the $ESI$ score is less than 29.45, the project is weak; when the $ESI$ score is between 29.45 and 68.37, it is moderate; and when the $ESI$ is more than 68.37, it is regarded as strong.
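+The cut-points read off Figure.3 for the comprehensive index translate directly into a small classifier; the handling of scores exactly at 29.45 and 68.37 is our assumption, since the text does not specify it:
+
```python
def esi_level(esi):
    # Classification standards for the comprehensive ESI (Figure.3 cut-points).
    if esi < 29.45:
        return "weak"
    if esi <= 68.37:
        return "moderate"
    return "strong"
```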
+
+# 4.2.2 The result of intensity evaluation
+
+As shown in Figure.4, we use the ecological services valuation system to evaluate and rank four projects, which are listed below.
+
+Table 3 Description of practical project of varying sizes
+
+| Project | Name | Size | Intensity |
| \(P_A\) | Private enterprise relocation project | medium | weak |
| \(P_B\) | Factory construction project | medium | moderate |
| \(P_C\) | National pipeline engineering | large | strong |
| \(P_D\) | House construction | small | weak |
+
+From Table 3 above we can see that $\mathrm{P_A}$, $\mathrm{P_B}$, $\mathrm{P_C}$ and $\mathrm{P_D}$ are land use projects of different sizes; national pipeline engineering has the largest scale as a national project.
+
+
+Figure.4 Comparison of ESI original ranking and intensity valuation indicators.
+
+Considering that $\mathrm{P_C}$ involves pipeline protection structure engineering, pipeline crossing engineering and line subsidiary engineering, its construction is very complex, with a long construction period and high technical requirements; it therefore has the greatest impact on the environment, and its strong ecosystem service intensity is in line with reality. As for project $\mathrm{P_D}$, it mainly involves house construction; its impact on the environment is relatively small, so its weak ecosystem service intensity also corresponds to reality.
+
+# 5. Ecosystem Service Valuation Model
+
+In the process of economic development, initial construction projects mainly rely on an extensive growth mode of increasing investment and material input, and the contradiction between economic development and the resource environment is very sharp. While these activities may individually seem inconsequential to the biosphere's overall functioning potential, cumulatively they directly impact biodiversity and cause environmental degradation.
+
+# 5.1 Cost-benefit analysis
+
+For the sake of understanding the true economic costs of land use projects, we create an ecological services valuation model to perform a cost-benefit analysis of land use development projects of varying sizes based on the benefit-cost analysis method (BCAM), which mainly includes the following steps[6].
+
+# 5.1.1 Cost analysis
+
+Cost analysis[7] includes internal cost and external cost. Internal cost refers to the direct expenses incurred by the developer in land development and utilization, including land costs, infrastructure costs, ancillary facilities costs, financing costs and indirect costs borne by the development of land, which can be obtained from statistics and accounting data. The external cost is not directly reflected in the expenditure of land development and utilization itself, but in the monetary estimate of other environmental losses (i.e. negative external effects) caused by the development of land. When other productive resources associated with land are damaged by the exploitation of land, external costs are best estimated through the loss of productivity of those resources. Thus, we have:
+
+$$
+C _ {r} = C _ {o} + C _ {e} \tag {10}
+$$
+
+$$
+C _ {e} = q \cdot E S I = q \cdot l \cdot C _ {o} \tag {11}
+$$
+
+where $C_r$ is the real economic cost of a land use project, $C_o$ is the cost of building the project without considering the cost of ecosystem services, and $C_e$ is the economic cost of improving the negative consequences of land use change, that is, the cost of ecosystem services. $q$ is a constant and $l$ is the ecosystem service cost index.
+
+In order to explore the functional relationship between $ESI$ and $C_o$, we calculate the ecosystem service index of project $\mathrm{P_A}$ over time through equation (8), plot the cost of project $\mathrm{P_A}$ against the discrete points of $ESI$, and fit the data, as Figure.5 shows.
+
+
+Figure.5 The relationship curve between original costs and $ESI$ .
+
+From Figure.5 above, we can see that ecosystem services and original costs approximately follow a logistic growth curve, and the fitting result, equation (12), also supports this. In the original cost accumulation stage, because the cost required is small, the overall operational capacity of the biosphere is barely affected, so $ESI$ changes slowly. As the original cost continues to increase beyond the range the ecosystem itself can bear, $ESI$ changes rapidly due to the impact on biodiversity and environmental degradation. When the original cost increases to a certain extent, the ecosystem has acquired a certain adaptability to the already existing impact, so ecosystem service changes become slow again.
+
+$$
+ESI = \frac{0.9462}{1 + \exp\left(0.000072\left(C_{o} - 312654\right)\right)} \tag {12}
+$$
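+The fitting step behind equation (12) can be reproduced in outline with SciPy's least-squares `curve_fit`. The points below are synthetic, generated from the reported coefficients, since the raw $(C_o, ESI)$ data are not listed in the paper:
+
```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(c_o, a, r, c0):
    # Logistic form of eq. (12): ESI as a function of the original cost C_o.
    return a / (1 + np.exp(r * (c_o - c0)))

# Synthetic, noise-free points generated from the reported coefficients; this
# only demonstrates the fitting step, not the paper's actual data.
c_o = np.linspace(1.0e5, 6.0e5, 40)
esi_obs = logistic(c_o, 0.9462, 0.000072, 312654.0)

popt, _ = curve_fit(logistic, c_o, esi_obs, p0=[0.9, 7.0e-5, 3.1e5])
```
+
+On these noise-free points the optimizer recovers the three coefficients of equation (12).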
+
+# 5.1.2 Benefit analysis
+
+Generally, two benefits can be obtained through the development and utilization of land resources: internal and external benefits. The internal benefit is the benefit which can be estimated by the market price directly after the land development and utilization, and is the direct result of the land utilization plan. External benefits refer to the beneficial effects of land development and utilization activities on surrounding resources, environment and economic activities.
+
+$$
+B _ {r} = B _ {o} - C _ {e} + \delta \tag {13}
+$$
+
+$$
+B _ {o} = B _ {b} - C _ {o} \tag {14}
+$$
+
+where $B_{r}$ is the true economic benefit of a land use project, $B_{o}$ is the benefit of building the project without considering the cost of ecosystem services, which can be obtained from statistics, $\delta$ is a variable related to policy, subsidies, etc., and $B_{b}$ is the total benefit of the project when the cost of ecosystem services is not taken into account.
+
+# 5.1.3 The ratio of benefit-cost analysis
+
+In order to explore the real benefit-cost situation of land use projects, we use benefit-cost ratio to describe the impact of whether or not to consider the ecological services on the project. We take project $\mathrm{P_A}$ as an example.
+
+$$
+I = B _ {r} / C _ {r} \tag {15}
+$$
+
+where $I$ is the ratio of benefit-cost.
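+Chaining equations (10), (11) and (13)-(15) gives a one-function summary; the numeric inputs in the example call are purely illustrative:
+
```python
def benefit_cost_ratio(C_o, B_b, q, l, delta):
    C_e = q * l * C_o          # eq. (11): cost of ecosystem services
    C_r = C_o + C_e            # eq. (10): real economic cost
    B_o = B_b - C_o            # eq. (14): benefit ignoring ecosystem services
    B_r = B_o - C_e + delta    # eq. (13): true economic benefit
    return B_r / C_r           # eq. (15): benefit-cost ratio I

# Illustrative values only: C_o=100, B_b=300, q=0.5, l=0.2, delta=10.
I = benefit_cost_ratio(100.0, 300.0, 0.5, 0.2, 10.0)
```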
+
+
+Figure.6 The practical values (1995~2019) of the benefit-cost ratio. (a) scatter plot of benefit-cost ratio over time without taking into account the cost of ecosystem services; (b) scatter plot of benefit-cost ratio over time taking into account the cost of ecosystem services.
+
+
+
+As can be seen from Figure.6, without considering the cost of ecosystem services, the benefit of an existing project tends to increase first and then decrease. That is, as the life cycle advances, a project experiences entrepreneurship, growth and maturity, and then declines quickly after reaching a peak. However, if the enterprise takes ecosystem services into consideration, although growth is relatively slow in the early stage, the benefits of the project gradually increase over time, and this growth trend continues for a long time.
+
+# 5.2 Prediction of Benefit-cost ratio with SVM regression method
+
+In order to predict the trend of the curves quantitatively and describe the impact of ecosystem services on project benefits intuitively, we perform support vector machine (SVM) regression on the data of the two curves obtained above.
+
+Support vector machines are a core machine learning technology. They have strong theoretical foundations and excellent empirical successes. SVM follows the principle of structural risk minimization and is good at solving small sample and nonlinear problems[8]. Unlike traditional machine learning methods such as artificial neural networks that follow the principle of empirical risk minimization, SVM avoids problems such as overfitting, difficult parameter adjustment and slow convergence[9,10].
+
+We are given training data $M_{u}(u = 1,2,3,\dots ,b)$ that are vectors in some space $M_{u}\in R^{\nu}$, together with their labels $Y_{u}(u = 1,2,3,\dots ,b)$, where $Y_{u}\in R$. In their simplest form, SVMs are hyperplanes that separate the training data by a maximal margin. The training instances that lie closest to the hyperplane are called support vectors. More generally, SVMs allow us to project the original training data in space $M$ to a higher-dimensional feature space $F$ via a Mercer kernel operator $K$. The form of the optimal hyperplane is expressed as follows:
+
+$$
+Y = f(M) = \varsigma \cdot \gamma(M) + a \tag {16}
+$$
+
+where $Y = \left[Y_{1},Y_{2},\dots ,Y_{b}\right]$, $M = \left[M_1^T,M_2^T,\dots ,M_b^T\right]$, $\varsigma$ is the weight vector, $a$ is the threshold value, and $\gamma(M)$ is the mapping from the input space to the higher-dimensional space.
+
+We introduce the $\varepsilon$-insensitive loss function and solve for the optimal hyperplane: errors smaller than $\varepsilon$ are regarded as negligible. The slack variables $\xi$ and $\xi^{*}$ are introduced to prevent individual data points from distorting the model, and the penalty factor $D$ penalizes sample data deviating from the model. The optimal hyperplane can then be obtained by solving the minimization problem of equations (17) and (18):
+
+$$
+\min \left\{\frac {1}{2} \| \varsigma \| ^ {2} + D \left(\sum_ {i = 1} ^ {b} \xi_ {i} + \sum_ {i = 1} ^ {b} \xi_ {i} ^ {*}\right) \right\} \tag {17}
+$$
+
+$$
+\left\{ \begin{array}{l} f\left(M_{i}\right) - Y_{i} \leq \xi_{i} + \varepsilon \\ Y_{i} - f\left(M_{i}\right) \leq \xi_{i}^{*} + \varepsilon \end{array} \right. \tag {18}
+$$
+
+We introduce the Lagrange function to solve the above equations (17) and (18), where $\beta_{i}$ and $\beta_{i}^{*}$ are Lagrange multipliers, so the equation is transformed into the following equation (19).
+
+$$
+\begin{array}{l} L(\varsigma, a, \xi, \xi^{*}) = \frac{1}{2}\|\varsigma\|^{2} + D\sum_{i=1}^{b}\left(\xi_{i} + \xi_{i}^{*}\right) - \sum_{i=1}^{b}\beta_{i}\left(\varepsilon + \xi_{i} - Y_{i} + \varsigma \gamma\left(M_{i}\right) + a\right) \\ \qquad - \sum_{i=1}^{b}\beta_{i}^{*}\left(\varepsilon + \xi_{i}^{*} + Y_{i} - \varsigma \gamma\left(M_{i}\right) - a\right) \end{array} \tag {19}
+$$
+
+where $\xi_{i},\xi_{i}^{*},\beta_{i},\beta_{i}^{*}\geq 0$, $i = 1,2,\ldots,b$.
+
+The minimum can be obtained by setting the partial derivatives of $L$ with respect to $\varsigma, a, \xi_i, \xi_i^*$ to zero, as follows:
+
+$$
+\left\{\begin{array}{l}\frac{\partial L}{\partial \varsigma} = 0 \rightarrow \varsigma = \sum_{i=1}^{b}\left(\beta_{i} - \beta_{i}^{*}\right)\gamma\left(M_{i}\right)\\ \frac{\partial L}{\partial a} = 0 \rightarrow \sum_{i=1}^{b}\left(\beta_{i} - \beta_{i}^{*}\right) = 0\\ \frac{\partial L}{\partial \xi_{i}} = 0 \rightarrow D - \beta_{i} = 0\\ \frac{\partial L}{\partial \xi_{i}^{*}} = 0 \rightarrow D - \beta_{i}^{*} = 0\end{array}\right. \tag {20}
+$$
+
+Two commonly used kernels are the polynomial kernel $K(u, v) = (u \cdot v + 1)^h$, which induces polynomial boundaries of degree $h$ in the original space $M$, and the radial basis function kernel $K(u, v) = e^{-\delta \|u - v\|^2}$, which induces boundaries by placing weighted Gaussians upon key training instances. In the remainder of this paper we adopt the polynomial kernel function to build the prediction model.
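+Both kernels are one-liners; the values of $h$ and $\delta$ below are arbitrary illustrative settings:
+
```python
import numpy as np

def poly_kernel(u, v, h=3):
    # Polynomial kernel (u . v + 1)^h, inducing degree-h boundaries.
    return (np.dot(u, v) + 1.0) ** h

def rbf_kernel(u, v, delta=1.0):
    # Radial basis function kernel exp(-delta * ||u - v||^2).
    return np.exp(-delta * np.dot(u - v, u - v))
```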
+
+Combining equations (20) and polynomial kernel, the final form of the optimal hyperplane is determined as shown in equation (21).
+
+$$
+f(M) = \varsigma \gamma(M) + a = \sum_{i=1}^{b}\left(\beta_{i} - \beta_{i}^{*}\right) K\left(M_{i}, M\right) + a \tag {21}
+$$
+
+where $K\left(M_{i}, M\right)$ is the polynomial kernel.
+
+We then conduct regression analysis based on the support vector machine. Taking the data of 1995-2019 as training samples, we predict the changes over 2019-2035 and obtain the curve shown in Figure.7.
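+A sketch of this regression with scikit-learn's `SVR`, assuming the library is available; the BCR series below is a hypothetical stand-in for the curve read from Figure.6. Years are shifted to small magnitudes because the polynomial kernel is numerically ill-conditioned on raw four-digit year values:
+
```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical benefit-cost-ratio series for 1995-2019.
years = np.arange(1995, 2020)
bcr = 2.0 + 0.4 * (years - 1995) - 0.012 * (years - 1995) ** 2

# Train on shifted years; coef0=1 matches the (u . v + 1)^h kernel form.
t_train = (years - 1995).reshape(-1, 1).astype(float)
model = SVR(kernel="poly", degree=3, coef0=1.0, C=100.0, epsilon=0.1)
model.fit(t_train, bcr)

# Extrapolate over the 2019-2035 prediction window.
t_future = (np.arange(2019, 2036) - 1995).reshape(-1, 1).astype(float)
bcr_pred = model.predict(t_future)
```
+
+The hyperparameters `C` and `epsilon` are illustrative; in practice they would be tuned by cross-validation.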
+
+As can be seen from Figure.7(a), the benefit-cost ratio of project $\mathrm{P_A}$ without considering ecosystem services first increases and then decreases with time. In other words, although the profit rate of such projects grows relatively fast at the beginning, the benefit starts to decline sharply later.
+
+
+Figure.7 The practical values (1995~2019) and predicted values (2019~2035) of the benefit-cost ratio of project $\mathrm{P_A}$. The blue line is produced by support vector machine regression, and the red dotted line is a fitting curve with a confidence interval of 0.95. (a) fitting curve of benefit data over time without the cost of ecosystem services; (b) fitting curve of benefit data over time taking into account the cost of ecosystem services.
+
+
+
+As for Figure.7(b), project $\mathrm{P_A}$ here takes into account the cost of ecosystem services, that is, the external cost. Even though the rate of profit growth at the beginning is slow, the benefits keep slowly increasing with time. In the predicted period of nearly two decades, the benefit does not decline but steadily increases. It can be seen that considering ecosystem services can effectively extend the life cycle of the project, consolidate its existing status, and delay the arrival of the recession.
+
+# 5.3 Result analysis
+
+In order to compare more intuitively the impact of considering ecosystem services on the benefit-cost ratio of land-use projects, we combine the two curves of Figure.7 into Figure.8 to facilitate comparative analysis.
+
+
+Figure.8 The practical value (1995-2019) and predicted value of benefit-cost ratio of project $\mathrm{P_A}$ (2019-2035).
+
+As can be seen from Figure.8, the benefit-cost ratio of the project that does not consider ecosystem services peaks in 2013. Because such a project damages the ecological environment beyond its capacity, its efficiency then decreases gradually and its benefit declines rapidly with time. In 2026 the two curves intersect, indicating that the benefit-cost ratios with and without the cost of ecological services are equivalent at that point. After that (shaded area), the benefits obtained by considering ecological services exceed those obtained by ignoring them. By 2033, ignoring ecological services leads to destruction of the ecological environment that seriously affects project operation, driving the benefit-cost ratio down to 0.
+
+Table 4 shows the specific values and descriptions of some special information points in Figure 8. Among them, $\mathrm{N}$ represents the absence of ecosystem services, $\mathrm{Y}$ represents the adoption of ecosystem services. We use the same model for cost-benefit analysis of ${\mathrm{P}}_{\mathrm{B}},{\mathrm{P}}_{\mathrm{C}}$ and ${\mathrm{P}}_{\mathrm{D}}$ . The specific results are shown in Appendix 1,2,3.
+
+Table 4 BCR of the project $\mathbf{P}_{\mathbf{A}}$ according to Figure.8
+
+| \(P_A\) | Year | 1995 | 2013 | 2019 | 2026 | 2030 | 2033 |
| N | BCR | 1.06 | 11.38 | 10.48 | 6.13 | 2.26 | 0 |
| | Description | ↗ | Highest | ↘ | Intersection | ↘ | Zero |
| Y | BCR | 0.73 | 6.03 | 6.10 | 6.13 | 6.24 | 6.39 |
| | Description | ↗ | ↗ | ↗ | ↗ | ↗ | ↗ |
+
+# 6. Sustainable development analysis
+
+From the above analysis, we can see that considering the cost of ecosystem services can increase the life cycle of land-use projects and increase cumulative benefits. In this section, we will explore the impact of specific ecological service measures on land use projects.
+
+# 6.1 Analysis of ecosystem services measures
+
+We broadly divide ecosystem service measures into two categories: sustainable development measures and unsustainable development measures, and compile statistics on the ecosystem service measures of projects $\mathrm{P_A}$, $\mathrm{P_B}$, $\mathrm{P_C}$ and $\mathrm{P_D}$.
+
+From the pie charts shown in Figure.9 for the land use projects of different scales $\mathrm{P_A}$, $\mathrm{P_B}$, $\mathrm{P_C}$ and $\mathrm{P_D}$, we can see that as the scale of the project increases, the proportion of sustainable development measures adopted also increases. For example, for the national pipeline project, sustainable development measures account for $91\%$ of the total measures, and non-sustainable development measures account for only $9\%$. It is therefore not difficult to find that sustainable development measures are valued at almost all scales.
+
+
+Figure.9 Statistical results (proportions of sustainable and unsustainable measures) of four projects: (a) project $\mathrm{P_A}$; (b) project $\mathrm{P_B}$; (c) project $\mathrm{P_C}$; (d) project $\mathrm{P_D}$.
+
+From the ecosystem service system established in the fourth part, it is not difficult to find that 11 indicators are divided into 6 sustainable development indicators and 5 unsustainable development indicators, and the sustainable development indicators are shown in Figure.10.
+
+
+Figure.10 The schematic diagram of the six sustainable development indicators
+
+# 6.2 Influence of Sustainability on benefit-cost ratio
+
+In order to reflect more clearly the impact of sustainable development measures on the benefits of land-use projects, we take projects $\mathrm{P_A}$, $\mathrm{P_B}$, $\mathrm{P_C}$ and $\mathrm{P_D}$ as research objects. The ecosystem service evaluation model established in Section 5 is used to analyze and predict the benefit-cost ratio as influenced by sustainable development measures for 2004-2018. The results are shown in Figure.11.
+
+
+Figure.11 Benefit-cost ratio curves of the four projects as influenced by sustainable development measures only
+
+As can be seen from the above Figure.11, for large-scale national pipeline engineering, the benefits of adopting sustainable development measures will grow exponentially over time, and the gap between medium and small scale will increase. It can be seen that the adoption of sustainable development measures has a significant impact on the long-term development of the project, and can significantly increase the cumulative benefits of the project in the later stages of project maturity.
+
+From the Figure.11, we fitted the relationship expression between the benefit brought by sustainable development index and index weight. The fitting results are shown in Table 5.
+
+Table 5 Result of fitting according to Figure.11
+
+| Project | Equation |
| \(P_A\) | \(B_{P_A}=87.31\cdot\exp(0.1056\cdot\Delta t)+0.0014\) |
| \(P_B\) | \(B_{P_B}=88.4\cdot\exp(0.1098\cdot\Delta t)+0.0014\) |
| \(P_C\) | \(B_{P_C}=101.4\cdot\exp(0.1198\cdot\Delta t)+0.0014\) |
| \(P_D\) | \(B_{P_D}=11.98\cdot\exp(0.1033\cdot\Delta t)+0.0013\) |
+
+From Table 5, we can summarize the relationship between sustainable development and the benefit-cost ratio of land-use projects of any size. We portray their relationship as equation (22).
+
+$$
+B = \eta \cdot \sum_ {s = 1} ^ {h} w _ {i} \cdot x _ {(s, t)} \left(e ^ {m \cdot \Delta t} + c\right) \tag {22}
+$$
+
+$$
+\theta(w, x) = \eta \cdot \sum_{s=1}^{h} w_{s} \cdot x_{(s,t)} \tag{23}
+$$
+
+where $B$ is the benefit brought by the sustainable development indicators; $\eta$ and $c$ are constants; $w_{s}$ is the weight of indicator $s$ we defined before; $h$ is the number of sustainable development indicators, $h = 6$ ; $x_{(s,t)}$ is the value of indicator $s$ in year $t$ ; $m$ is related to the size of the project; $\Delta t$ is the time for which the land use project has existed; $\theta (w,x)$ is a function of $w$ and $x$ .
+
+Subsequently, we define $\theta(w,x)$ as the sustainable development factor (SDF), which is determined jointly by the sustainable development indicators and their weights.
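
Equations (22)-(23) can be sketched numerically. The following Python snippet is a minimal illustration; the indicator values, weights, and the parameters `m` and `c` are invented for demonstration (loosely in the range of the Table 5 fits), not fitted data:

```python
import math

def sdf(weights, indicators, eta=1.0):
    """Sustainable development factor theta(w, x), equation (23):
    eta times the weighted sum of the h indicator values."""
    assert len(weights) == len(indicators)
    return eta * sum(w * x for w, x in zip(weights, indicators))

def sustainability_benefit(weights, indicators, m, dt, c, eta=1.0):
    """Benefit from sustainability measures, equation (22):
    B = theta(w, x) * (exp(m * dt) + c)."""
    return sdf(weights, indicators, eta) * (math.exp(m * dt) + c)

# Illustrative values only: six indicators with equal weights.
w = [1 / 6] * 6
x = [80.0, 95.0, 70.0, 110.0, 60.0, 90.0]
print(sustainability_benefit(w, x, m=0.105, dt=10.0, c=0.0014))
```

Because the factor $e^{m\Delta t}$ dominates for large $\Delta t$, the benefit curve grows exponentially with project age, matching the shape seen in Figure.11.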
+
+Taking into account the uncertainty $\delta$ in our benefit-cost ratio equation, the above analysis clearly shows that a large part of the benefits attributed to uncertain factors is in fact the benefit brought by the adoption of sustainable development measures. Thus, we get equation (24).
+
+$$
+\delta = B + \varpi \tag {24}
+$$
+
+where $B$ is the benefit brought by sustainable development indicators; $\varpi$ is the benefits associated with policies, subsidies, etc.
+
+The adoption of sustainable development measures has important practical significance for extending the life cycle of enterprises, delaying the arrival of recession and increasing the cumulative benefits of projects. The introduction of SDF to describe the level of sustainable development is of great significance for the long-term development of projects.
+
+# 7. Sensitivity Analysis
+
+Based on the sustainable development factors calculated from the above four projects, we calculate their real benefit-cost ratio, as shown in Figure 12.
+
+
+Figure.12 The curve of benefit-cost ratio over time of four projects.
+
+To test the robustness of our model, in this section, by assigning different values to $\theta$ , we conducted a regression analysis on the benefit-over-time data from 1995 to 2018 and predicted the data from 2019 to 2035. The final result is shown in Figure.13.
+
+To quantitatively characterize the impact of the $SDF$ on a specific project, we have depicted the change curve of the benefit-cost ratio over time for project $\mathrm{P_A}$ . As can be seen from Figure.13, as $\theta$ increases, the benefit-cost ratio of the same project grows increasingly slowly; that is, the area between two adjacent curves gradually shrinks. This means that a larger $\theta$ is not always better: there is an optimal value. When $\theta$ exceeds this optimal value, the environmental damage caused by the project has already been alleviated, so further increases in $\theta$ no longer yield commensurate benefits, and the distance between adjacent curves keeps narrowing, which accords with the development law of natural things. This shows the stability of our model, which can solve practical problems in real life.
+
+
+Figure.13 The curve of benefit-cost ratio over time with different $\theta$ (project $\mathrm{P_A}$ ).
+
+# 8. Advice on project planning and management
+
+The real economic cost of the land use project is assessed by establishing an ecological services valuation model. We introduce sustainability assessment factors to characterize the factors that influence the effectiveness of their ecosystem services.
+
+From the results of Figure.11, we can see that adopting sustainable development measures will make the long-term $BCR$ of the project grow steadily. Although considering ecosystem services increases the cost of land use projects, in the long run, it will increase the project life cycle and cumulative benefits. Therefore, we suggest that in the planning of future land use projects, the emphasis on sustainable development factors will contribute to the long-term development of the projects.
+
+As for the influence on land use project planners and managers[12], first of all, we should take the workload of existing land use projects as the benchmark, taking into account a stable transition of the business and the efficiency improvement after consolidation. Secondly, in the initial stage of introducing sustainable development measures, changes in business models will have a certain impact on the efficiency of the project staff, so posts should be assigned according to the staff's familiarity with them. Finally, after the project is running stably, as staff efficiency improves, posts can be adjusted according to personnel and business conditions.
+
+# 9. Conclusion
+
+To conclude, we first establish an ecological services valuation system based on the entropy weight method and the coefficient of variation method to represent the ecosystem service index. Furthermore, we apply hierarchical clustering to determine the intensity ranges for weak, moderate and strong respectively. Tests on four projects of varying sizes show that our model is robust and correct.
+
+Thereafter, in order to perform a cost-benefit analysis of land use development projects of varying sizes, we first analyzed the correlation between original costs and ecosystem services, and found that ecosystem services have a significant effect on the increase of the benefit-cost ratio. Then, we use support vector machines to predict the data for 2019-2035. The results illustrate that after considering the cost of ecosystem services, land use projects will continue to develop in a stable trend; although projects that ignore the cost of ecosystem services have greater benefits in the short term, they develop unsteadily in the long term. To further explore the factors affecting the benefit-cost ratio of a project, we introduce the sustainable development factor $\theta$ . When a project takes $\theta$ into account, its life cycle and cumulative benefits will be greater.
+
+Finally, we put forward suggestions for the development of land-use projects and adapt our model to small community-based projects and large national projects by substituting the indicators in $\theta$ while maintaining the framework.
+
+# 9.1 Strengths
+
+- We use the entropy weight method and the coefficient of variation method to determine the weight of ecosystem service indicators, avoiding the negative effects of a single method.
+- The experimental results show that our model can be applied to the long-term cost-benefit analysis of land use projects of different scales.
+- We validate our model through real land use project cases and find that our model is highly effective.
+- We use our model to discover the importance of sustainable development services for land use projects and to quantify the importance.
+
+# 9.2 Weaknesses
+
+- Due to the limited search data, the indicators used cannot completely describe the actual ecosystem services, which may reduce the accuracy of our model.
+- Our model does not take into account the possible costs of policies, natural disasters, etc., so there is a bias in cost-benefit analysis.
+
+# References
+
+[1] de GROOT R S, ALKEMADE R, BRAAT L, et al. Challenges in integrating the concept of ecosystem services and values in landscape planning, management and decision making [J]. Ecol Complexity, 2010, 7(3): 260-272.
+[2] BRAAT L C, de GROOT R. The ecosystem services agenda: bridging the worlds of natural science and economics, conservation and development, and public and private policy [J]. Ecosyst Serv, 2012, 1(1):4-15.
+[3] CHENG Min, ZHANG Liyun, CUI Lijuan, et al. Progress in ecosystem services value valuation of coastal wetlands[J]. Acta Ecol Sin, 2016, 36(23): 7509-7518.
+[4] Sambasivam S, Theodosopoulos N. Advanced data clustering methods of mining Web documents. Issues in Informing Science and Information Technology, 2006,(3):563-579
+[5] Sun JG, Liu J, Zhao LY. Clustering algorithms research. Journal of Software, 2008,19(1):48-61
+[6] Pandey M D, Nathwani J S. Canada Wide Standard for Particulate Matter and Ozone: Cost-Benefit Analysis Using a Life Quality Index[J]. Risk Analysis.2003 .23(1):55-67.
+[7] Fan Mingtian, Zhang Zuping, Su Aoxue. Cost-benefit analysis of integration DER into distribution network[C]//Workshop 2012: Integration of Renewables into the Distribution Grid. Lisbon, Portugal: Cired, 2012: 1-4.
+[8] Vapnik V N. The Nature of Statistical Learning Theory [M]. New York: Springer, 2000: 17-34.
+[9] García Nieto P J, Combarro E F, Montañés E, et al. A SVM-based regression model to study the air quality at local scale in Oviedo urban area (Northern Spain): A case study [J]. Applied Mathematics and Computation, 2013, 219(17): 8923-8937.
+[10] Lu W Z, Wang W J. Potential assessment of the 'support vector machine' method in forecasting ambient air pollutant trends [J]. Chemosphere, 2005, 59(5): 693-701.
+[11] Greg Kats, Capital E. The Costs and Financial Benefits of Green Buildings, A Report to California's Sustainable Building Task Force, 2003.
+[12] Pina V, Torres L. Analysis of the efficiency of local government services delivery. An application to urban public transport [J]. Transportation Research Part A, 2001, 35(10): 929-944.
+
+Appendix 1: ESVM result of project $\mathbf{P_B}$
+
+| \(P_B\) | Year | 1995 | 2012 | 2015 | 2020 | 2025 | 2028 |
| N | ICR | 2.38 | 12.52 | 10.26 | 6.32 | 2.18 | 0 |
| Description | ↗ | Highest | ↘ | Intersection | ↘ | Zero |
| Y | ICR | 1.29 | 3.68 | 5.81 | 6.32 | 7.24 | 7.89 |
| Description | ↗ | ↗ | ↗ | ↗ | ↗ | ↗ |
+
+Appendix 2: ESVM result of project $\mathbf{P}_{\mathrm{C}}$
+
+| \(P_C\) | Year | 1995 | 2010 | 2013 | 2016 | 2018 | 2020 |
| N | ICR | 4.92 | 20.16 | 16.24 | 11.72 | 4.37 | 0 |
| Description | ↗ | Highest | ↘ | Intersection | ↘ | Zero |
| Y | ICR | 2.67 | 6.27 | 9.19 | 11.72 | 12.46 | 13.25 |
| Description | ↗ | ↗ | ↗ | ↗ | ↗ | ↗ |
+
+Appendix 3: ESVM result of project $\mathbf{P}_{\mathrm{D}}$
+
+| \(P_D\) | Year | 1995 | 2026 | 2035 | 2043 | 2050 | 2056 |
| N | ICR | 1.06 | 6.33 | 5.49 | 3.87 | 1.75 | 0 |
| Description | ↗ | Highest | ↘ | Intersection | ↘ | Zero |
| Y | ICR | 0.73 | 2.26 | 3.15 | 3.87 | 4.26 | 4.82 |
| Description | ↗ | ↗ | ↗ | ↗ | ↗ | ↗ |
\ No newline at end of file
diff --git a/MCM/2019/E/1916129/1916129.md b/MCM/2019/E/1916129/1916129.md
new file mode 100644
index 0000000000000000000000000000000000000000..307f48953cbd87fa07a62579a3cd4f387548a73e
--- /dev/null
+++ b/MCM/2019/E/1916129/1916129.md
@@ -0,0 +1,682 @@
+| For office use only | Team Control Number | For office use only |
| T1 | 1916129 | F1 |
| T2 | | F2 |
| T3 | Problem Chosen | F3 |
| T4 | E | F4 |
+
+# 2019
+
+# MCM/ICM
+
+# Summary Sheet
+
+To evaluate the unmeasurable cost of environmental services, our team introduces an accounting model to calculate the economic costs of ecosystem services. We divide the true economic costs of land use projects into three parts: natural resource consumption, cost of environmental pollution and cost of environmental degradation.
+
+To measure the natural resource consumption more accurately, we discuss the non-market consumption and market consumption separately. For the non-market consumption, we express it with the ecological value of the natural resource we put in. For market consumption, we express it with the shadow price of the net primary production (NPP), which can be calculated by CASA model.
+
+When it comes to cost of environmental pollution, we divide the environmental pollution into water pollution, air pollution and industrial waste. Then we consider the derivative effect of pollution by calculating the economic loss it causes.
+
+As for the cost of environmental degradation, we divide it into the cost of vegetation depletion, land degradation and biodiversity decrease. We introduce the concept of ecological value to measure the cost of vegetation depletion, while the cost of land degradation is measured by its opportunity cost. Then we introduce Shannon Wiener Index to measure the biodiversity decrease.
+
+To calculate the environmental degradation cost of land use projects, we regard the self-recovery process of an ecosystem as a negative feedback process based on the feedback principle of the BP neural network. Then we construct a long-term ecological self-recover model. In order to weight different factors' influence more accurately, we develop an OBP (one-way back propagation) neural network, which is significantly simplified from the well-known BP neural network. Firstly, we train the known data in the net without considering environmental recovery. After obtaining the weight of each factor, we use it in the long-term ecological self-recover model and calculate the cost of environmental degradation.
+
+Then we cite three typical cases to conduct the cost-benefit analysis: a House, a Subway and a Steel Mill. After our cost-benefit analysis, the House is not worth building once the cost of ecosystem services is considered, while it is worth building in the traditional analysis model. The Subway and the Steel Mill are worth constructing in both the traditional way and our new model. From these cases, we can see the significance of considering the cost of ecosystem services, for it may influence the decisions of planners.
+
+At the end of the paper, we have a further discussion about the cost of ecosystem services. We put forward an innovative expression of Green GDP with the model we built. Moreover, to relieve the externalities of land use projects, we consider Pigou tax and define it in a new way.
+
+Key Words: ecological value, BP neural network, long-term ecological self-recover model, cost-benefit analysis, Green GDP, externalities
+
+# CONTENT
+
+1 Introduction
+
+1.1 Background
+1.2 Problem Restatement
+1.3 Symbol Description
+
+2 Measurement of Natural Resource Consumption
+
+2.1 Net Primary Production
+2.2 Modified Cobb-Douglas Production Function
+2.3 Measurement of Non-Market Consumption
+2.4 Measurement of Market Consumption
+
+3 Cost of Environmental Pollution
+
+3.1 Cost of Water Pollution Control
+3.2 Cost of Air Pollution Control
+3.3 Cost of Industrial Waste Control
+3.4 Cost of Derivative Effects of Pollution
+
+4 Cost of Environmental Degradation
+
+4.1 Cost of Vegetation Depletion
+4.2 Cost of Land Degradation
+4.3 Cost of Biodiversity Decrease
+
+5 Long-Term Ecological Self-Recover Model
+
+5.1 General Assumption
+5.2 Weight Update Formula
+5.3 Simulation Process
+5.4 Practical Application of the Model
+
+6 Cost-Benefit Analysis of Land Use Project
+
+6.1 General Assumption
+6.2 Cost Analysis of Land Use Project
+6.3 Benefit Analysis of Land Use Project
+6.4 Cost-Benefit Ratio
+6.5 Case Analysis
+
+7 Sensitivity Analysis
+
+8 Strength and Weakness
+
+8.1 Strength
+8.2 Weakness
+
+9 Further Discussion
+
+9.1 Pigou Tax for Externalities of Natural Resources
+9.2 Inspiration to the Expression of Green GDP
+
+Appendix
+
+I. Reference
+II. Cost of Ecological Service Data Sheet
+
+# 1 Introduction
+
+# 1.1 Background
+
+Ecosystem services are the conditions and processes through which natural ecosystems, and the species that make them up, sustain and fulfil human life [1]. Ecosystem services provide a guarantee for the survival of all life on earth. In traditional economic theory, ecosystem services tend to be ignored and make no difference to the calculation of some economic indexes, such as GDP. However, ecosystem services can be limited or removed by human activities, which can affect biodiversity and cause cumulative environmental degradation. Moreover, as research shows, the value of the entire biosphere (most of which is outside the market) is estimated at between $16 trillion and $54 trillion per year, with an average of $33 trillion per year [2]. Therefore, it is necessary to put a value on the environmental cost of human activities that have a negative impact on the environment.
+
+# 1.2 Problem Restatement
+
+In this paper, we aim to identify the negative impacts of human activities on the environment and to value their environmental cost. We need to solve the following problems:
+
+> Create a model to calculate the environmental cost of land use development projects. It consists of natural resource consumption, cost of environmental pollution and cost of environmental degradation.
+> Use the model to conduct a cost-benefit analysis of actual land use development projects.
+> Evaluate the effectiveness and timeliness of the model.
+
+# 1.3 Symbol Description
+
+Table 1. Symbol Description
+
+| Symbol | Description |
| Cres | Natural Resource Consumption |
| Cpol | Cost of Environmental Pollution |
| Cdeg | Cost of Environmental Degradation |
| N | Net Primary Production (NPP) |
| Y | Total Output of an Economic Entity |
| A | Technology Level of the Economic Entity |
| L | Labor Put into Production |
| K | Capital Put into Production |
+
+# 2 Measurement of Natural Resource Consumption
+
+# 2.1 Net Primary Production
+
+Net primary production (NPP) is a significant indicator for evaluating the level of ecosystem service. It measures the amount of solar energy fixed by vegetation per unit of time, after the energy expended on its metabolism is deducted. NPP reflects the regulatory capacity of an ecosystem: the higher the value of NPP, the more carbon dioxide is fixed by vegetation through photosynthesis and the more nitrogen is deposited. This means a lower carbon dioxide content in the atmosphere, which directly reduces the greenhouse effect [5].
+
+# 2.1.1 CASA Model for NPP Assessment
+
+By referring to the literature [5], we found a method to roughly estimate the value of $NPP$ : it is the product of the photosynthetically active radiation absorbed by vegetation (APAR) and the efficiency of solar energy utilization.
+
+$$
+NPP = APAR \cdot \varepsilon \tag{1}
+$$
+
+Where, $NPP$ is net primary production, $APAR$ is the photosynthetically active radiation absorbed by vegetation and $\varepsilon$ is the efficiency of solar energy utilization.
+
+1) $APAR$ is influenced by the fraction of photosynthetically active radiation ($FPAR$), the total solar radiation ($SOL$) and the effective radiation ratio, which is 0.5.
+
+$$
+APAR = FPAR \cdot SOL \cdot 0.5 \tag{2}
+$$
+
+Where, $FPAR$ is the fraction of photosynthetically active radiation and $SOL$ is the total solar radiation, which can be obtained by spatially interpolating the data of all solar radiation stations.
+
+$$
+FPAR = \frac{FPAR_{NDVI} + FPAR_{SR}}{2} \tag{3}
+$$
+
+Where, $\mathrm{FPAR}_{NDVI}$ can be calculated by NDVI image and NDVI pixel value, $\mathrm{FPAR}_{SR}$ can be calculated by SR index. The value of NDVI depends on the type of the vegetation. The SR index can be calculated by normalized difference vegetation index (NDVI).
+
+2) $\varepsilon$ depends on the maximum light utilization, the temperature stress factor and the water stress coefficient.
+
+$$
+\varepsilon = \varepsilon_{max} \cdot T_{\varepsilon} \cdot W_{\varepsilon} \tag{4}
+$$
+
+Where, $\varepsilon_{max}$ is the maximum light utilization, a constant that depends on the type of vegetation; $T_{\varepsilon}$ is the temperature stress factor and $W_{\varepsilon}$ is the water stress coefficient.
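
Chaining equations (1)-(4) gives a one-pixel NPP estimate. The Python sketch below uses invented inputs for a single grid cell (the FPAR, SOL and stress values are placeholders, not measured data):

```python
def casa_npp(fpar_ndvi, fpar_sr, sol, eps_max, t_eps, w_eps):
    """Rough CASA estimate of net primary production, equations (1)-(4).
    All inputs are scalars for one pixel/site."""
    fpar = (fpar_ndvi + fpar_sr) / 2   # equation (3)
    apar = fpar * sol * 0.5            # equation (2); 0.5 = effective radiation ratio
    eps = eps_max * t_eps * w_eps      # equation (4)
    return apar * eps                  # equation (1)

# Hypothetical inputs for one grid cell:
print(casa_npp(fpar_ndvi=0.6, fpar_sr=0.5, sol=5000.0,
               eps_max=0.389, t_eps=0.9, w_eps=0.8))
```

In practice the same computation would be applied per pixel over the NDVI and solar-radiation rasters rather than to scalars.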
+
+# 2.2 Modified Cobb-Douglas Production Function
+
+The Cobb-Douglas production function is an accounting equation measuring the total output of an economic entity. In classical economic theory, the contribution of ecosystem services to the growth of output tends to be ignored. In this way, the traditional Cobb-Douglas production function has the form:
+
+$$
+Y = A \cdot L^{\alpha} \cdot K^{\beta} \tag{5}
+$$
+
+Where, $Y$ is the total output of an economic entity, $A$ is technology level of the economic entity, $L$ is labor put into production, $K$ is capital put into production, $\alpha, \beta$ are the output elasticities of labor and capital. Output elasticity measures the change of output introduced by the increase of unit factor input.
+
+The environmental cost of human activities cannot be reflected in the above formula. Therefore, we introduce the modified Cobb-Douglas production function:
+
+$$
+Y = A \cdot L^{\alpha} \cdot K^{\beta} \cdot N^{\lambda} \tag{6}
+$$
+
+Where, $N$ is net primary production (NPP) and $\lambda$ is the output elasticity of NPP.
+
+# 2.3 Measurement of Non-Market Consumption
+
+It is hard to measure non-market consumption because there is no way to observe consumers' willingness to pay. In this case, we introduce ecological value to represent the natural resource consumption. Ecological value can be divided into five parts: the value of fixed carbon dioxide, oxygen released, purifying the water quality, cleaning the dust and conserving the soil.
+
+# 2.3.1 Value of Fixed Carbon Dioxide and Oxygen released
+
+Vegetation has the ability to fix carbon dioxide and release oxygen by photosynthesis. The reaction equation is as follows:
+
+$$
+6CO_{2} + 12H_{2}O \rightarrow C_{6}H_{12}O_{6} + 6O_{2} + 6H_{2}O \tag{7}
+$$
+
+The mass ratio of NPP, fixed carbon dioxide and released oxygen is $100{:}163{:}120$ [4]. Using the carbon tax approach and the oxygen cost method, we can calculate the value of fixed carbon dioxide and released oxygen with the following formula:
+
+$$
+V_{P} = S \cdot \left(c_{O_{2}} \cdot m_{O_{2}} + c_{C} \cdot \left(m_{C} + H_{S}\right)\right) \tag{8}
+$$
+
+Where, $V_{P}$ is the value of fixed carbon dioxide and oxygen released, $S$ is the floor space of vegetation, $c_{O_2}, c_C$ are respectively the price of oxygen and coal per ton, $m_{O_2}, m_C$ are respectively the amount of oxygen released and carbon fixed, $H_{S}$ is carbon content of the soil.
+
+The value of each constant can be seen in the following table:
+
+Table 2. Value of Constants in Formula (8) [8]
+
+| Constant | Value |
| cO2 | 198.6 $/kg |
| mO2 | 0.05625 kg/m2 |
| cC | 150 $/t |
| mC | 0.07755 kg/m2 |
| HS | 0.505 kg/m2 |
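
Formula (8) with the Table 2 constants can be evaluated directly. One caveat: Table 2 gives $c_C$ in \$/t while $m_C$ and $H_S$ are in kg/m², so the sketch below converts $c_C$ to \$/kg for unit consistency; this conversion is our assumption, not stated in the source:

```python
def value_fixed_co2_o2(area_m2):
    """Value of fixed CO2 and released O2, formula (8), Table 2 constants.
    Assumption: c_C (150 $/t in Table 2) is converted to $/kg so its units
    match m_C and H_S (kg/m^2)."""
    c_o2, m_o2 = 198.6, 0.05625    # $/kg, kg/m^2
    c_c = 150.0 / 1000.0           # $/t -> $/kg (our unit-consistency assumption)
    m_c, h_s = 0.07755, 0.505      # kg/m^2
    return area_m2 * (c_o2 * m_o2 + c_c * (m_c + h_s))

print(value_fixed_co2_o2(10_000))  # value for a hypothetical one-hectare stand
```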
+
+# 2.3.2 Value of Conserving the Soil
+
+The roots of vegetation are firmly entwined with the soil, which can help conserve the soil. The value of conserving the soil can be calculated by the following formula:
+
+$$
+V_{s} = \frac{S \cdot c_{soil} \cdot (x_{2} - x_{1})}{\rho} \tag{9}
+$$
+
+Where, $V_{s}$ is the value of conserving the soil, $S$ is the floor space of vegetation, $c_{\text{soil}}$ is the cost of digging up and transporting a unit volume of soil, $x_{2}$ is soil erosion modulus without vegetation, $x_{1}$ is soil erosion modulus with vegetation, $\rho$ is soil bulk density.
+
+The value of each constant can be seen in the following table:
+
+Table 3. Value of Constants in Formula (9) [9]
+
+| Constant | Value |
| csoil | 6 $/m3 |
| x2 | 38.02 t/hm2 |
| x1 | 2.86 t/hm2 |
| ρ | 1.09 g/cm3 |
+
+# 2.3.3 Value of Purifying the Water Quality
+
+The branches, leaves and soil of vegetation are able to filter the pollutant in the rainfall. The value of purifying the rainfall quality can be calculated by the following formula:
+
+$$
+V_{w} = 10 \cdot S \cdot \left(c_{wat} + c_{poo}\right) \cdot (R - TR) \tag{10}
+$$
+
+Where, $V_{w}$ is the value of purifying the water quality, 10 is a unit conversion ratio, $S$ is the floor space of vegetation, $c_{wat}, c_{poo}$ are respectively the cost of treating polluted water and of building reservoir capacity, $R$ is the amount of rainfall, and $TR$ is the amount of transpiration.
+
+The value of each constant can be seen in the following table:
+
+Table 4. Value of Constants in Formula (10) [10]
+
+| Constant | Value |
| cwat | 0.417 $/t |
| cpool | 0.147 $/a |
| R | 483.3 mm/a |
| TR | 374.8 mm/a |
+
+# 2.3.4 Value of Cleaning the Dust
+
+Vegetation is able to absorb the hazardous substance in the atmosphere, such as $SO_2$ , HF, $NO_X$ and dust. The value of it can be calculated by the following formula:
+
+$$
+V_{d} = S \cdot \sum_{i=1}^{4} c_{i} \cdot Q_{i} \tag{11}
+$$
+
+Where, $V_{d}$ is the value of cleaning the dust, $S$ is the floor space of vegetation, $c_{i}$ is the cost of controlling hazardous substance, $Q_{i}$ is the amount of absorption per unit area, $i$ can be $SO_{2}, HF, NO_{X}$ and dust.
+
+The value of each constant can be seen in the following table:
+
+Table 5. Value of Constants in Formula (11) [10]
+
+| Constant | Value |
| QSO2 | 37.3 kg/(hm2.a) |
| QHF | 1.68 kg/(hm2.a) |
| QNOX | 6.00 kg/(hm2.a) |
| CSO2 | 0.171 $/kg |
| CHF | 0.099 $/kg |
| CNOX | 0.090 $/kg |
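
Formulas (9)-(11) with the constants of Tables 3-5 can be sketched as follows. Two assumptions are flagged in the code: areas are taken in hectares (hm²) so the per-hm² constants apply directly, and the dust term of formula (11) is omitted because Table 5 lists no unit cost or absorption rate for dust:

```python
def soil_conservation_value(area_hm2):
    """Formula (9), Table 3 constants. (x2 - x1)/rho converts the retained
    soil mass (t/hm^2) to a volume (m^3/hm^2), priced at c_soil $/m^3."""
    c_soil, x2, x1, rho = 6.0, 38.02, 2.86, 1.09
    return area_hm2 * c_soil * (x2 - x1) / rho

def water_purification_value(area_hm2):
    """Formula (10), Table 4 constants; 10 is the unit-conversion
    ratio (hm^2 * mm -> m^3)."""
    c_wat, c_poo, r, tr = 0.417, 0.147, 483.3, 374.8
    return 10.0 * area_hm2 * (c_wat + c_poo) * (r - tr)

def dust_cleaning_value(area_hm2):
    """Formula (11), Table 5 constants. Only the SO2, HF and NOx terms
    are summed; Table 5 gives no constants for the dust term."""
    terms = [(0.171, 37.3), (0.099, 1.68), (0.090, 6.00)]  # (c_i $/kg, Q_i kg/(hm^2*a))
    return area_hm2 * sum(c * q for c, q in terms)

area = 100.0  # a hypothetical 100 hm^2 vegetated site
print(soil_conservation_value(area),
      water_purification_value(area),
      dust_cleaning_value(area))
```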
+
+# 2.4 Measurement of Market Consumption
+
+In the factor market, the factor inputs can be adjusted over time. With the modified Cobb-Douglas production function, we use the shadow price of net primary production (NPP) to represent the natural resource consumption. The shadow price reflects the marginal contribution of NPP to output. Based on formula (6), we can calculate it by taking the partial derivative of output with respect to NPP.
+
+$$
+C_{res} = \frac{\partial Y}{\partial N} = A \cdot \lambda \cdot L^{\alpha} \cdot K^{\beta} \cdot N^{\lambda - 1} \tag{12}
+$$
+
+Where, $Y$ is total output of an economic entity, $N$ is Net Primary Production (NPP), $A$ is technology level of the economic entity, $L$ is labor put into production, $K$ is capital put into production, $\alpha, \beta, \lambda$ are the output elasticities of labor, capital and NPP.
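
Since $\partial Y/\partial N = \lambda Y / N$ for the Cobb-Douglas form, formula (12) has a simple sanity check. The parameter values below are illustrative only:

```python
def shadow_price_npp(A, L, K, N, alpha, beta, lam):
    """Marginal contribution of NPP to output, formula (12):
    dY/dN for the modified Cobb-Douglas function Y = A L^a K^b N^l."""
    return A * lam * (L ** alpha) * (K ** beta) * (N ** (lam - 1))

# Sanity check: dY/dN must equal lam * Y / N (illustrative parameters).
A, L, K, N = 2.0, 100.0, 50.0, 30.0
alpha, beta, lam = 0.5, 0.3, 0.2
Y = A * L**alpha * K**beta * N**lam
print(shadow_price_npp(A, L, K, N, alpha, beta, lam), lam * Y / N)
```

The identity $C_{res} = \lambda Y / N$ also gives an easy way to estimate the shadow price from observed output once $\lambda$ has been fitted.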
+
+# 3 Cost of Environmental Pollution
+
+The cost of environmental pollution mainly consists of four parts: the control costs of water pollution, air pollution and industrial waste, and the cost of the derivative effects of pollution. It can be represented by the following formula:
+
+$$
+C_{pol} = C_{w} + C_{a} + C_{i} + C_{d} \tag{13}
+$$
+
+Where, $C_{pol}$ is the cost of environmental pollution, and $C_w, C_a, C_i, C_d$ are respectively the control cost of water pollution, air pollution, industrial waste and the cost of derivative effects of pollution.
+
+# 3.1 Cost of Water Pollution Control
+
+Water pollution is mainly caused by cyanide and harmful metals. Moreover, a high level of chemical oxygen demand (COD) is also evidence of water pollution. Therefore, the cost of water pollution control comes from these three parts. The cost of each part is shown in the following table:
+
+Table 6. Control Cost of Water Pollution [6]
+
+| Type of Water Pollution | Cost of Pollution Control ($/t) |
| COD | 1671.43 |
| Cyanide | 357.00 |
| Harmful Metals | 225.57 |
+
+The cost of controlling water pollution can be calculated with the following formula:
+
+$$
+C_{w} = 1671.43 \cdot n_{COD} + 357 \cdot n_{Cya} + 225.57 \cdot n_{met} \tag{14}
+$$
+
+Where, $C_w$ is the cost of water pollution control, $n_{COD}$ is the excess of COD above the standard value, and $n_{Cya}, n_{met}$ respectively represent the amounts of cyanide and harmful metals.
+
+# 3.2 Cost of Air Pollution Control
+
+Air pollution is mainly caused by the pollutants $SO_x, CO_2, NO_x$ , dust and smoke. The control cost of each part is shown in the following table:
+
+Table 7. Control Cost of Air Pollution [6]
+
+| Type of Air Pollution | Cost of Pollution Control ($/t) |
| SOx | 92.86 |
| CO2 | 97.14 |
| NOx | 432.86 |
| Dust | 32.86 |
| Smoke | 20.00 |
+
+The cost of controlling air pollution can be calculated with the following formula:
+
+$$
+C_{a} = 92.86 \cdot n_{SO_{x}} + 97.14 \cdot n_{CO_{2}} + 432.86 \cdot n_{NO_{x}} + 32.86 \cdot n_{Dus} + 20 \cdot n_{Smo} \tag{15}
+$$
+
+Where, $C_a$ is the cost of air pollution control, $n_{\mathrm{SO}_x}, n_{\mathrm{CO}_2}, n_{\mathrm{NO}_x}, n_{\mathrm{Dus}}, n_{\mathrm{Smo}}$ respectively represent the amount of $SO_x$ , $CO_2$ , $NO_x$ , dust and smoke.
+
+# 3.3 Cost of Industrial Waste Control
+
+Industrial waste can be divided into general industrial solid waste and hazardous solid waste. The actual cost of controlling industrial waste is composed of the treatment cost and the storage cost of industrial solid waste. The calculation formula is as follows:
+
+$$
+C_{i} = C_{ctl} + C_{sto} \tag{16}
+$$
+
+Where, $C_i$ is the cost of industrial waste control, $C_{ctl}$ is the treatment cost and $C_{sto}$ is the storage cost.
+
+The control cost of each part is shown in the following table:
+
+Table 8. Control Cost of Industrial Waste [7]
+
+| Type of Cost | Type of Waste | Unit Control Cost ($/t) |
| Control Cost | General Industrial Solid Waste | 3.14 |
| Hazardous Waste | 214.29 |
| Storage Cost | General Industrial Solid Waste | 0.64 |
| Hazardous Waste | 2.15 |
+
+# 3.4 Cost of Derivative Effects of Pollution
+
+The derivative effects of pollution mainly refer to greenhouse effect, ozone hole and acid rain. The cost of it is calculated by the following formula:
+
+$$
+C_{d} = L_{GH} + L_{OH} + L_{AR} \tag{17}
+$$
+
+Where, $C_d$ is the cost of derivative effects of pollution, $L_{GH}, L_{OH}, L_{AR}$ are respectively the economic loss caused by greenhouse effect, ozone hole and acid rain.
+
+The main evaluation index of the greenhouse effect is carbon dioxide emissions: the greater the emissions, the more severe the greenhouse effect. Therefore, we approximate the severity of the greenhouse effect by carbon dioxide emissions. The loss from the greenhouse effect can be calculated with the following formula:
+
+$$
+L_{GH} = \alpha \cdot E_{c} \tag{18}
+$$
+
+Where, $\mathrm{L}_{GH}$ is the loss caused by greenhouse effect, $\alpha$ is the marginal rate of substitution of capital for carbon dioxide emissions, $E_{c}$ is the amount of carbon dioxide emissions.
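
Formulas (13)-(18) assemble into the total pollution cost. The sketch below treats the Table 6-8 figures as per-tonne rates; the emission amounts and the marginal rate $\alpha$ are invented for illustration (the source does not give a value for $\alpha$):

```python
def water_cost(n_cod, n_cya, n_met):
    """Formula (14), Table 6 unit costs ($/t)."""
    return 1671.43 * n_cod + 357.0 * n_cya + 225.57 * n_met

def air_cost(n_sox, n_co2, n_nox, n_dust, n_smoke):
    """Formula (15), Table 7 unit costs ($/t)."""
    return (92.86 * n_sox + 97.14 * n_co2 + 432.86 * n_nox
            + 32.86 * n_dust + 20.0 * n_smoke)

def waste_cost(n_general, n_hazard):
    """Formula (16), Table 8: control plus storage cost per tonne."""
    return (3.14 + 0.64) * n_general + (214.29 + 2.15) * n_hazard

def pollution_cost(cw, ca, ci, cd):
    """Formula (13): total cost of environmental pollution."""
    return cw + ca + ci + cd

# Hypothetical annual emissions (tonnes) for one project:
cw = water_cost(12.0, 0.5, 3.0)
ca = air_cost(40.0, 500.0, 25.0, 80.0, 60.0)
ci = waste_cost(1000.0, 10.0)
cd = 0.18 * 500.0  # formula (18), with an assumed alpha = 0.18 $/t
print(pollution_cost(cw, ca, ci, cd))
```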
+
+# 4 Cost of Environmental Degradation
+
+Environmental degradation refers to the deterioration or compromise of the natural environment through the consumption of assets by either natural processes or human activities. Classic examples of environmental degradation are vegetation depletion, land degradation and biodiversity decrease.
+
+$$
+C_{deg} = C_{vege} + C_{land} + C_{biod} \tag{19}
+$$
+
+Where, $C_{deg}$ is the cost of environmental degradation, and $C_{vege}, C_{land}, C_{biod}$ are respectively the costs of vegetation depletion, land degradation and biodiversity decrease.
+
+Then we will have a further discussion about the cost of the losses.
+
+# 4.1 Cost of Vegetation Depletion
+
+Vegetation depletion weakens the ability of the ecosystem to purify the air, which means it fails to absorb harmful gases such as $\mathrm{SO}_x$ and $\mathrm{NO}_x$ . Moreover, vegetation depletion decreases the $O_2$ and increases the $CO_2$ in the atmosphere. These losses can be measured by their ecological value, calculated as in Section 2.3.
+
+# 4.2 Cost of Land Degradation
+
+Land degradation leads to loss of land nutrient substance and sediment hazards. Based on the theory of market value, we can calculate soil conservation value by the opportunity cost of land nutrient substance and the control cost of sediment hazards.
+
+$$
+C_{land} = \Delta A \cdot \left(C_{nut} + C_{sed}\right) \tag{20}
+$$
+
+Where, $C_{land}$ is the total cost of land degradation, $\Delta A$ is the change in soil retention $A$ , $C_{nut}$ is the unit opportunity cost of land nutrient substance, and $C_{sed}$ is the control cost of sediment hazards.
+
+$$
+A = R \cdot K \cdot LS \cdot (1 - C \cdot P) \tag{21}
+$$
+
+Where, $R$ is rainfall erosion, which can be observed, $K$ is factor of soil erodibility, $LS$ is slope length slope factor, $C$ is land cover factor, $P$ is soil conservation measures factor. All of the parameters except $R$ are constants.
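
Formulas (20)-(21) can be combined by evaluating soil retention $A$ before and after the project and costing the difference. The factor values below are placeholders, not calibrated site constants:

```python
def soil_retention(R, K=0.25, LS=1.8, C=0.3, P=0.6):
    """Formula (21): A = R * K * LS * (1 - C * P). R is observed rainfall
    erosivity; K, LS, C, P are site constants (placeholder values here)."""
    return R * K * LS * (1 - C * P)

def land_degradation_cost(delta_A, c_nut, c_sed):
    """Formula (20): cost of land degradation from the change in soil
    retention and the unit opportunity/control costs."""
    return delta_A * (c_nut + c_sed)

a_before = soil_retention(R=400.0)
a_after = soil_retention(R=400.0, C=0.8)   # degraded land cover factor (assumed)
print(land_degradation_cost(a_before - a_after, c_nut=12.0, c_sed=5.0))
```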
+
+# 4.3 Cost of Biodiversity Decrease
+
+Misuse of ecological services leads to the decrease of biodiversity. To measure the biodiversity of a district, we adopt the Shannon-Wiener index from information theory.
+
+$$
+H = - \sum_{i} P_{i} \cdot \ln (P_{i}) \tag {22}
+$$
+
+Where, $H$ is the Shannon-Wiener index and $P_{i}$ is the proportion of individuals of species $i$ in the total number of individuals.
+
+We can learn from the formula that biodiversity increases with the Shannon-Wiener index. When individuals are evenly distributed among $S$ species, the index reaches its maximum, $\ln S$; when there is only one species, the index is zero. The decrease of species diversity is a long-term, gradual process, so a large-scale land project carried out over a long period will have negative effects on biodiversity. The cost of ecosystem services due to biodiversity loss is approximately the change of the Shannon-Wiener index over a long period of time multiplied by the unit service value. The unit service value is shown in the following table:
+
+Table 9. Unit Service Value of Biodiversity
+
+| Range of Shannon Wiener Index | Unit Service Value ($/hm2) |
| 0 < H < 1 | 428.57 |
| 1 ≤ H < 2 | 714.29 |
| 2 ≤ H < 3 | 1428.57 |
| 3 ≤ H < 4 | 2857.14 |
| 4 ≤ H < 5 | 4285.71 |
| 5 ≤ H < 6 | 5714.28 |
| 6 ≤ H | 7142.86 |
+
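As a quick sanity check, formula (22) can be computed directly; the species counts below are made-up illustrative data, not from the paper.

```python
import math

# Shannon-Wiener index of formula (22).
def shannon_wiener(counts):
    total = sum(counts)
    # P_i is the share of species i; empty classes contribute nothing.
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

H_even = shannon_wiener([10, 10])   # two equally abundant species -> ln 2
H_single = shannon_wiener([20])     # a single species -> 0
```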
+# 5 Long-Term Ecological Self-Recover Model
+
+All ecosystems have the ability to self-recover, which is reflected in the renewability of resources, the self-decomposition of waste and so on. Based on the feedback principle of the BP neural network, we regard the self-recovery process of the ecosystem as a negative feedback process, and thus construct a time evolution model of ecological self-recovery.
+
+# 5.1 General Assumption
+
+- The original ecological environment is not polluted.
+- The resistance stability of the environment takes effect rapidly, while the restorative stability takes effect slowly.
+
+# 5.2 Weight Update Formula
+
+Considering the practicability of the model, the weights of the new model cannot be randomly selected, and we also give up the AHP method for finding them. Instead, from our database we establish a training set of 20 different engineering data records, and use a neural network machine learning method to train an appropriate set of weights under the required error accuracy.
+
+Figure 1. Hierarchical Structure of Neural Network
+
+Note: G.S. Waste = General Industrial Solid Waste
+H. Waste = Hazardous Waste
+
+When we train on the known training set, the back propagation process of the neural network is driven by a single error signal. We keep adjusting the weights until the error is within the required margin. The advantage of determining the weights in this way is that the unit influence of the different parameters is reflected in the weights.
+
+The weight update formula from hidden layer to output layer is as follows:
+
+$$
+\frac {\partial E _ {t o t a l}}{\partial W _ {1 1}} = \frac {\partial E _ {t o t a l}}{\partial W _ {o u t}} \cdot \frac {\partial W _ {o u t}}{\partial W _ {n e}} \cdot \frac {\partial W _ {n e}}{\partial W _ {1 1}} \tag {23}
+$$
+
+$$
+W _ {1 1} ^ {\prime} = W _ {1 1} - \eta \cdot \frac {\partial E _ {t o t a l}}{\partial W _ {1 1}} \tag {24}
+$$
+
+Where, $E_{total}$ is the 2-norm of the total error between the output and the actual value of the output-layer neural units, $W_{out}$ is the output of the neuron, $W_{ne}$ is the linear summation of the output values of the hidden layer, $\frac{\partial W_{out}}{\partial W_{ne}}$ is the derivative of the activation function, and $\eta$ is the learning rate.
+
+The weight formula updating the first layer is as follows:
+
+$$
+\frac {\partial E_{total}}{\partial W_{11}} = \frac {\partial E_{total}}{\partial W_{ow}} \cdot \frac {\partial W_{ow}}{\partial W_{nw}} \cdot \frac {\partial W_{nw}}{\partial W_{11}} \tag {25}
+$$
+
+$$
+\frac {\partial E _ {t o t a l}}{\partial W _ {o w}} = \frac {\partial E _ {w}}{\partial W _ {o w}} \tag {26}
+$$
+
+$$
+W _ {1 1} ^ {\prime} = W _ {1 1} - \eta \cdot \frac {\partial E _ {t o t a l}}{\partial W _ {1 1}} \tag {27}
+$$
+
+Where, $E_{w}$ is the overall error of the hidden-layer (water pollution) nerve cell, $W_{ow}$ is the output of the nerve cell (water pollution), $W_{nw}$ is the linear summation of the output values of the input layer, $\frac{\partial W_{ow}}{\partial W_{nw}}$ is the derivative of the activation function, and $\eta$ is the learning rate.
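The chain-rule updates (23)-(27) can be sketched for a single sigmoid neuron. The input, target and learning rate below are arbitrary illustrative values, and the single-neuron setting is a simplification of the network in Figure 1.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One neuron: net input W_ne = w11 * x, output W_out = sigmoid(W_ne),
# squared error E_total = 0.5 * (W_out - target) ** 2.
x, target, eta = 1.0, 0.0, 0.5
w11 = 0.8

w_ne = w11 * x
w_out = sigmoid(w_ne)
# Chain rule of formula (23): dE/dw11 = dE/dW_out * dW_out/dW_ne * dW_ne/dw11
dE_dout = w_out - target
dout_dne = w_out * (1.0 - w_out)   # derivative of the activation function
dne_dw11 = x
grad = dE_dout * dout_dne * dne_dw11
w11_new = w11 - eta * grad         # gradient-descent update of formula (24)
```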
+
+# 5.3 Simulation Process
+
+Firstly, based on life cycle theory, we divide a project into five periods: the planning and design period, the raw material processing period, the construction period, the operation period and the end-of-life period. Here, we consider how the pollution discharge evolves over time in the raw material processing, construction and operation periods.
+
+We can substitute the weights trained by the BP neural network into our ecological self-recovery model. The pollutants discharged by the projects must be treated before they flow into the ecosystem. The pollutants flowing into the ecosystem are partly eliminated by the resistance of the ecosystem, while the others remain there. In the long term, the remainder can be purified by the self-recovery ability of the ecosystem. Furthermore, two points need to be noticed.
+
+When the level of pollution exceeds the critical value that the ecosystem can withstand (k/2), the pollutant will not be purified.
+
+The resistance stability of the ecosystem decreases as the degree of pollution increases, while the restorative stability increases as the degree of pollution increases.
+
+The pollutant of the first year can be calculated by the following formula:
+
+$$
+E_{1} = \left(\sum_{i = 1}^{3} x_{1 i} (1 - \beta_{i})\right) \cdot (1 - \lambda) \tag {28}
+$$
+
+Where, $x_{1i}$ is the discharge of pollutant type $i$ in year 1, $\beta_{i}$ is its treatment rate, and $\lambda$ is the fraction eliminated by the resistance of the ecosystem.
+
+The pollutant in year $m + 1$ can be calculated by the following formula:
+
+$$
+E_{m + 1} = \sum_{n = 1}^{m} E_{n} \cdot f (m + 1 - n, \gamma) + \left(\sum_{i = 1}^{3} x_{1 i} (1 - \beta_{i})\right) \cdot (1 - \lambda) \tag {29}
+$$
+
+Where, $\gamma = g(\sum E)$ and $\lambda = h(\sum E)$.
+
+As the system evolves, either the ecosystem collapses because the pollution level exceeds $\mathrm{k}/2$, or the system reaches a dynamic equilibrium. When it reaches the dynamic equilibrium, the state can be represented by the following formula:
+
+$$
+E_{m + 1} \cdot (1 - f (1, \gamma)) = \sum_{n = 1}^{m - 1} E_{n} \cdot f (m + 1 - n, \gamma) + \left(\sum_{i = 1}^{3} x_{1 i} \cdot (1 - \beta_{i})\right) \cdot (1 - \lambda) \tag {30}
+$$
+
+Where, $f(a,b)$ reflects how the ecosystem restoration changes with recovery capability and time, $g(E)$ reflects that the ecosystem restoration increases as the pollution level increases, and $h(E)$ reflects that the ecosystem resilience decreases as the pollution level increases.
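The year-by-year recursion of formulas (28)-(29) can be simulated once concrete forms are chosen for the unspecified pieces. The paper does not give $f$, $g$, $h$ or any constants, so the exponential decay form and every number below are assumptions for illustration only.

```python
import math

# Evolution of remaining pollution in the spirit of formulas (28)-(29).
def f(age, gamma):
    # Assumed form: fraction of pollution from `age` years ago already purified.
    return 1.0 - math.exp(-gamma * age)

def simulate(annual_emission, treat_rate, resist, gamma, k_half, years):
    # Constant yearly inflow after treatment (beta) and resistance (lambda).
    inflow = annual_emission * (1.0 - treat_rate) * (1.0 - resist)
    cohorts = []            # untreated inflow of each past year
    totals = []
    for _ in range(years):
        cohorts.append(inflow)
        # Remaining pollution: each cohort decays by f(age, gamma).
        total = sum(e * (1.0 - f(len(cohorts) - n, gamma))
                    for n, e in enumerate(cohorts, start=1))
        totals.append(total)
        if total > k_half:  # beyond k/2 the ecosystem collapses
            break
    return totals

levels = simulate(annual_emission=10.0, treat_rate=0.6,
                  resist=0.3, gamma=0.5, k_half=50.0, years=30)
```

With these placeholder values the pollution level rises toward a dynamic equilibrium well below $k/2$, matching the equilibrium case of formula (30).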
+
+# 5.4 Practical Application of the Model
+
+Using our model, we can calculate the environmental degradation cost in different years more accurately. We then apply the model to four practical examples; the details of the data can be seen in the appendix. The computed results are as follows:
+
+
+Figure 2. Environmental Degradation Cost of Bridge and Steel Plant
+
+
+Figure 3. Environmental Degradation Cost of House and Subway
+
+There is a turning point in both figures. The curve before the turning point corresponds to the raw material processing and construction periods; a large amount of pollutant is emitted in these periods, so the curve rises rapidly and is rather steep. The curve after the turning point corresponds to the operation period. There we find that, owing to the purification of the ecosystem, the house and bridge projects, which generate less pollution, self-recover over time. However, for the steel mill and subway projects, which generate more pollution, the quantity purified each year is less than the newly emitted pollution. Without man-made management, the pollution becomes more and more serious.
+
+# 6 Cost-Benefit Analysis of Land Use Project
+
+# 6.1 General Assumption
+
+- Net cash flow (NCF) is the same every year.
+- The depreciation method is linear depreciation.
+
+# 6.2 Cost Analysis of Land Use Project
+
+In order to simplify the model, we use factor cost to calculate the main cost of projects. The components of long-term costs of land use project are shown in the table below:
+
+
+Figure 4. Components of Main Long-Term Costs of Project
+
+# 6.2.1 Cost in Construction Time
+
+Considering the lifetime of the project, we introduce the concept of present worth, which is calculated by expected cash flow, present value factor and discount rate. By adopting the present value approach rather than simply adding the cash flow of each year together, we can get a more accurate cost figure. In this way, we fully account for the time cost of the input factor.
+
+$$
+C_{1} = \sum_{i = 1}^{t_{1}} \frac {w_{i} + k_{i} + T_{i} + C_{\text {con}, i}}{(1 + r)^{i}} \tag {31}
+$$
+
+Where, $C_1$ is the cost in construction time, $t_1$ is the construction period, $r$ is discount rate, $w_i, k_i, T_i, C_{con,i}$ are respectively the employee compensation, capital input, technology input and pollution treatment cost in year $i$ .
+
+# 6.2.2 Cost in Operation Time
+
+Likewise, we take the lifetime of the project into consideration. The depreciation method is linear depreciation. The cost in operation time can be calculated by the following formula:
+
+$$
+C_{2} = \sum_{j = 1}^{t_{2}} \frac {m_{j} + d_{j}}{(1 + r)^{j}} \tag {32}
+$$
+
+Where, $C_2$ is the cost in operation time, $t_2$ is the operation period, which is equal to the expected useful life of the asset, $r$ is the discount rate, and $m_j, d_j$ are respectively the maintenance charge and depreciation expense in year $j$. The depreciation expense can be calculated according to formula (33).
+
+$$
+d _ {j} = \frac {K}{t _ {2}} \tag {33}
+$$
+
+Where, $K$ is the value of the constructed assets that need to be depreciated, and $t_2$ is the expected useful life of the asset.
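Formulas (31)-(33) amount to a standard present-value calculation, sketched below. The cash-flow numbers are hypothetical and not taken from the paper's case studies; only the 3% discount rate matches Section 6.5.

```python
# Present-value cost of formulas (31)-(33); cash flows are made-up placeholders.
def construction_cost(flows, r):
    """Formula (31): discount each construction year's outlay to year 0."""
    return sum((w + k + T, ) and (w + k + T + c_con) / (1 + r) ** i
               for i, (w, k, T, c_con) in enumerate(flows, start=1))

def operation_cost(maintenance, assets_K, t2, r):
    """Formulas (32)-(33): maintenance plus straight-line depreciation."""
    d = assets_K / t2                      # annual depreciation, formula (33)
    return sum((m + d) / (1 + r) ** j
               for j, m in enumerate(maintenance, start=1))

r = 0.03                                   # discount rate used in Section 6.5
# One construction year: wages 10, capital 50, technology 5, treatment 2.
c1 = construction_cost([(10.0, 50.0, 5.0, 2.0)], r)
c2 = operation_cost([1.0] * 20, assets_K=60.0, t2=20, r=r)
total_cost = c1 + c2
```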
+
+# 6.3 Benefit Analysis of Land Use Project
+
+The long-term benefit of a land use project consists mainly of the future inward cash flow and the estimated net residual value of the project.
+
+Considering the time cost, we calculate the benefit by the following formula:
+
+$$
+I = \frac {I_{0}}{(1 + r)^{t_{2}}} + \sum_{j = 1}^{t_{2}} \frac {I_{j}}{(1 + r)^{j}} + I_{un} \tag {34}
+$$
+
+Where, I is the total benefit of the project, $I_0$ is expected net residual value, $r$ is discount rate, $t_2$ is the operation period, $I_j$ is inward cash flow in year $j$ , $I_{un}$ is the non-market benefit, which depends on the type of projects.
+
+# 6.4 Cost-Benefit Ratio
+
+The cost-benefit ratio reflects the profitability of the project and helps decision makers determine whether the project is worth conducting. It can be calculated by the following formula:
+
+$$
+r = \frac {C}{I} \tag {35}
+$$
+
+Where, $r$ is cost-benefit ratio, $C$ is the financial cost, $I$ is inward cash flow.
+
+In the traditional cost-benefit analysis, the decision maker may underestimate the total cost by ignoring the cost of ecosystem services.
+
+In the developed cost-benefit analysis, we take the cost of ecosystem services and the social benefit into consideration. The developed formula can be represented as follows:
+
+$$
+r = \frac {C + C _ {e}}{I + I _ {s}} \tag {36}
+$$
+
+Where, $r$ is cost-benefit ratio, $C$ is the financial cost, $C_e$ is cost of ecosystem services, $I$ is inward cash flow and $I_s$ is social benefit.
+
+# 6.5 Case Analysis
+
+In this part, we conduct the cost-benefit analysis of three projects of different sizes using the self-recovery model. Details of the cost of ecosystem services and pollution are listed in the appendix. Moreover, we assume that the discount rate is $3\%$ .
+
+# 6.5.1 House of One Hundred Square Meters
+
+The construction time of residential building is around one year, while the lifetime of it is around twenty years. The expected net residual value rate is about $5\%$ . The cost and benefit can be seen in the following table:
+
+Table 10. Cost Analysis of House
+
+| House | Cost/\$ |
| Employee Compensation | 2,914.29 |
| Capital Input | 25,714.29 |
| Technology Input | 7,142.86 |
| Maintenance Charge | 2,571.43 |
| Cost of Derivative Effects of Pollution | 41,214.72 |
| Cost of Environmental Degradation | 699.00 |
| TOTAL | 80,256.59 |
+
+Table 11. Benefit Analysis of House
+
| House | Annual Payment/$ | Present Value of Annuity/$ |
| Rent Income | 5,142.84 | 76,512.60 |
| Net Residual | 1,917.74 | 1,061.85 |
| Social | 0 | 0 |
| TOTAL | | 77,576.45 |
+
+The cost-benefit ratio calculated in the traditional way is 0.49, which means the project is worth conducting. However, the cost-benefit ratio calculated considering the ecosystem services is 1.03, which means the project is not worth conducting.
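Both ratios for the house can be reproduced directly from the figures in Tables 10 and 11:

```python
# Traditional vs. developed cost-benefit ratio for the house (Tables 10-11).
financial_cost = 2914.29 + 25714.29 + 7142.86 + 2571.43  # market cost items
ecosystem_cost = 41214.72 + 699.00   # derivative effects + degradation
benefit = 77576.45                   # total benefit from Table 11

traditional = financial_cost / benefit                     # formula (35)
developed = (financial_cost + ecosystem_cost) / benefit    # formula (36), I_s = 0
```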
+
+# 6.5.2 Subway of Forty Kilometers
+
+The construction time of subway is around five years, while the lifetime of it is around sixty years. The rate of depreciation of subway is $1\%$ . The cost and benefit can be seen in the following table:
+
+Table 12. Cost Analysis of Subway
+
+| Subway | Annual Payment/$ | Present Value of Annuity/$ |
| Employee Compensation | 137,142,857.14 | 626,742,857.14 |
| Capital Input | 274,285,714.29 | 1,253,485,714.29 |
| Technology Input | 45,714,285.71 | 208,914,285.71 |
| Maintenance Charge | 1,142,857.14 | 5,222,857.13 |
| Cost of Derivative Effects of Pollution | 9,300,938.57 | 42,595,508.37 |
| Cost of Environmental Degradation | 125.71 | 575.73 |
| TOTAL | | 2,136,961,798.37 |
+
+Table 13. Benefit Analysis of Subway
+
+| Subway | Annual Payment/$ | Present Value of Annuity/$ |
| Service Charge Income | 81,428,571.43 | 2,253,581,612.85 |
| Net Residual Value | 835,657,142.86 | 141,838,669.05 |
| Social Benefit | 4,428,571.43 | 122,563,210.56 |
| TOTAL | | 2,517,983,492.46 |
+
+The cost-benefit ratio calculated in the traditional way is 0.83, which means the project is worth conducting. The cost-benefit ratio calculated considering the ecosystem services is 0.84, so the project is still worth conducting.
+
+# 6.5.3 Large-Scale Steel Mill of Twenty-Three Square Kilometers
+
+The construction time of a steel mill is around five years, while the lifetime of it is around twenty years. The rate of depreciation of steel mill is $1\%$ . The cost and benefit can be seen in the following table:
+
+Table 14. Cost Analysis of Steel Mill
+
+| Steel Mill | Annual Payment/$ | Present Value of Annuity/$ |
| Employee Compensation | 141,181,111.74 | 646,567,137.44 |
| Capital Input | 988,267,782.17 | 4,525,969,962.01 |
| Technology Input | 282,362,223.48 | 1,293,134,274.87 |
| Maintenance Charge | 27,697,106.54 | 126,844,438.84 |
| Cost of Derivative Effects of Pollution | 38,696,421.71 | 177,218,002.52 |
| Cost of Environmental Degradation | 719.67 | 3,295.89 |
| TOTAL | | 6,769,737,111.57 |
+
+Table 15. Benefit Analysis of Steel Mill
+
+| Steel Mill | Annual Payment/$ | Present Value of Annuity/$ |
| Revenue from Operations | 1,858,571,428.60 | 51,437,011,900.24 |
| Net Residual Value | 0 | 0 |
| Social Benefit | 0 | 0 |
| TOTAL | | 51,437,011,900.24 |
+
+The cost-benefit ratio calculated in the traditional way is 0.13, which means the project is worth conducting. The cost-benefit ratio calculated considering the ecosystem services is also 0.13, so the project is still worth conducting.
+
+# 7 Sensitivity Analysis
+
+In order to test the robustness of the model, we added $1\%$ noise value to the above index parameters of our model and observed the change rate of output cost. The results are shown in the following table.
+
+Table 16. Error of the Ecological Cost Corresponding to the Different Input Error
+
+| Parameter | Housing | Bridge | Subway | Steel Plant |
| H(1%) | 0.025% | 0.003% | 0.02% | 0.56% |
| SOx(1%) | 0.15% | 0.14% | 0.05% | 0.13% |
| NOx(1%) | 0.14% | 0.21% | 0.12% | 0.20% |
| Cyanide (1%) | 0.11% | 0.12% | 0.19% | 0.10% |
| General Industrial Waste (1%) | 0.14% | 0.17% | 0.14% | 0.13% |
| Hazardous waste (1%) | 0.09% | 0.12% | 0.10% | 0.09% |
+
+When we apply the model in practice, the margin $\mathrm{k}/2$ is much higher than the damage caused by the project, so the ecosystem can always reach a dynamic balance. In order to verify the accuracy of the model, we replace the $\mathrm{k}/2$ value with a lower one, which can be reached more easily. We can see from the graph that once the degree of pollution surpasses $\mathrm{k}/2$, it stops increasing, meaning that the ecosystem has collapsed under the weight of the pollution.
+
+The results can be seen by the following figure:
+
+
+Figure 5. Results with Different $K$
+
+# 8 Strength and Weakness
+
+# 8.1 Strength
+
+- The model comprehensively considers multiple factors such as ecosystem services and environmental degradation.
+- We estimate the value of ecosystem services by opportunity cost and shadow cost, which reflects their real value objectively.
+- We originally use the negative feedback form of an artificial neural network to construct our ecosystem self-recovery model.
+- Based on the model we constructed, we further discuss Green GDP and the expression of the Pigou tax for the externality problem.
+
+# 8.2 Weakness
+
+- Limited by our knowledge, there exist some factors that we fail to consider.
+- We do not consider the factor of inflation, so there may be errors in the project costs.
+- Due to the limitation of space and data, we cannot do the cost analysis of land engineering in different zones.
+- The cost-benefit analysis of the steel mill may not be representative, since we selected a steel mill with a high profit rate as the example.
+
+# 9 Further Discussion
+
+# 9.1 Pigou Tax for Externalities of Natural Resources
+
+Externality refers to the non-marketable influence of the economic activities of economic subjects (including manufacturers and individuals) on others or society. Environmental pollution, as discussed above, is a typical negative externality, which does harm to the ecosystem and society. Both policy makers and business decision makers should consider the cost of ecosystem services while making decisions. We now discuss the establishment of a Pigou tax levied on polluters.
+
+The target of the Pigou tax is to close the gap between the private and social costs of polluters' production by means of taxation. From the model built above, the Pigou tax can be formulated as follows:
+
+$$
+\mathrm {T} _ {p} = \mathrm {C} _ {d e g} + \mathrm {C} _ {d} \tag {37}
+$$
+
+Where, $\mathrm{T}_p$ is Pigou tax for a specific land use project, $\mathrm{C}_{deg}$ is the cost of environmental degradation, $\mathrm{C}_d$ is the cost of derivative effects of pollution.
+
+# 9.2 Inspiration to the Expression of Green GDP
+
+# 9.2.1 Traditional Accounting of Green GDP
+
+Green GDP refers to Gross domestic product (GDP) accounting considering the ecosystem services. Based on the traditional GDP accounting system, the value of Green GDP is the value of GDP after deducting the cost of natural resources and cost of environmental pollution. It can be represented by the following formula:
+
+$$
+\mathrm {E D P} = \mathrm {N D P} - \mathrm {C} _ {r e s} - \mathrm {C} _ {p o l} \tag {38}
+$$
+
+Where, EDP is Green GDP, NDP is net domestic product, $C_{res}$ is the non-market consumption of natural resources, and $C_{pol}$ is the cost of environmental pollution.
+
+# 9.2.2 Innovative Expression of Green GDP
+
+In the previous section, we put forward the concept of natural resources consumption. We use it to represent the initial production cost, which can reflect the total natural resource input in the entity district.
+
+# 9.2.2.1 General Assumption
+
+- All of the capital input comes from nature.
+- All of the labor input comes from the consumption of natural resources.
+- All of the labor is put into production.
+- No natural resources escape from the ecosystem.
+
+# 9.2.2.2 Process of Calculability
+
+Firstly, we replace the capital input and labor input with the previous natural resources consumption. GDP can be calculated with the total natural resources consumption.
+
+$$
+\mathrm {GDP}_{n} = \sigma \left(\mathrm {C}_{res, n} + \sum_{i = 0}^{n} \delta \cdot \mathrm {C}_{inp, i}\right) \tag {39}
+$$
+
+Where, $\mathrm{GDP}_n$ is GDP in year $n$ , $C_{res,n}$ is the natural resourced consumption in year $n$ , $\sigma$ is production efficiency, $\delta$ is resource depreciation rate, and $C_{inp,i}$ is the natural resources input generated in year $i$ .
+
+On this basis, the value of Green GDP in year $n$ is the previous result minus the pollution treatment costs, the derivative effects of pollution and the environmental degradation losses generated in all processes. The following formula represents the whole process:
+
+$$
+\mathrm {E D P} _ {n} = \mathrm {G D P} _ {n} - \mathrm {C} _ {p o l, n} - \mathrm {C} _ {d e g, n} \tag {40}
+$$
+
+Where, $\mathrm{GDP}_n$ is GDP in year $n$ , $\mathsf{C}_{pol,n}$ is the cost of environmental pollution in year $n$ , $\mathsf{C}_{deg,n}$ is the cost of environmental degradation in year $n$ .
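Formulas (39)-(40) can be sketched as follows; $\sigma$, $\delta$ and all cost figures below are hypothetical illustrative values, not estimates from the paper.

```python
# Sketch of formulas (39)-(40); every number is a made-up placeholder.
def gdp(year_n, c_res, c_inp, sigma, delta):
    """Formula (39): GDP_n from natural-resource consumption and past inputs."""
    return sigma * (c_res[year_n]
                    + sum(delta * c_inp[i] for i in range(year_n + 1)))

def green_gdp(gdp_n, c_pol_n, c_deg_n):
    """Formula (40): EDP_n = GDP_n - C_pol,n - C_deg,n."""
    return gdp_n - c_pol_n - c_deg_n

c_res = [100.0, 110.0, 120.0]   # natural resource consumption per year
c_inp = [30.0, 30.0, 30.0]      # natural resource input per year
gdp_2 = gdp(2, c_res, c_inp, sigma=1.5, delta=0.8)
edp_2 = green_gdp(gdp_2, c_pol_n=20.0, c_deg_n=5.0)
```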
+
+# Appendix
+
+# I. Reference
+
+[1] Chee, Y., 2004. An ecological perspective on the valuation of ecosystem services. Biological Conservation 120, 549-565.
+[2] Costanza, R., d'Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O'Neill, R.V., Paruelo, J., Raskin, R.G., Sutton, P., van den Belt, M., 1997. The value of the world's ecosystem services and natural capital. Nature 387, 253-260.
+[3] Richmond, A., Kaufmann R., Myneni, R., 2007, Valuing ecosystem services: A shadow price for net primary production. Ecological Economics 64, 454-462.
+[4] Xuning Qiao, Linfeng Wang, Haipeng Niu, Yalin Yang, Yangyang Gu, 2016. Ecological and economic coordination analysis of huahe river basin in henan province based on NPP data. Economic Geography 36, 173-183.
+[5] Ke Liang, 2017. Estimation of land vegetation NPP in shaanxi province based on CASA model.
+[6] Liu MuYu, Chen FangFang, 2010. Research on environmental impact cost analysis model of bridge life cycle. Chinese Journal of Civil Engineering 43, 373-378.
+[7] Yu Fang, Wang JinNan, Cao Dong. 2009. Technical guide to environmental economic accounting in China. China science press.
+[8] Han Xiao, Zhiyun Ouyang, Jingzhu Zhao, Xiaoke Wang, 2000. Preliminary study on forest ecosystem service function and ecological economic value evaluation.
+[9] Weidong Han, Xiumei Gao, Changyi Lu, Peng Lin, 2000. Ecological value assessment of mangrove ecosystem in China. Ecological Science 19(1):40-46.
+[10] Zhuolin Li, 2014. Study on ecological value accounting of jishan jujube forest in Shanxi Province.
+
+# II. Cost of Ecological Service Data Sheet
+
+| Cost of Ecosystem Services | House | Subway | Steel Mill |
| Oxygen | 159.57 | 114.86 | 5,874.30 |
| Carbon Dioxide | 94.86 | 68.29 | 3,485.70 |
| Water Purified | 225.29 | 225.14 | 4,142.90 |
| Soil Fertilizer | 80.14 | 124.86 | 2,234.30 |
| Dust Clean | 9.14 | 7.14 | 674.30 |
| Biodiversity | 130.00 | 88.29 | 8,777.10 |
| Cost of Pollution | 699.00 | 628.57 | 25,188.60 |
| COD | 171.43 | 4,646,571.43 | 57,497,142.90 |
| Cyanide | 2,428.57 | 10,174,500.00 | 157,794,000.00 |
| Harmful Metals | 371.43 | 800,778.57 | 22,557,142.90 |
| SO | 1,200.00 | 2,925,000.00 | 216,357,142.90 |
| CO | 44.29 | 476,000.00 | 9,646,285.70 |
| NO | 242.86 | 6,319,714.29 | 328,971,428.60 |
| Dust | 3,142.86 | 7,162,857.14 | 153,771,428.60 |
| Smoke | 1,057.14 | 838,000.00 | 38,200,000.00 |
| General Industrial Solid Waste | 28,571.43 | 7,685,000.00 | 222,600,000.00 |
| Hazardous Waste | 3,285.71 | 5,475,642.86 | 146,955,000.00 |
| TOTAL | 41,913.72 | 46,505,321.44 | 1,354,399,948.80 |
+# 2019 Interdisciplinary Contest in Modeling (ICM) Summary Sheet
+
+# Assessment of ecological services
+
+# Summary
+
+Traditionally, economic theories have not considered the impact of ecosystem services. In order to implement the concept of sustainable development and change the status quo of ecosystem destruction caused by human abuse of land, an ecological service evaluation model needs to be established to determine the real and comprehensive valuation of projects.
+
+Firstly, we give a set of indicators to analyze the true economic costs of land-use projects when ecosystem services are considered, including the natural resource depletion (NRD), preventive cost (PC) and repair cost (RC). By these indicators, we analyze the definition of environmental costs and the total cost of environmental pollution and ecological damage.
+
+Secondly, in order to conduct the cost-benefit analysis of land use development projects at different scales, we give two instances: a rural community-based project and the South-to-North Water Transfer central-route project in China. For the community-based project, we extract the main data collected in the land use development area, use a transfer network based on a GIS map, and dynamically analyze the impact of land change on the environment. For the large national project, we select the main pollutants by principal component analysis (PCA), and combine them with the ecological environment assessment model and the cost of environmental degradation.
+
+Thirdly, we use sensitivity analysis to determine the influencing factors of the model and the dependence of the model on the relevant parameters. By changing the values of the parameters, we analyze the resulting environmental costs and find that our model and methods are stable. The results show that the main factor influencing the accuracy is the depletion of production materials and that the model is effective. Moreover, our model makes proper improvements: the introduction of human science factors is negatively correlated with environmental costs, resulting in new calculation expressions.
+
+Finally, the advantages and disadvantages of the model are described and summarized, for further expanding and improving the proposed model.
+
+Keywords: Ecosystem services; Land use projects; Principal component analysis; Environmental cost.
+
+# Contents
+
+# 1 Introduction
+
+1.1 Background
+1.2 Restatement of the problem
+1.3 Overview of our work
+
+# 2 General Assumptions and Justifications
+
+# 3 Variable Description
+
+# 4 Ecological Service Valuation Model
+
+4.1 How to assess the costs?
+4.2 Natural resource depletion
+4.3 Repair cost
+4.4 Preventive cost
+4.5 Evaluation of environmental degradation cost
+
+# 5 Performing a Cost Benefit Analysis
+
+5.1 For small community-based project
+5.2 For "South-to-North Water Transfer" projection
+5.3 Project benefit analysis
+
+# 6 Sensitivity analysis
+
+6.1 Basic issue
+6.2 Sensitivity analysis result
+
+# 7 Implications on Planners and Managers
+
+# 8 Model Changes Over Time
+
+# 9 Strengths and Weakness
+
+9.1 Strengths
+9.2 Weakness
+
+# 10 Conclusion
+
+# References
+
+# 1 Introduction
+
+# 1.1 Background
+
+In recent decades, in order to implement the concept of sustainable development and change the status quo of human ecosystem destruction caused by human abuse of land, the understanding of the ecosystem has been transformed from an inexhaustible unpaid theory to a service theory of paid supply [1]. However, ecosystem services are often in the category of open access and purely public services, which means that they often have no producer property rights, vague rights structures and prohibitive transaction costs. It is difficult for managers to measure the value of ecological services in the form of monetization.
+
+Considering these issues, Richmond et al. adopted net primary production as a proxy for ecosystem services [2], and Norgaard elaborated the ecosystem service value assessment method from an ecological perspective; the ecosystem service value assessment system was established in [3]. Therefore, ecosystem services can be measured through use value and non-use value, and can be subdivided into the value of natural resources, soil, surface water and the ecosystem.
+
+# 1.2 Restatement of the problem
+
+Traditionally, most land-use projects have not considered the impact and changes of ecosystem services. The economic costs to mitigate negative results of land use changes: polluted rivers, poor air quality, hazardous waste sites, poorly treated waste water, climate changes, etc., are often not included in the plan. To understand the true economic costs of land-use projects, our team was hired to create an eco-service evaluation model when considering ecosystem services. We need to answer the following two questions:
+
+(1) Is it possible to put a value on the environmental cost of land use development projects?
+(2) How would environmental degradation be accounted for in these project costs by the mathematic model?
+
+# 1.3 Overview of our work
+
+1. First, we make an analysis of the real economic cost of the land use project, namely, the environmental cost, when considering ecosystem services. Then, we consider this issue by three indicators, called natural resource depletion (NRD), preventive cost (PC) and repair cost (RC). Furthermore, we use the corresponding secondary and tertiary indicators.
+2. To show the efficiency of the proposed model, we explore the small-scale community project and the South-to-North Water Transfer Middle Line Project (a large-scale national project), and make the cost-benefit analysis of the project separately.
+
+3. Combining local sensitivity analysis with global sensitivity analysis, the validity of the model is verified.
+4. In order to help project planners for determining the location of the minimum environmental cost, we select the land use development project area and use numerical simulation.
+5. Finally, the extra factors related the times are added to the proposed model for illustrating the change over time.
+
+# 2 General Assumptions and Justifications
+
+To simplify the considered problems, we make the following basic assumptions, which are properly justified.
+
+- The collected data are authentic and reliable, and there are no critical data errors.
+- Sudden changes in the ecological environment are ignored in the calculation of environmental costs.
+- Annual resource rents remain constant at the comparable price levels.
+- Each sub-indicator only plays a role in the range of indicators set, ignoring the effect on other indicators.
+- Explosive changes are ignored in the predictions of more than a decade.
+
+# 3 Variable Description
+
+Table 1: Symbol Table of Variables
+
+| Symbol | Description |
| NRD | Natural resource depletion |
| RC | Repair cost |
| PC | Protective cost |
| CNRDij | The NRD for the j-th second class indicator of the i-th first class indicator |
| CRCij | The RC for the j-th second class indicator of the i-th first class indicator |
| CPCij | The PC for the j-th second class indicator of the i-th first class indicator |
| ESA | Metrics for ecological service assessment |
| ei | The i-th index entropy |
| pij | The weight of the j-th index and the i-th evaluation index |
| vj | The comprehensive evaluation value under the j-th index |
+
+# 4 Ecological Service Valuation Model
+
+According to the information provided in the literature [4-6], there are three types of main costs in land development and utilization when ecological services are considered: natural resource depletion, pollution remediation and protective expenditure. In this section, we follow their ideas and set up a unique mathematical model to evaluate ecological services. Now, we explain the three indicators mentioned above.
+
- Natural resource depletion measures the value of material environmental resources consumed by humans.
- Reparative expenditure refers to environmental protection expenditure: the costs incurred to prevent environmental damage or to repair damaged environments.
+
+- Protective expenditure (a) refers to the cost of education to eliminate the negative impact of land development and utilization, and (b) refers to the expenses incurred in measuring pollution, collecting data, and implementing relevant policies in order to manage environmental quality.
+
+# 4.1 How to assess the costs?
+
+Assessing the costs is of great importance for setting up the model and understanding the ecosystem services. Here, we give some principles for determining indicators and show their ranges, according to the theoretical and practical properties.
+
+# 4.1.1 Principles for determining indicators
+
- The principle of market price. Environmental assets with direct market prices are preferred for inclusion in the accounting range. Important assets without direct market prices but with indirect market prices may also be selected; assets with no price basis at all are not selected.
- The principle of comparability. The calculation results should support both horizontal and vertical comparison. To ensure comparability, the scope and method of environmental value accounting should be as consistent as possible across regions.
- The principle of implementability. Environmental accounting must be implementable. In addition to price factors, physical quantity data are also essential. Environmental assets that lack physical quantity data can only be temporarily excluded from the accounting.
+
+# 4.1.2 Determine the accounting range of the indicator
+
+According to the literature [5], ecological services consist of five parts. Life-fulfilling services and preservation of options are not included in the model, due to the lack of data and to their uncertainty.
+
+Following the three principles above, we build natural resource depletion, repair cost, and protective expenditure on the physical quantity data available in the China Environmental Statistics Yearbook and on China's specific physical quantity classification. Environmental costs are thus divided into three parts, and we use eight secondary influencing factors to explain the indicators in further detail.
+
+
+Figure 1: All the indicators for exploring the costs.
+
+# 4.2 Natural resource depletion
+
+# 4.2.1 Potential indicators for natural resource depletion
+
+Within the environmental protection theme [6], natural resource depletion covers many topics, such as forest resources, crop resources, aquatic resources, animal resources, metal minerals, and non-metallic minerals. We divide these topics into two categories, production materials and energy consumption, and show this relationship in Figure 2.
+
+
+Figure 2: Two categories of Natural Resource Depletion
+
- The depletion of production materials captures the value of commodity production in ecosystem services, mainly forest resources, crop resources, aquatic resources, and animal resources, which are products of ecosystem cycles. When a new land development project is introduced, the production losses caused by damage to the original ecosystem equal the sum of these four resource losses.
+
+- Forest resource. According to the literature [7], the value of a forest resource is the discounted value of the mature forest price after deducting costs during the forest growth period. Under the standing value method, directly applying the net present value is complicated, so it is simplified to
+
+$$
+C _ {N R D _ {1 1}} = A \times p \times Q, \tag {4.1}
+$$
+
+where $A$ is the forest area (unit: hectare), $p$ is the average standing price per cubic meter (the log price minus the cutting cost), and $Q$ is the forest stock level (unit: cubic meters per hectare).
+
+- Crop resource. For crop resource, market prices can be obtained and valued by the market price method. The calculation formula is as follows:
+
+$$
+C _ {N R D _ {1 2}} = \sum_ {i = 1} ^ {n} P _ {i} Q _ {i}, \tag {4.2}
+$$
+
+where $P_{i}$ represents the price, $Q_{i}$ represents the crop yield, and $i$ represents the type of crop.
+
+- Water resource. Water is in a state of constant circulation. In the physical quantity data, surface water and groundwater may be double-counted, and water differs in quality; in theory, the price of water should also differ by quality. We use the possession method to estimate water resources. The formula for the value of surface water resources reads:
+
+$$
+C _ {s} = \sum_ {t = 1} ^ {n _ {1}} \frac {R _ {s}}{(1 + r) ^ {t}} = \sum_ {t = 1} ^ {n _ {1}} \frac {P _ {s} Q _ {s}}{(1 + r) ^ {t}}, \tag {4.3}
+$$
+
+where $R_{s}$ is the surface water resource rent; $P_{s}$ is the surface water resource fee, which represents the unit resource rent; $Q_{s}$ is the total surface water; $r$ is the discount rate; $n_{1}$ is the current total surface water useful life.
+
+Similarly, we can get the formula for calculating the value of groundwater resources as follows:
+
+$$
+C _ {g} = \sum_ {t = 1} ^ {n _ {2}} \frac {R _ {g}}{(1 + r) ^ {t}} = \sum_ {t = 1} ^ {n _ {2}} \frac {P _ {g} \cdot Q _ {g}}{(1 + r) ^ {t}}. \tag {4.4}
+$$
+
+By summing the value of surface water and groundwater (where $R_g$, $P_g$, $Q_g$, and $n_2$ are the groundwater analogues of the surface water quantities above), we obtain the total value of water resources, expressed as $C_{NRD_{13}} = C_s + C_g$.
+
+- Aquatic resource. Fish cultured by aquaculture institutions are production assets that are privately owned and can be traded on the market. In most cases the market price of fish is easy to obtain, and the market price method can be used to estimate the fish value:
+
+$$
+C _ {N R D _ {1 4}} = \sum_ {i = 1} ^ {n} p _ {i} Q _ {i} + \sum_ {j = 1} ^ {m} p _ {j} Q _ {j}, \tag {4.5}
+$$
+
+where $p$ represents the price, $Q$ represents the quantity, $i$ indexes the types of cultured aquatic products, and $j$ indexes the types of fish caught.
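The production-material items above reduce to two computations: a market-price sum (as in formulas (4.1)-(4.2) and the aquatic-resource formula) and a discounted constant rent (as in formulas (4.3)-(4.4)). A minimal sketch, with hypothetical prices and quantities chosen purely for illustration:

```python
def market_value(prices, quantities):
    """Market price method, as in (4.2): sum of price x quantity per type."""
    return sum(p * q for p, q in zip(prices, quantities))

def discounted_rent(unit_fee, quantity, rate, years):
    """Possession method, as in (4.3)/(4.4): present value of a constant
    annual rent P x Q over the useful life, discounted at rate r."""
    rent = unit_fee * quantity
    return sum(rent / (1.0 + rate) ** t for t in range(1, years + 1))

# Hypothetical inputs, for illustration only.
c_crop = market_value([2.4, 1.9], [5000.0, 8000.0])   # crop value C_NRD12
c_water = (discounted_rent(0.5, 1.0e6, 0.05, 20)      # surface water C_s
           + discounted_rent(0.8, 4.0e5, 0.05, 20))   # groundwater  C_g
```

The discounted sum agrees with the textbook annuity closed form, which is a convenient sanity check on the implementation.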
+
- Energy consumption. Energy is a primary commodity of ecosystem services. By accounting for metal minerals, non-metallic minerals, and biomass energy, we can assess the value of the energy depletion portion of the ecosystem after land development. The following formula is obtained using the price-adjusted net present value method.
+
+$$
+C _ {N R D _ {2 i}} = \sum_ {t = 1} ^ {n} R \left(\frac {1 + i}{1 + r}\right) ^ {t} = R \times \frac {1 + i}{r - i} \times \left(1 - \left(\frac {1 + i}{1 + r}\right) ^ {n}\right), \tag {4.6}
+$$
+
+where $R$ is the resource rent, $i$ is the price growth rate of the resource, $r$ is the discount rate, and $n$ is the resource life.
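The closed form in (4.6) is just the geometric-series sum of $R\left(\frac{1+i}{1+r}\right)^t$; a quick numeric check (with hypothetical parameters) confirms the two forms agree:

```python
def npv_growing_rent(R, i, r, n):
    """Direct sum in (4.6): rent R grows at rate i, discounted at r."""
    x = (1.0 + i) / (1.0 + r)
    return sum(R * x ** t for t in range(1, n + 1))

def npv_growing_rent_closed(R, i, r, n):
    """Closed form in (4.6); requires r != i."""
    x = (1.0 + i) / (1.0 + r)
    return R * (1.0 + i) / (r - i) * (1.0 - x ** n)

# Hypothetical resource: rent 100, price growth 2%, discount 5%, life 30 years.
s = npv_growing_rent(100.0, 0.02, 0.05, 30)
c = npv_growing_rent_closed(100.0, 0.02, 0.05, 30)
```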
+
+# 4.2.2 The NRD accounting system
+
+Based on the analysis above, we develop a measure, Natural Resource Depletion (NRD), to assess the cost of ecological services at the level of natural losses. It comprises two primary indicators and seven secondary indicators, shown in Table 2.
+
+Table 2: The indicators used in the natural resource depletion.
+
+ | Indicator | Notation | Secondary indicator | Notation | Target |
| Production material consumption | NRD1 | Forest resources | NRD11 | ↓ |
| | | Crop resources | NRD12 | ↓ |
| | | Water resources | NRD13 | ↓ |
| | | Aquatic resources | NRD14 | ↓ |
| Energy consumption | NRD2 | Metal minerals | NRD21 | ↓ |
| | | Non-metallic minerals | NRD22 | ↓ |
| | | Biomass energy | NRD23 | ↓ |
+
+Explanation: the smaller the seven indicators listed above, the greater the natural resource consumption and the corresponding environmental cost. Since each factor carries a different cost proportion, we add correction coefficients to reduce calculation error and obtain the NRD accounting formula $C_{NRD} = \alpha_1 \cdot C_{NRD_1} + \alpha_2 \cdot C_{NRD_2}$.
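The same weighted-sum accounting form recurs for RC and PC below; a minimal sketch, with the component costs and correction coefficients being hypothetical placeholders:

```python
def weighted_cost(components, alphas):
    """Weighted accounting: C = sum of alpha_k * C_k, where the correction
    coefficients alpha_k reflect each factor's share of the cost."""
    return sum(a * c for a, c in zip(alphas, components))

# Hypothetical component costs (currency units) and correction coefficients.
c_nrd = weighted_cost([2.0e6, 1.5e6], [1.1, 0.9])
```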
+
+# 4.3 Repair cost
+
+# 4.3.1 Potential indicators for repair cost
+
+In this section, we analyze the evaluation indicators of repair expenditure using multi-criteria analysis [5], with the Amazon agricultural development case as a reference for evaluating sustainability options. The repair expenditure consists of three components: (a) investment in waste detoxification and decomposition; (b) investment in air and water purification; and (c) investment in afforestation. We now describe these components.
+
- Contamination composition. The land development process produces various kinds of pollution, such as air, water, and solid pollution, which degrade environmental quality. Many effective measures are needed to restore it. Following reference [7], we use the more convincing virtual governance cost to estimate the value of environmental degradation.
+
+1. When environmental quality is degraded, the degraded value becomes a component of the product value. This value is non-productive and should be deducted.
+2. The actual treatment cost paid improves the environment, but after deducting it, part of the environment remains unrecovered; that value must also be deducted.
+3. The cost of fully recovering the unrecovered part is the virtual governance cost, so the virtual governance cost can be used to represent the entire value of environmental degradation.
+
+Based on the analysis above, we use the governance cost coefficient method to assess the cost of the waste component of repair expenditure. The idea is to introduce the concept of treatment facility benefit, calculate the treatment cost coefficient of each pollutant, and apportion the treatment costs among pollutants, so that the unit treatment cost of each pollutant can be estimated. The calculation steps are as follows.
+
+Step 1. Calculate the treatment benefit of the $i$-th pollutant in a treatment facility:
+
+$$
+\eta_ {i} = \frac {I _ {i} - E _ {i}}{S _ {i}} \cdot \frac {E _ {i}}{I _ {i}}, \tag {4.7}
+$$
+
+where $\eta_{i}$ is the treatment benefit of the $i$-th contaminant, $E_{i}$ is its export concentration, $I_{i}$ is its import concentration, $\frac{E_i}{I_i}$ indicates its treatment difficulty, and $S_{i}$ is its maximum allowable emission concentration.
+
+Step 2. Calculate the cost-sharing coefficient of the $i$-th pollutant, used to apportion the total governance cost:
+
+$$
+\gamma_ {i} = \frac {\eta_ {i}}{\sum_ {i = 1} ^ {n} \eta_ {i}}. \tag {4.8}
+$$
+
+Here $\gamma_{i}$ is the share of the total governance cost apportioned to the $i$-th pollutant.
+
+Step 3. Calculate the unit governance cost of the $i$-th pollutant:
+
+$$
+\overline {{C _ {i}}} = \frac {C \cdot \gamma_ {i}}{M _ {i}}. \tag {4.9}
+$$
+
+Among them, $M_{i}$ is the total amount of the $i$-th pollutant, $i$ is the pollutant category, and $C$ is the total cost of treatment.
+
+Step 4. Calculate the total cost:
+
+$$
+C _ {p} = C + \overline {{C}} = C + \sum_ {i = 1} ^ {n} \overline {{C}} _ {i}, \tag {4.10}
+$$
+
+where $C$ is the total cost of pollution control and $\overline{C}$ is the virtual pollution control cost.
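The four steps above can be sketched directly from formulas (4.7)-(4.9); the pollutant concentrations, amounts, and total cost below are hypothetical values for illustration only:

```python
def treatment_benefit(I, E, S):
    """Step 1, formula (4.7): eta_i = ((I_i - E_i) / S_i) * (E_i / I_i)."""
    return (I - E) / S * (E / I)

def share_coefficients(etas):
    """Step 2, formula (4.8): gamma_i = eta_i / sum(eta)."""
    total = sum(etas)
    return [e / total for e in etas]

def unit_costs(total_cost, gammas, amounts):
    """Step 3, formula (4.9): unit cost = C * gamma_i / M_i."""
    return [total_cost * g / m for g, m in zip(gammas, amounts)]

# Hypothetical pollutants: (import conc., export conc., allowed conc., amount).
data = [(80.0, 10.0, 20.0, 500.0), (60.0, 30.0, 15.0, 300.0)]
etas = [treatment_benefit(I, E, S) for I, E, S, _ in data]
gammas = share_coefficients(etas)
ubar = unit_costs(1.0e5, gammas, [d[3] for d in data])
```

By construction the sharing coefficients sum to one, so the full governance cost is apportioned across pollutants.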
+
+- Afforestation. In order to repair existing pollution, reduce the probability of bad weather, and improve biodiversity, afforestation is a long-term strategy. The cost is equal to the input labor plus the value of the trees.
+
+# 4.3.2 The RC accounting system
+
+Based on the analysis above, we develop a measure called Repair Cost (RC) to assess the cost of ecological services at the restoration level. It comprises three primary indicators and five secondary indicators, shown in Table 3.
+
+Table 3: The indicators used in the Repair cost
+
+ | Indicator | Notation | Secondary indicator | Notation | Target |
| Waste detoxification and decomposition investment | \( RC_1 \) | Solid waste pollution | \( RC_{11} \) | ↑ |
| Air and water purification investment | \( RC_2 \) | Air pollution | \( RC_{21} \) | ↑ |
| | | Water pollution | \( RC_{22} \) | ↑ |
| Afforestation investment | \( RC_3 \) | Severe weather occurrence probability | \( RC_{31} \) | ↑ |
| | | Biodiversity | \( RC_{32} \) | ↓ |
+
+Explanation: the larger the five indicators listed above, the greater the repair expenditure and the corresponding environmental cost. Since each factor carries a different cost proportion, we add correction coefficients to reduce calculation error and obtain the RC accounting formula $C_{RC} = \alpha_1 \cdot C_{RC_1} + \alpha_2 \cdot C_{RC_2} + \alpha_3 \cdot C_{RC_3}$.
+
+# 4.4 Preventive cost
+
+# 4.4.1 Potential indicators for preventive cost
+
+In this section we discuss preventive cost, the expenditure used to prevent damage from land development and utilization. From the analysis of the environmental and climate change caused by the Bangladesh land development strategy in [6], we summarize three components of preventive expenditure: environmental education investment, environmental affairs cost, and environmental isolation cost.
+
+- Environmental education investment refers to investing capital to cultivate specialized environmental protection personnel or raise people's awareness of environmental protection.
+- The cost of environmental affairs refers to a series of protective measures taken to prevent pollution, such as the government formulating environmental protection policies and recruiting relevant personnel to implement pollution monitoring.
+
+
+Figure 3: Components of Preventive Cost
+
+- Environmental isolation costs. Large-scale projects such as factories and nuclear power plants have great potential damage to the environment. The most direct way to protect life and reduce the impact of environmental pollution is to isolate pollution sources and evacuate nearby residents.
+
+# 4.4.2 PC accounting system
+
+Based on the analysis of preventive expenditure indicators above, we develop a measure called Preventive Cost (PC) to assess the cost of ecological services at the prevention level. It comprises three primary indicators and six secondary indicators, shown in Table 4.
+
+Table 4: The indicators used in the Preventive cost
+
+ | Indicator | Notation | Secondary indicator | Notation | Target |
| Environmental education investment | PC1 | Educational investment | PC11 | ↑ |
| | | Publicity investment | PC12 | ↑ |
| Environmental transaction cost | PC2 | Formulating policies | PC21 | ↑ |
| | | Measuring pollution | PC22 | ↑ |
| Environmental isolation cost | PC3 | Pollution source isolation | PC31 | ↑ |
| | | Evacuating personnel | PC32 | ↑ |
+
+Explanation: the larger the six indicators listed above, the greater the preventive expenditure and the corresponding environmental cost. Since each factor carries a different cost proportion, we add correction coefficients to reduce calculation error and obtain the PC accounting formula $C_{PC} = \alpha_{1} \cdot C_{PC_{1}} + \alpha_{2} \cdot C_{PC_{2}} + \alpha_{3} \cdot C_{PC_{3}}$.
+
+# 4.5 Evaluation of environmental degradation cost
+
+In summary, we build an evaluation system based on NRD, RC, and PC, which integrates the factors introduced above. The total cost is summarized by the equation:
+
+$$
+C o s t = N R D + R C + P C.
+$$
+
+# 5 Performing a Cost Benefit Analysis
+
+# 5.1 For small community-based project
+
+In this section, we conduct case studies of environmental costs for a small community project and a large national project. For a small community project, obtaining data is a challenge, since missing or ambiguous data sets cannot be used. Following the approach of [7], we obtain the desired data with remote sensing techniques and GIS.
+
+# 5.1.1 Area selection
+
+With the aid of Google Earth, we obtain maps of different communities in a Chinese city and select a few of them for case analysis.
+
+
+Figure 4: Comparison of satellite maps of the selected Chinese city: (a) year 2010; (b) year 2017.
+
+The two satellite maps above provide the data on which we build a spatio-temporal ecological environment service evaluation model.
+
+# 5.1.2 Eco-environmental service evaluation model based on spatio-temporal pattern analysis
+
+The areas of different colors in the figure have different ecological service values.
+
- The green and blue areas are ecological resource zones, such as forest land and water sources, which provide ecological resources;
+- The construction land is an ecological demand zone, such as houses and roads, which does not contribute to ecological services;
- The yellow area is unused land and can be developed as forest land or construction land.
+
+With the help of the GIS toolbox in Matlab, combined with the two division rules above, we take the spatial and temporal changes of the ecological resource area as the research object and construct a land use transfer network of the study area, which visually and concretely indicates the mutual conversion between land use types. This reveals the direction of migration and the spatial evolution of land use types over a period of time.
+
+
+Figure 5: Land use transfer network
+
+From the land use transfer network shown in Figure 5, we can intuitively see that grassland in the region is mostly transferred to construction land (increasing cost), and that unused land is mainly converted to grassland and forest land (reducing cost). Using the formulas above, we obtain the environmental degradation cost of the region:
+
+$$
+\begin{array}{l} Cost = C _ {N R D _ {1 1}} + C _ {N R D _ {1 2}} \\ = A p Q + \sum_ {i = 1} ^ {n} P _ {i} Q _ {i}. \tag {5.1} \end{array}
+$$
+
+# 5.2 For the "South-to-North Water Transfer" project
+
+# 5.2.1 Project overview
+
+The "South-to-North Water Transfer Project" is a strategic project in China. The total length of the main canal is $1,273 \, \text{km}$, and the annual water transfer scale is 13 billion $\text{m}^3$. The goal is to solve the water shortage of more than 20 large and medium-sized cities along the route, including Beijing, Tianjin, Shijiazhuang, and Zhengzhou, while also taking into account the ecological environment and agricultural water use along the line. The engineering route is shown in the figure below.
+
+
+Figure 6: The midline project of the "South-to-North Water Transfer"
+
+We select the city of Wuhan, in the middle and lower reaches of the Han River on the middle route of the "South-to-North Water Transfer" project, as the research object, and analyze its environmental cost with ecological services considered.
+
+# 5.2.2 Data Preprocessing
+
+# (1). Data collection
+
+Based on the evaluation indicators in the ecological service evaluation model above, we collect a series of chemical indicators for various regions of Wuhan from 2007 to 2018 from the China Environmental Monitoring Center, such as $DO$ (dissolved oxygen), $COD$, ammonia nitrogen, phosphorus content, and nitrite nitrogen. From the literature [7-9], we also collect measurement data of Wuhan at other time points.
+
+# (2). Data padding
+
+The availability of data is an important issue: unreliable or inaccurate data will not support an effective assessment, so the continuity and authenticity of the research data must be ensured. However, not all data can be collected, and the data provided in the literature are in particular severely fragmented.
+
+In order to improve this situation, four methods have been proposed to improve the data, as shown below.
+
- If the data values of the indicator are smooth, a missing value can be replaced with the previous one;
- If both the previous and subsequent data points are available, the missing value can be taken as their average;
+- If the two groups of data are similar, the missing data in one group can be replaced with the value in the same position in the other group;
+- Interpolation method is used for data fitting.
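The first three padding rules can be sketched in a small helper; this is a minimal illustration of the rules above, not the authors' actual preprocessing code, and `None` marks a missing value:

```python
def pad_missing(series):
    """Fill None gaps per the rules above: neighbour average when both
    neighbours exist, otherwise carry the previous value forward (or
    backfill a leading gap)."""
    out = list(series)
    for k, v in enumerate(out):
        if v is None:
            prev = out[k - 1] if k > 0 else None
            nxt = next((x for x in out[k + 1:] if x is not None), None)
            if prev is not None and nxt is not None:
                out[k] = (prev + nxt) / 2.0  # rule 2: average of neighbours
            elif prev is not None:
                out[k] = prev                # rule 1: carry previous value
            else:
                out[k] = nxt                 # leading gap: backfill
    return out
```

For example, `pad_missing([3.0, None, 5.0])` fills the gap with the neighbour average `4.0`.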
+
+# 5.2.3 Calculating the environmental degradation cost of "South-to-North Water Transfer"
+
+# Step 1. Calculating the cost of natural resource loss
+
+Combining the data on natural resources (forest area, aquatic resources, and crop area) near the Han River in Wuhan over the past 15 years, we plot their changes based on the data processing method above.
+
+
+Figure 7: Natural resource changes
+
+From Figure 7, we may observe that
+
- The forest land resource area shrank noticeably in the early stage due to the influence of the "South-to-North Water Transfer". However, since the area of forest land has been increasing year by year since then, it is not included in the calculation.
- Crop resources show only slight fluctuations in area, so they are not included in the calculation either.
- Due to the development of the "South-to-North Water Transfer", aquatic resources from the river ecosystem are decreasing.
+
+# Step 2. Calculating maintenance costs by combining fuzzy principal component analysis
+
+We collect the chemical solubility of 11 groups of pollutants (5 groups of air pollutants, 3 groups of water pollutants, and 3 groups of land pollutants). Calculating the maintenance cost of every pollutant separately would be computationally very expensive.
+
+Therefore, we use principal component analysis on these 11 groups of pollutants and select the substances that dominate the calculated cost. The specific steps are as follows.
+
+# (1) Calculate the comparative treatment cost of various pollutants
+
+Following the literature [10], we take the unit treatment cost of each pollutant in Zhejiang Province as $\overline{C}_i$, and calculate the comparative treatment cost of each pollutant by the following formula:
+
+$$
+C _ {\text {compared}, i} = \bar {C} _ {i} \cdot \left(I _ {i} - S _ {i}\right). \tag {5.2}
+$$
+
+# (2) Data standardization
+
+Since the units of these 11 indicators differ, the data cannot be compared directly. To normalize them, all data are converted to numbers between 0 and 1. The contaminants are cost-type indicators and can be standardized by the following formula:
+
+$$
+x _ {i j} = \frac {x _ {\operatorname* {m a x}} - x _ {i j}}{x _ {\operatorname* {m a x}} - x _ {\operatorname* {m i n}}}, \quad i = 1, 2, \dots , 15; \; j = 1, 2, \dots , 11, \tag {5.3}
+$$
+
+where $x_{ij}$ is the $j$-th indicator in year $i$, $x_{max}$ is the maximum value of that indicator, and $x_{min}$ is its minimum value.
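Formula (5.3) is a cost-type min-max scaling; a minimal sketch (the sample column is a made-up illustration):

```python
def standardize_cost(column):
    """Formula (5.3): cost-type min-max scaling; a larger raw value
    maps to a smaller standardized score in [0, 1]."""
    lo, hi = min(column), max(column)
    return [(hi - x) / (hi - lo) for x in column]

# e.g. standardize_cost([1.0, 3.0, 2.0]) gives [1.0, 0.0, 0.5]
```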
+
+# (3) Validity test of data
+
+Since the data obtained are standardized, we directly perform a KMO test on the data, and the test results are shown in Table 5.
+
+Table 5: KMO and Bartlett's Test
+
+| Kaiser-Meyer-Olkin Measure of Sampling Adequacy | | 0.913 |
| Bartlett's Test of Sphericity | Approx. Chi-Square | 1116.051 |
| | df | 62 |
| | Sig. | 0.011 |
+
+As shown above, the KMO value is 0.913, much higher than 0.5, which indicates that the indicators share many common factors. The data are therefore well suited to fuzzy principal component analysis.
+
+# (4) Fuzzy principal component analysis
+
+Fuzzy principal component analysis is performed on the obtained variables to obtain the global eigenvalues, the variance contribution rate of each principal component (the first principal component contributes $69.875\%$ and the second $19.947\%$, together $89.822\%$), and the rotated component matrix. The results are shown in Table 6.
+
+Table 6: Component Matrix
+
+ | Substance | Component 1 | Component 2 |
| SO2 | 0.898 | 0.425 |
| NO2 | 0.377 | 0.805 |
| CO | 0.997 | 0.022 |
| O3 | 0.617 | -0.204 |
| PM2.5 | 0.926 | -0.34 |
| Benzene | 0.551 | 0.738 |
| Ethylene glycol | 0.997 | 0.057 |
| Dichloroethane | 0.728 | -0.62 |
| Hg | 0.973 | 0.294 |
| Cd | 0.929 | 0.335 |
| Cr | 0.953 | -0.277 |
+
+Extraction Method: Principal Component Analysis.
+
+Among the 11 indicators, $CO$, $Hg$, and ethylene glycol have the highest loadings on the first principal component, so we take these three as the main impact indicators.
+
+# Step 3. Calculating the cost of prevention
+
+The prevention cost of the "South-to-North Water Transfer Project" mainly comes from monitoring the water source along the route and the surrounding geology. Since the government has not disclosed this item, the total cost in our final calculation is
+
+$$
+C o s t = C _ {N R D} + C _ {R C} + C _ {P C}. \tag {5.4}
+$$
+
+# 5.3 Project benefit analysis
+
+The project benefit is the benefit of the land development that damages the ecosystem service system and leads to environmental degradation, including the investment in the ecosystem service system through preventive and remediation costs. The benefit can be divided into two parts: resource benefit and environmental benefit.
+
+- Resource benefit estimate
+
+$$
+C _ {Z} = \sum \left(Q _ {i} \times k _ {i}\right) + Q / A _ {S} \times B _ {S}. \tag {5.5}
+$$
+
+- Environmental benefit estimate
+
+$$
+C _ {h} = C _ {h 1} + C _ {h 2} + C _ {h 3} - C _ {h 4},
+$$
+
+$$
+C _ {h 1} = Q / A _ {S} \times B _ {S} + Q \times K _ {S},
+$$
+
+$$
+C _ {h 2} = \sum_ {i} ^ {n} Q _ {i} \times k _ {i} \times H _ {i}, \tag {5.6}
+$$
+
+$$
+C _ {h 3} = \sum_ {i} ^ {n} Q _ {i} \times k _ {i} \times (T _ {a} + T _ {w} + T _ {g} + T _ {S}),
+$$
+
+where $C_{h4}$ is a constant.
+
+Substituting the data obtained in the previous section into the above formula, we plot the figures of the project benefit results in Figure 8.
+
+
+Figure 8: Project benefit results map: (a) small projects; (b) large projects.
+
+# 6 Sensitivity analysis
+
+# 6.1 Basic issue
+
+Based on the ecological service evaluation model, we first select six indicators, such as production material consumption $(\mathrm{NRD}_1)$ and energy consumption $(\mathrm{NRD}_2)$, as the variables for local sensitivity analysis. (Normally, the proportion of preventive cost in the environmental cost is relatively small, so the changes in the preventive costs $(PC_1, PC_2, PC_3)$ are small and the eco-service model is considered insensitive to them. To simplify the analysis, environmental education investment, environmental transaction cost, and environmental isolation cost are aggregated and denoted $PC$.) Only one parameter value is changed during the analysis; all other parameters remain unchanged.
+
+The sensitivity of the model is measured by the change in the model output when a parameter changes. During calculation, each parameter is perturbed slightly, e.g. by $\pm 5\%$, and the resulting fluctuation of the model output with respect to that single input is the sensitivity index.
+
+In the local sensitivity analysis, we use the finite difference method to calculate the sensitivity of each index. The calculation formula is as follows:
+
+$$
+\frac {\partial y}{\partial x _ {i}} = \frac {y \left(x ^ {i}\right) - y (x)}{\Delta x _ {i}} + O (\Delta x _ {i}) \approx \frac {y \left(x ^ {i}\right) - y (x)}{\Delta x _ {i}}. \tag {6.1}
+$$
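The one-at-a-time perturbation of formula (6.1) can be sketched as follows; the linear cost model and its weights are hypothetical stand-ins for the actual eco-service model:

```python
def sensitivity(model, x, i, rel=0.05):
    """One-at-a-time finite difference, as in (6.1): perturb x_i by +rel
    and return the relative change of the model output."""
    base = model(x)
    xp = list(x)
    xp[i] *= (1.0 + rel)
    return (model(xp) - base) / base

# Hypothetical linear cost model: weighted sum of the six indicators.
weights = [0.4, 0.2, 0.1, 0.15, 0.1, 0.05]

def cost(x):
    return sum(w * v for w, v in zip(weights, x))

s0 = sensitivity(cost, [1.0] * 6, i=0)  # +5% in x_0 -> about +2% in cost
```

Repeating this for each index and comparing the relative output changes ranks the indicators by sensitivity.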
+
+
+Figure 9: Natural resource changes
+
+# 6.2 Sensitivity analysis result
+
- From the calculated ranges of output variation, production material consumption $(NRD_{1})$ has the most obvious impact on the model, followed by air and water purification investment $(RC_{2})$, energy consumption $(NRD_{2})$, afforestation investment $(RC_{3})$, preventive cost $(PC)$, and waste decomposition investment $(RC_{1})$. When the production material consumption fluctuates by $\pm 5\%$, the environmental cost fluctuates by about $+7.9\%$ and $-7.6\%$, respectively; when the waste decomposition investment fluctuates by $\pm 5\%$, the environmental cost fluctuates by approximately $+2.0\%$ and $-2.5\%$, respectively.
- The six parameters are approximately linear in the model output and are proportional to the changes in environmental cost.
- The selected parameters produce clear fluctuations in the model output (greater than $\pm 2.0\%$), which indicates that the chosen indicators are representative and that the model has good sensitivity and validity.
+
+# 7 Implications on Planners and Managers
+
+When the ecological environment is taken into account, our ecological service model has direct implications for land use project planners and managers: project technicians and managers need to analyze and evaluate the environmental impacts after the implementation of land use planning, and propose countermeasures to prevent or mitigate adverse environmental impacts.
+
+Therefore, we choose a 10-square-kilometer land use development area for simulation. The data of this development area are complicated, so they must be screened and useless data deleted. We therefore process the data with the entropy weight method: the smaller the information entropy, the lower the disorder degree of the information, the greater its utility value, and the larger the weight of the index; conversely, the larger the information entropy, the higher the disorder, the smaller the utility value, and the smaller the weight of the index. The steps are as follows.
+
+Step 1. Standardize the data of each indicator:
+
+$$
+x _ {i} = \frac {s _ {i} - s _ {\operatorname* {m i n}}}{s _ {\operatorname* {m a x}} - s _ {\operatorname* {m i n}}}. \tag {7.1}
+$$
+
+Step 2. Calculate the entropy value of each indicator:
+
+$$
+e _ {i} = - k \sum p _ {i j} \ln \left(p _ {i j}\right), \tag {7.2}
+$$
+
+$$
+k = 1 / \ln (n), \tag {7.3}
+$$
+
+$$
+p _ {i j} = \frac {x _ {i j}}{\sum_ {i = 1} ^ {m} x _ {i j}}, \tag {7.4}
+$$
+
+where $e_i$ takes values in $[0,1]$.
+
+Step 3. Calculate the difference coefficient between the indicators.
+
+The smaller the entropy value, the larger the coefficient of variation between indicators, the more important the indicator is. Its formula is $g_{i} = 1 - e_{i}$ .
+
+Step 4. Define the weight values:
+
+$$
+w _ {i} = g _ {i} / \sum_ {i = 1} ^ {m} g _ {i}. \tag {7.5}
+$$
+
+Step 5. Calculate the comprehensive evaluation value:
+
+$$
+v _ {j} = \sum_ {i = 1} ^ {m} w _ {i} p _ {i j} + \sum_ {k = 1} ^ {m} w _ {k} \left(1 - p _ {i j}\right). \tag {7.6}
+$$
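The entropy-weight steps above can be sketched as follows; this is a minimal illustration, assuming `matrix[i][j]` holds the already standardized, non-negative value of indicator $i$ in sample $j$:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method (Steps 1-4). matrix[i][j] is the standardized,
    non-negative value of the i-th indicator in the j-th sample."""
    n = len(matrix[0])
    k = 1.0 / math.log(n)                       # formula (7.3)
    diffs = []
    for row in matrix:
        total = sum(row)
        p = [x / total for x in row]            # proportions, formula (7.4)
        e = -k * sum(pij * math.log(pij) for pij in p if pij > 0)  # (7.2)
        diffs.append(1.0 - e)                   # difference coefficient g_i
    g_sum = sum(diffs)
    return [g / g_sum for g in diffs]           # formula (7.5)
```

A perfectly uniform indicator has entropy 1 and therefore weight near 0, while a highly uneven indicator receives most of the weight, matching the explanation in Step 3.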
+
+Finally, the indicator data of the filtered development zone is substituted into the model to obtain the final environmental cost distribution map as in Figure 10.
+
+
+Figure 10: Environmental cost distribution map: (a) floor plan; (b) 3D map.
+
+It can be seen from the figure that the environmental cost of the land use development project is about RMB 350,000, with the minimum located around coordinates (1700, 1400). This helps project designers and managers choose the location of the land development project and reduce its economic cost.
+
+# 8 Model Changes Over Time
+
+Over time, scientific and technological innovation, national environmental protection policies, and national land use development policies all change, and these changes subtly affect our model. We therefore add human science indicators to the original model to improve its evaluation of environmental cost over time. The changes to the model are shown below:
+
+
+Figure 11: Model after the change
+
+As times develop, advances in science and technology, further improvements to relevant policies, and other human science factors contribute to the ecosystem service system. The relationship between human science factors and environmental costs is as follows.
+
+
+Figure 12: Relationship between human science factors and environmental costs
+
+As can be seen from the figure, the two show a negative correlation: as the human science factors take effect, the corresponding environmental costs gradually decrease. The calculation formula for the final environmental cost can therefore be revised accordingly.
+
+# 9 Strengths and Weaknesses
+
+# 9.1 Strengths
+
+- The model starts from the definition of environmental cost and builds an ecological service evaluation model from three parts: natural resource depletion, remediation expenditure, and preventive expenditure. It combines methods such as grey correlation analysis and principal component analysis, and the algorithm is simple and easy to learn. The model rests on rigorous mathematical derivation, the solution process is strict, and the results are reliable and persuasive.
+- Through the sensitivity test in the model validity analysis, the model has relative rationality and good generalization.
+
+# 9.2 Weaknesses
+
+- Data deviation: The data we collect come from multiple websites, and differences in statistical standards may bias the conclusions. More importantly, the lack of data for additional metrics may introduce errors into the evaluation model.
+- Subjectivity: Some subjective methods are used in the model, and some indicators are based on our own experience and intuition. Because some conditions are simplified, there is a gap from reality, which affects the accuracy of the results.
+
+# 10 Conclusion
+
+In this paper, we use three components, natural resource depletion, preventive cost, and repair cost, to capture the true economic cost of land use projects when ecosystem services are considered. To analyze the model's effectiveness, we perform a sensitivity analysis to identify the main factors affecting model accuracy, and the results show that the model is effective.
+
+In addition, we extract usable indicators from the collected complex data by the entropy weight method and substitute them into the model to obtain the environmental cost distribution map for the whole region. Finally, as time goes by, our model is adapted appropriately: the introduced human science factors are negatively correlated with environmental costs, yielding new calculation expressions.
+
+# References
+
+[2] Xiaoping Shi. Economic Analysis of Sustainable Utilization of Land Resources [D]. Nanjing Agricultural University, 2001.
+[3] Xu W, Yin Y, Zhou S. Social and economic impacts of carbon sequestration and land use change on peasant households in rural China: a case study of Liping, Guizhou Province[J]. Journal of Environmental Management, 2006, 85(3), 54-59.
+[4] Wu JJ. Land use changes: Economic, social, and environmental impacts[J]. Choices. 23 (4): 6-10, 2008, 23(4), 6-10.
+[5] Yang, Q., Liu, G., Casazza, M., Campbell, E., Giannetti, B., Brown, M., December 2018. Development of a new framework for non-monetary accounting on ecosystem services valuation. Ecosystem Services 34A, 37-54.
+[6] Gómez-Baggethun, E., de Groot, R., Lomas, P., Montes, C., 1 April 2010. The history of ecosystem services in economic theory and practice: From early notions to markets and payment schemes. Ecological Economics 69 (6), 1209-1218.
+[7] Guzmán G I, González de Molina M, Alonso A M. The land cost of agrarian sustainability. An assessment[J]. Land Use Policy, 2011, 28(4), 825-835.
+[8] Jindal R, Kerr J M, Ferraro P J, et al. Social dimensions of procurement auctions for environmental service contracts: Evaluating tradeoffs between cost-effectiveness and participation by the poor in rural Tanzania[J]. Land Use Policy, 2013, 31(31), 71-80.
+[9] Verburg, René, Rodrigues Filho S, Debortoli N, et al. Evaluating sustainability options in an agricultural frontier of the Amazon using multi-criteria analysis[J]. Land Use Policy, 2014, 37, 27-39.
+[10] Chee, Y., 2004. An ecological perspective on the valuation of ecosystem services. Biological Conservation 120, 549-565.
+[11] Guanghai Lei. Research on Environmental Impact Assessment of Land Development and Consolidation Planning [D]. Nanjing Agricultural University, 2009.
+[12] Jahunul Islam M. Impact of private land development on the environment of the Eastern Fringe Area of Dhaka[J]. 2004, 76-79.
+[13] Wenliang Yu. Research on urban vein industry development model and its resource efficiency and environmental benefit estimation method [D]. Northwest University, 2009.
+[14] Li Gao, Jitao Wang. Summary of the impact of the South-to-North Water Transfer Project on the ecological environment[J]. Water Resources Science and Technology and Economy, 2008(02), 131-133.
+[15] Lingling Zhang. Evaluation and Influencing Factors of Ecosystem Services in Bai- long River Basin of Gansu Province [D]. Lanzhou University, 2016.
+[16] Zhipeng Yang, Jiawei Xu, Xinghua Feng, Meng Guo, Yinghua Yan, Xuejiao Gao. Study on the impact of land use change on habitats in Northeast China based on InVEST model[J]. Ecological Science, 2018, 37(06), 139-147.
+[17] Wurster D, Artmann M. Development of a Concept for Non-monetary Assessment of Urban Ecosystem Services at the Site Level[J]. Ambio, 2014, 43(4), 454-465.
\ No newline at end of file
diff --git a/MCM/2019/E/1924813/1924813.md b/MCM/2019/E/1924813/1924813.md
new file mode 100644
index 0000000000000000000000000000000000000000..058c51768d4b88fbd51d95c0ec09b22ae77eea41
--- /dev/null
+++ b/MCM/2019/E/1924813/1924813.md
@@ -0,0 +1,448 @@
+# 2019
+
+# MCM/ICM
+
+# Summary Sheet
+
+Our team was hired to tackle one of the greatest problems remaining in the 21st century: how do we prevent the "tragedy of the commons"? Specifically, our task was to "create an ecological services valuation model to understand the true economic costs of land use projects when ecosystem services (ES) are considered." We discovered that answering this question is key for governments to rent land to entities for land-use projects at a price necessary to preserve the value of ES owned by all.
+
+Our team began by exploring the axioms of rational choice, the fundamental philosophical underpinnings of value, and the economic systems which best support our theory of value. We settled on Bayesian Decision Theory, wellbeing-based Utilitarianism, and Georgism, respectively. Our model was created in the context of satisfying the requirements of these three philosophies.
+
+We then explored preexisting models for ES valuation and extracted their best elements. We ultimately settled on a model which prices land-use projects' impact on ES in terms of dollars necessary to artificially recreate the ES expected to be destroyed. Where destroyed ES may not easily be recreated, we calculate the expected Quality Adjusted Life Years prevented from occurring, and convert these into dollars at a median, non-industry rate.
+
+Our final model is as follows:
+
+$$
+V = t(E - r + \epsilon)
+$$
+
+where $V$ is the value of the total estimated economic cost of the land-use project over its life span in years, $t$ is the expected life span of the land-use project in years, $E$ is the sum of economic benefit gained from all ES per year, $r$ is the total revaluation loss of all assets (based on periodic impairment tests) per year, and $\epsilon$ is the unaccounted economic benefit of other ES not considered in our model. $E$ is further defined as:
+
+$$
+E = E _ {P} + E _ {R} + E _ {C}
+$$
+
+where $E_P, E_R$ , and $E_C$ are the sum of the economic benefit gained from provisioning, regulating, and cultural ES, respectively. These are each defined as:
+
+$$
+E_{P} = F + G, \quad E_{R} = W + A, \quad E_{C} = T
+$$
+
+where $\pmb{F}$ represents Food and Fiber, $\pmb{G}$ represents Genetic Resources, $\pmb{W}$ represents Water Quality, $\pmb{A}$ represents Air Quality, and $\pmb{T}$ represents Eco-Tourism. The method for calculating these is outlined in our report.
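
The composed model can be sketched as a small function. This is an illustration, not the paper's own code: the name `es_project_cost` is ours, and the example per-year figures are the Food and Fiber, Water Quality, and Air Quality values computed for the Redding project later in the report.

```python
def es_project_cost(t, F=0.0, G=0.0, W=0.0, A=0.0, T=0.0, r=0.0, eps=0.0):
    """V = t * (E - r + eps), where E = E_P + E_R + E_C."""
    E_P = F + G   # provisioning: Food and Fiber, Genetic Resources
    E_R = W + A   # regulating: Water Quality, Air Quality
    E_C = T       # cultural: Eco-Tourism
    E = E_P + E_R + E_C
    return t * (E - r + eps)

# e.g. a 75-year project with yearly ES figures and no revaluation loss
V = es_project_cost(75, F=16.96, W=814.10, A=107.77)
```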
+
+We demonstrated the application of our model for a hypothetical 3-acre housing project in Redding, California expected to last 75 years and a 200-acre amusement park project in Valdosta, Georgia expected to last 150 years. Our model yielded the true ES costs of these projects as \$143,612 and \$1,825,764, respectively. Further, we discuss powerful ways evaluators can estimate probabilities and utilities themselves, such as utilizing prediction markets and Fermi Estimation.
+
+Much of the beauty of the global ecosystem lies in its diversity; however, this necessitates a way for such a model to be tailored to any number of vastly unique micro-ecosystems. Our model accounts for this by creating a process for the model's users to create new variables or subtract from the existing variables. In addition, the process creates a way for the model to be reassessed and changed over time. Ultimately, this allows planners and managers to create tax structures that account for the effect that land-use projects will have on the commons, both paying back the damage done to what is owned by all sentient beings and disincentivizing projects that would cause large scale environmental degradation.
+
+# Ecological Services Valuation Model: Understanding the True Cost of Land-Use Projects
+
+1924813
+
+January 29, 2019
+
+# Contents
+
+# 1 Introduction 3
+
+# 2 Background 4
+
+2.1 The Threat Facing the commons 4
+2.2 The Need to Price the Commons 4
+2.3 Definitions 4
+
+2.3.1 Ecosystem 4
+2.3.2 Ecosystem Services (ES) 4
+2.3.3 Biodiversity 4
+2.3.4 Value 5
+2.3.5 Quality Adjusted Life Years 5
+2.3.6 Well-Being 5
+
+2.4 Assumptions 5
+
+2.4.1 Ecosystems are Valuable as a Means to an End 5
+2.4.2 Land-Use Projects Must Be Time Based 6
+2.4.3 Bayesian Decision Theory is the Framework for Rational Choice 6
+2.4.4 The Default of Nature is not Optimal 6
+2.4.5 Utility Independence of Land-Use Projects 6
+
+# 3 Past Ecosystem Evaluation Model 7
+
+3.1 The Value of a Statistical Life (VSL) Model 7
+
+3.1.1 Variable Inaccuracy 7
+3.1.2 Non-Inclusive Cost 7
+
+# 4 Modeling Ecosystem Degradation 8
+
+4.1 Accounting for Liabilities 8
+
+4.1.1 Ecosystem Accounting 8
+4.1.2 Treatment of Cultivated Biological Resources 8
+4.1.3 Treatment of Operating Leases 8
+
+# 5 Our Algorithm 9
+
+5.1 Our Hypothetical Scenarios 9
+5.2 Provisioning Ecosystem Services 9
+
+5.2.1 Food and Fiber $(F)$ 9
+
+5.2.2 Genetic Resources $(G)$ 10
+
+5.3 Regulating Ecosystem Services 10
+
+5.3.1 Water Quality (W) 10
+5.3.2 Air Quality (A) 11
+
+5.4 Cultural Ecosystem Services 11
+
+5.4.1 Eco-Tourism $(T)$ 11
+
+5.5 Further Variables $(\epsilon)$ 12
+5.6 ES Valuation Model 12
+5.7 Project Results 12
+5.8 Sensitivity Analysis 13
+5.9 Limitations of Our Model 13
+
+6 Counterarguments 14
+
+6.1 Critiques of Valuation Based on Restitution 14
+6.2 Intractability of Probability and Utility Estimates 14
+
+6.2.1 Estimating Probabilities: Prediction Markets 14
+6.2.2 Fermi Estimation 14
+
+7 Implications of our Model 16
+
+7.1 A More Representative Cost 16
+7.2 After Estimated Life Spans 16
+7.3 Evaluators Need to be Utilitarian 16
+
+8 Conclusion 18
+9 References 19
+
+# 1 Introduction
+
+Our team was hired to create an ecological services (ES) valuation model to understand the true economic costs of land-use projects when ES are considered.
+
+In our pursuit of creating a model, we began by researching the philosophical underpinnings of value. We decided that well-being, grounded in conscious subjective experience, is the only intrinsically valuable good. While we maintain a degree of moral uncertainty on this matter, we ultimately decided to base our valuation of ecosystem services on their expected impact on the well-being of conscious creatures, most especially humans.
+
+We then explored the economic systems that best support our theory of value, and settled on Georgism, an economic philosophy which asserts that, while individuals ought to own the fruits of their own labor, natural resources are a public good [1]. Then, we researched the possible frameworks we could use to price ecosystem services, and determined the price should reflect the cost of artificially replacing ES. In other words, the value of an ES depends on the price of replacing its services. For services that are irreplaceable, we propose a method of converting lost environmental services into Quality-Adjusted Life Years (QALYs), which may then be converted into dollars based on the cost of producing QALYs.
+
+We explored preexisting models for pricing the ES affected by land-use projects, and found several highly-developed, but difficult to apply models. To solve for this, we sought to create a model which balances accurate valuation with ease of applicability, while still maintaining our values of maximizing well-being. Thus, we designed a general model with only the most applicable variables.
+
+# 2 Background
+
+# 2.1 The Threat Facing the commons
+
+Despite the long history of valuing select portions of nature economically, there seems to be a new quality to current approaches [2]. This new quality is based on an increasingly clear observation: there is real economic cost to over-exploiting ecosystems. Worse, this cost is not limited to oneself and one's property, but extends to everyone, as we live intimately connected in a global biosphere. Further, this implies that we will need to increase administration of the commons. A 'commons' is any resource that belongs to all sentient beings [3]. To this end, we need an open-source model that rationally prices the commons. Such a model must take into account the relative necessity of each plot of land capable of being exploited, so as to protect humanity's long-term goals.
+
+# 2.2 The Need to Price the Commons
+
+The motivation for creating an ES valuation model is twofold. First, the model produces a tangible metric that allows the complex concept of ES to be easily understood. Second, the model provides a way to hold respective entities accountable. To achieve these desired outcomes, we discuss accounting principles and frameworks that encourage greater accountability for, and transparency of, a land-use project's economic cost. We also explore the possible implications of our model, including land-use tax disincentives, and include recommendations for administrators who choose to use our model.
+
+# 2.3 Definitions
+
+# 2.3.1 Ecosystem
+
+"An ecosystem is a dynamic complex of plant, animal, and microorganism communities and the nonliving environment, interacting as a functional unit. Humans are an integral part of ecosystems" [6].
+
+# 2.3.2 Ecosystem Services (ES)
+
+"Ecosystem services are the benefits people obtain from ecosystems. These include provisioning services such as food, water, timber, and fiber; regulating services that affect climate, floods, disease, wastes, and water quality; cultural services that provide recreational, aesthetic, and spiritual benefits; and supporting services such as soil formation, photosynthesis, and nutrient cycling. (See Figure A.) The human species, while buffered against environmental changes by culture and technology, is fundamentally dependent on the flow of ecosystem services." [6]
+
+# 2.3.3 Biodiversity
+
+"Biodiversity is the variability among living organisms. It includes diversity within and among species and diversity within and among ecosystems. Biodiversity is the source of many ecosystem goods, such as food and genetic resources, and changes in
+
+biodiversity can influence the supply of ecosystem services." [6].
+
+# 2.3.4 Value
+
+"Value" can be something as intangible as the social satisfaction of belonging to a community. "While "value" has many non-monetary connotations (as proponents of economic valuation of nature are quick to point out), a monetary value, a price, is what matters for economic valuation" [4]
+
+A common way economists price value is contingent valuation, where value is determined through surveys where participants state their preferences and willingness to pay for certain outcomes, such as the preservation of an environmental feature [7].
+
+# 2.3.5 Quality Adjusted Life Years
+
+Part of our model is based on Quality Adjusted Life Years (QALYs), a useful health economics metric that combines length of life with quality of experience. "One QALY is equal to 1 year of life in perfect health" [8]. Our paper will not go into detail on the various methods of calculating QALYs, but we will use the median non-industry threshold price of a QALY, \$9,500, observed in one study [8]. For instance, if a land-use project is thought to prevent 20 QALYs from occurring over its lifecycle through various predictable Nth-order effects, and one QALY is priced at \$9,500, then the true cost of the land-use project should include the \$190,000 in damage to human well-being and life expectancy.
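
The conversion described above is a single multiplication; as a minimal sketch (the variable names are ours, and the 20-QALY figure is the hypothetical from the text):

```python
QALY_PRICE = 9_500        # median non-industry threshold price of one QALY [8]
qalys_prevented = 20      # expected QALYs a project prevents over its lifecycle
well_being_damage = qalys_prevented * QALY_PRICE  # dollars added to the true cost
```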
+
+# 2.3.6 Well-Being
+
+"Human well-being is assumed to have multiple constituents, including the basic material for a good life, such as secure and adequate livelihoods, enough food at all times, shelter, clothing, and access to goods; health, including feeling well and having a healthy physical environment, such as clean air and access to clean water; good social relations, including social cohesion, mutual respect, and the ability to help others and provide for children; security, including secure access to natural and other resources, personal safety, and security from natural and human-made disasters; and freedom of choice and action, including the opportunity to achieve what an individual values doing and being. Freedom of choice and action is influenced by other constituents of well-being (as well as by other factors, notably education) and is also a precondition for achieving other components of well-being, particularly with respect to equity and fairness." [6] In our report, the philosophical underpinnings for the valuation of ES rests on its influence on human well-being, although we admit that the lives of non-human conscious creatures are intrinsically valuable as well.
+
+# 2.4 Assumptions
+
+# 2.4.1 Ecosystems are Valuable as a Means to an End
+
+We make the assumption that ES are not intrinsically valuable, but because of their impact upon sentient well-being. This is our most essential assumption. The assumption is based on a utilitarian and biocentrist ethical framework that the only intrinsic good is the subjective experience of all sentient life [10][11]. Therefore, in
+
+creating a model to find the price of a land-use project's effect on ES, the cost will be measured in how environmental degradation affects surrounding beings' subjective experience. Due to the utilitarian nature of this framework, life is not viewed as infinitely valuable, which allows effects to be monetarily quantified. In addition, we maintain the traditional biocentrist view that not all life is equally valuable, due to varying levels of consciousness (i.e., the subjective experience of a beetle is less valuable than a human's) [11].
+
+# 2.4.2 Land-Use Projects Must Be Time Based
+
+Land-use Projects must be time-bound in order for our model to work. If this were not the case, our model would have to account for the expected impact of ES on well-being over a nearly infinite time-horizon.
+
+# 2.4.3 Bayesian Decision Theory is the Framework for Rational Choice
+
+Bayesian Decision Theory is the proper statistical and decision-theoretic approach to quantifying the value of ES. In more concrete terms, we believe we ought to make decisions based on expected utility (the product of utility and probability) rather than just known effects. For example, we may know for certain that building a highway through an everglade will kill at least 1,000 fish, but our best predictions may suggest that there is a $20\%$ chance that 10,000 fish will be killed. The expected dis-utility of that tail scenario alone is $0.2 \times 10{,}000 = 2{,}000$ fish, so at least 2,000 expected fish deaths should be factored into our model.
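
Read as an expected-value computation over mutually exclusive scenarios (an interpretation we add; the text only states the 20% tail explicitly), the highway example looks like:

```python
def expected_disutility(outcomes):
    # outcomes: (probability, loss) pairs over mutually exclusive scenarios
    return sum(p * loss for p, loss in outcomes)

# Highway example: 80% chance the toll stays at the certain floor of
# 1,000 fish, 20% chance it reaches 10,000 in total.
fish_lost = expected_disutility([(0.8, 1_000), (0.2, 10_000)])
```

The 20% tail alone contributes 0.2 × 10,000 = 2,000 expected fish deaths, consistent with the "at least 2,000" figure in the text.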
+
+# 2.4.4 The Default of Nature is not Optimal
+
+Changing the landscape is not inherently bad. Assuming otherwise presumes the natural state of nature happens to be optimal. This is clearly not the case. We can do better than the impersonal forces of evolution that optimize for survivability and reproduction rather than well-being. Given that land-use projects are theoretically permissible, we have a need to price ecosystem services that are expected to ultimately be affected by land-use projects.
+
+# 2.4.5 Utility Independence of Land-Use Projects
+
+While the actual non-environmental function and positive utility of land-use projects is extremely important, we do not consider this for our model. It is up to administrators to price the value of a hospital to be built. We consider it our job to price the ES. Therefore, our model is independent of the land-use project's utility.
+
+# 3 Past Ecosystem Evaluation Model
+
+# 3.1 The Value of a Statistical Life (VSL) Model
+
+Today, most research institutions and high-income countries base their ES models on the VSL concept, i.e., the quantification of a group's willingness to pay (WTP) to decrease the likelihood of dying. To do so, researchers estimate how much the average person would willingly pay to decrease their probability of dying by a marginal amount. The model then quantifies the VSL as the average WTP of a population multiplied by the population size [14][15].
+
+$$
+\text{VSL} = \text{WTP} \times \text{Population} \tag{3.1}
+$$
+
+The total economic cost of environmental degradation is then calculated by multiplying the VSL by the lives lost to its effects.
+
+$$
+\text{Economic Cost} = \text{VSL} \times \text{Lives Lost to Environmental Degradation} \tag{3.2}
+$$
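
Equations (3.1) and (3.2) compose into one line; the numbers below are purely illustrative, and the function name is ours:

```python
def vsl_economic_cost(avg_wtp, population, lives_lost):
    vsl = avg_wtp * population   # eq. (3.1): VSL = WTP * Population
    return vsl * lives_lost      # eq. (3.2): cost = VSL * lives lost

# e.g. an average WTP of $50 across 100,000 residents and 2 expected deaths
cost = vsl_economic_cost(50.0, 100_000, 2)
```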
+
+We see this status quo way of modeling the monetary cost of environmental degradation as being problematic for two reasons:
+
+# 3.1.1 Variable Inaccuracy
+
+Accuracy for this model depends upon the ability of humans to accurately gauge the value of decreasing the probability of mortality. This is problematic because in practice, humans tend to be wildly insensitive to the implications of such complex concepts.
+
+# 3.1.2 Non-Inclusive Cost
+
+Even if the proper way to evaluate economic cost is to base models purely on the impact that environmental degradation has on human populations, the VSL model does not fully capture all such costs. By focusing purely on loss of life, the impact environmental degradation has on living populations' quality of life is lost.
+
+A land-use project that does not result in any deaths can still affect the quality of life of surrounding populations. For example, suppose a land-use project's sole environmental impact is contaminating a municipality's water supply, and the municipality is able to import water from a neighboring city's supply. In such a scenario, the VSL model would evaluate the environmental cost as zero dollars, as no one was killed by the project. However, the project still came at a cost to the surrounding population. It can thereby be inferred that the VSL model needs significant improvement to accurately evaluate human cost.
+
+# 4 Modeling Ecosystem Degradation
+
+# 4.1 Accounting for Liabilities
+
+Our model's general foundation is influenced by an Ecosystem Health and Sustainability report by Sue Ogilvy at the Fenner School of Environment and Society [17]. Throughout Ogilvy's report, the challenges associated with accounting for liabilities related to ecosystem degradation are addressed with several aspects.
+
+# 4.1.1 Ecosystem Accounting
+
+The first step in accounting for liabilities (where liabilities reflect the lost economic value of an ecosystem) is quantifying ecosystem information with standard economic accounts for production, income, capital, and net worth. To do so, evaluators must spatially delineate different ecosystem types within a broader area of interest, assess its condition, and categorize each asset's service. The asset is broken down into three specific classifications of services [6], as follows:
+
+- Provisioning (food, fresh water, fiber, biochemicals, and genetic resources)
+- Regulating (climate regulation, disease regulation, water regulation, water purification, and pollination)
+- Cultural (spiritual and religious, recreation, aesthetic, educational, cultural heritage)
+
+Lastly, evaluators must assess the relative value of the various benefits offered from the ES. This value will then be used when assigning total economic cost of a land-use project. Note that supporting services (those that are necessary for the production of all other ES, such as soil formation, nutrient cycling, and primary production) were not listed because "[these services] differ from provisioning, regulating, and cultural services in that their impacts on people are either indirect or occur over a very long time, whereas changes in the other categories have relatively direct and short-term impacts on people" [6].
+
+# 4.1.2 Treatment of Cultivated Biological Resources
+
+All provisioning assets that are bearer plants (i.e., plants that bear produce) are subject to periodic impairment tests. This ensures that the quality of individual assets is accounted for over a period of time, which ultimately holds the entity accountable for the lasting effects of its land-use project. The reduction in a given asset's value must be communicated as an outflow of economic benefit, which is labeled as revaluation loss and must be factored into the final valuation model.
+
+# 4.1.3 Treatment of Operating Leases
+
+The lease of an ecosystem is considered an operating lease and not a financial lease. An operating lease is an asset that either depreciates over time or is matched with a liability. In other words, the entity makes a contract that allows for the use of an asset but does not convey rights of ownership of the asset [18]. Essentially, entities may be obliged to restore certain ES if necessary.
+
+# 5 Our Algorithm
+
+# 5.1 Our Hypothetical Scenarios
+
+To facilitate understanding of our ES valuation algorithm, we have constructed two scenarios under which we will demonstrate our algorithm in action:
+
+Project 1 (Small Scale Land-Use Project): Lake Redding Estates, located in Redding, CA, is planning on expanding their housing development by building a new cul-de-sac (going West off of Harland Dr.) consisting of ten new houses. The land-use project is expected to cost three million dollars without accounting for environmental considerations and would take place over three acres of land. For the purposes of this report, we will assume that this project will last seventy-five years.
+
+Project 2 (Large Scale Land-Use Project): Six Flags is planning on opening a new amusement park location in Valdosta, GA. The land-use project is expected to cost three hundred million dollars without accounting for environmental considerations, and would take place over two-hundred acres of land (including parking) [19]. The expected life span of the park is one hundred fifty years, based on the oldest operating amusement park (172 years). This area of this project is defined by the coordinates: (30.865744284443828,-83.18809971213341), (30.87546684656096,-83.18345922719931), and (30.873419357875783,-83.168538852084).
+
+# 5.2 Provisioning Ecosystem Services
+
+# 5.2.1 Food and Fiber $(F)$
+
+One of the primary provisioning ES that our model considers is the available combination of food and fiber from a given ecosystem. This variable serves an important role in our model because humans take advantage of many production-related services from plants, which ultimately results in a quantifiable economic value. These services are represented by the NPP (Net Primary Production: the rate at which all the plants in an ecosystem produce net useful chemical energy [20]), which is measured as the mass of carbon per unit area per year for a given ecosystem $\left(\frac{g \cdot C}{m^2 \cdot yr}\right)$. This ES can then be integrated into our model with its associated mean NPP [20]. To convert the mean NPP to a monetary value, a shadow price for NPP will be used [21]. The shadow price (the estimated price of a good or service for which no market price exists) is \$1,996 per million kg of carbon per year.
+
+Project 1: The ecosystem type in Lake Redding Estates is categorized as Oak Woodlands by the California Environment Information Sources [22]. The mean NPP for woodlands is $700 \frac{g \cdot C}{m^2 \cdot yr}$. The calculated shadow price at three acres totals \$16.96 per year, or \$1,272 over 75 years.
+
+Project 2: The projected Six Flags location in Valdosta, GA, is categorized as coastal plains [23], and is located in a woodland ecosystem. The mean NPP for woodlands is $700 \frac{g \cdot C}{m^2 \cdot yr}$. The calculated shadow price at two hundred acres totals \$1,130.85 per year, or \$169,627.50 over 150 years.
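
Both project figures follow from the same unit conversion; a minimal sketch (function name ours, constants from the text):

```python
M2_PER_ACRE = 4046.86        # square meters in one acre
SHADOW_PRICE = 1996.0        # dollars per million kg of carbon per year [21]

def food_fiber_cost_per_year(acres, npp_g_c_per_m2=700.0):
    grams_carbon = acres * M2_PER_ACRE * npp_g_c_per_m2
    million_kg_carbon = grams_carbon / 1e9   # 1e9 g = 1 million kg
    return million_kg_carbon * SHADOW_PRICE

round(food_fiber_cost_per_year(3), 2)     # Project 1: ~16.96 dollars per year
round(food_fiber_cost_per_year(200), 2)   # Project 2: ~1130.85 dollars per year
```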
+
+# 5.2.2 Genetic Resources $(G)$
+
+This ES is defined as the genes and genetic information used for animal and plant breeding and biotechnology [6]. The value of individual species is proportional to the role that they play in the ecosystem in addition to their potential use by humans for research purposes (e.g. potential future use in developing pharmaceuticals). One way of quantifying this value involves considering the probability of humanity ever creating useful pharmaceuticals from the given species, as well as the value of this theoretical pharmaceutical. A simpler way to calculate the value of a genetic resource that could become critically endangered through a land-use project is to determine which is lower: 1) the immediate cost to sequence and store its DNA in addition to the future cost to clone it back into existence for research purposes, or 2) the cost to keep said endangered species alive in another ecosystem.
+
+Project 1: Our research does not indicate any endangered species in the three-acre area to be converted into houses.
+
+Project 2: Our research does not indicate any endangered species in the two-hundred-acre area to be converted.
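
The "whichever is lower" rule described above is simply a minimum over the two preservation routes; a trivial sketch (names ours):

```python
def genetic_resource_cost(sequence_store_clone_cost, relocation_cost):
    # value of a threatened genetic resource: the cheaper preservation route,
    # i.e. sequencing/storing/cloning its DNA vs. relocating the species
    return min(sequence_store_clone_cost, relocation_cost)
```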
+
+# 5.3 Regulating Ecosystem Services
+
+# 5.3.1 Water Quality $(W)$
+
+The cost that land-use projects have on Water Quality will be defined in terms of decreased arable land, i.e., land into which water can infiltrate and seep. When rainwater cannot infiltrate the ground, runoff to nearby water sources increases [24], and water accumulates pollution as it runs across the ground. To calculate the amount of water that will fall on the land-use project's non-arable land per year, the location's average rainfall $R$, in inches per square foot per year, must be converted into gallons (1 inch of rain per square foot per year equates to 0.6 gallons). This is then multiplied by the total non-arable area in square feet $N$ of the land-use project through dimensional analysis, and finally by the cost of purifying a gallon of water in the United States (\$0.0003), something that, if done, would maintain the welfare of sentient beings [26].
+
+$$
+W = R \cdot N \cdot 0.00018 \tag{5.1}
+$$
+
+Project 1: Using Redding's average rainfall, and assuming that forty percent of the land will remain arable, results in a price of \$814.10 per year, or \$61,057.50 over 75 years [27].
+
+Project 2: Using Valdosta's average rainfall, and assuming that fifty percent of the land will remain arable, results in a price of \$3,817 per year, or \$572,550 over 150 years [28].
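As a sketch, formula (5.1) can be applied directly; the rainfall and area figures below are illustrative placeholders, not the exact inputs used for the two projects.

```python
def water_quality_cost(rainfall_in, non_arable_sqft, rate=0.0018):
    """Yearly cost W of purifying runoff from non-arable land, per formula (5.1).

    rainfall_in     -- average rainfall in inches per square foot per year
    non_arable_sqft -- non-arable area N of the project in square feet
    rate            -- combined gallons-per-inch and dollars-per-gallon factor
    """
    return rainfall_in * non_arable_sqft * rate

# Illustrative only: 10 inches of rain on 1,000 sq ft of non-arable land.
print(round(water_quality_cost(10, 1_000), 2))  # 18.0 dollars per year
```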
+
+# 5.3.2 Air Quality $(A)$
+
+Ecosystems contribute useful chemicals to, and extract harmful chemicals from, the atmosphere, influencing air quality. The economic cost of this ecosystem service can be quantified and integrated into our model in terms of dollars per ton of carbon that the environment could have removed from the atmosphere. According to NC State University: College of Agriculture and Life Sciences, an average-aged tree can absorb approximately 48 pounds of carbon dioxide per year and can sequester 1 ton of carbon after 40 years [29]. As of 2015, the cost of offsetting carbon dioxide is \$3.30 per tonne [30]. The economic cost of air quality can then be calculated by multiplying the amount of carbon dioxide that an ecosystem absorbs, specifically by trees, by the cost of offsetting carbon dioxide.
+
+Project 1: According to the University of Maryland, the average number of trees per acre in woodlands is 500 [31]. Lake Redding Estates, CA, was classified as woodlands under the provisioning section for Food and Fiber, so we assume that a metric of 500 trees per acre in this potential neighborhood is a fair estimate. The total economic cost of clearing out three acres of land totals to \$107.77 per year, or \$8,082.75 over 75 years.
+
+Project 2: Since the location in Valdosta, GA, was also classified as woodlands, we also assume an average of 500 trees per acre throughout the projected Six Flags location. Thus, the total economic cost of clearing out two hundred acres of land totals to \$7,184.91 per year, or \$1,077,736.50 over 150 years.
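The yearly figures above can be reproduced with a short calculation; the 2,204.62 pounds-per-metric-tonne conversion is our assumption, since [30] prices carbon per tonne.

```python
LB_PER_TONNE = 2204.62    # pounds per metric tonne (assumed conversion)
CO2_LB_PER_TREE = 48      # absorption per average-aged tree per year [29]
OFFSET_PER_TONNE = 3.30   # 2015 cost of offsetting CO2, USD/tonne [30]
TREES_PER_ACRE = 500      # woodland tree density [31]

def air_quality_cost(acres):
    """Yearly cost A of lost CO2 absorption when clearing woodland."""
    co2_lb = acres * TREES_PER_ACRE * CO2_LB_PER_TREE
    return co2_lb / LB_PER_TONNE * OFFSET_PER_TONNE

print(round(air_quality_cost(3), 2))    # Project 1: ~107.77 per year
print(round(air_quality_cost(200), 2))  # Project 2: ~7184.91 per year
```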
+
+# 5.4 Cultural Ecosystem Services
+
+# 5.4.1 Eco-Tourism $(T)$
+
+Since the value of an ecosystem belongs to everyone, we can deduce that everyone has a claim to the utility of enjoying and spending time in an ecosystem. To the degree that a land-use project is expected to destroy this public good, we believe this cost has a place in our model. There are two approaches to calculating this cost: 1) consider past eco-tourism expenditures as revealed preferences, or 2) calculate the expected future QALYs lost to the land-use project and convert this into the cost to produce the same amount of QALYs. The latter approach can be modeled as:
+
+$$
+T = P \cdot \frac {A_m}{60 \cdot 24 \cdot 365} \cdot W_f \cdot Q \tag {5.2}
+$$
+
+where $T$ represents the value of eco-tourism, $P$ represents the expected number of people per year to visit an ecosystem should a land-use project not be built, $A_{m}$ is the average number of minutes each person spends there per year, $W_{f}$ is the well-being factor (expected relative quality of a minute), and $Q$ the cost of a QALY.
+
+Project 1: This is not an area especially prone to eco-tourism, so our team estimates: $P$ to be 50 (primarily people from the neighborhood), $A_{m}$ to be 720 minutes per year (based on an estimate of twelve 1-hour outings), $W_f$ to be 1.5 (someone would equally prefer to spend 2 hours there or 3 hours of average living), and the cost of a QALY to be \$9,500 [9]. Thus, the estimated economic cost totals to \$976 per year, or \$73,200 over 75 years.
+
+Project 2: As the land is in a rural area and does not appear to have hiking trails, the estimated economic cost totals to \$39 per year, or \$5,850 over 150 years.
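Formula (5.2) with the Project 1 estimates can be checked numerically:

```python
def eco_tourism_cost(visitors, minutes_per_year, wellbeing, qaly_cost):
    """Yearly eco-tourism value T per formula (5.2).

    Converts the minutes each visitor spends in the ecosystem into a fraction
    of a year, weights by the well-being factor, and prices it at a QALY.
    """
    minutes_in_year = 60 * 24 * 365
    return visitors * (minutes_per_year / minutes_in_year) * wellbeing * qaly_cost

# Project 1 estimates: 50 visitors, 720 minutes each, W_f = 1.5, QALY = $9,500.
print(round(eco_tourism_cost(50, 720, 1.5, 9500)))  # 976 per year
```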
+
+# 5.5 Further Variables $(\epsilon)$
+
+This list of variables is non-exhaustive. Just as each variable had to be applied in a somewhat different manner in Project 1 and Project 2, each individual case is different. While we broadly see these variables as the most applicable and important factors to consider when evaluating the environmental cost of land-use projects, we acknowledge that there are instances where some might be unnecessary, or where an important factor is not included. For instance, a land-use project may take place in a barren desert, making consideration of the provisioning variables unnecessary. In another instance, there could be wildlife that a certain religious group considers sacred. It is ultimately up to those who use our model to decide what is applicable to their area, and if they find an area-specific variable that must be considered, they may reference our framework for variable creation.
+
+# 5.6 ES Valuation Model
+
+The final ecological services valuation model is composed of broader terms that all share the same units (USD). This section clarifies the composition of each term through various levels of depth. At the broadest level, the final model is as follows:
+
+$$
+V = t (E - r + \epsilon)
+$$
+
+where $V$ is the total estimated economic cost of the land-use project over its life span in years, $t$ is the expected life span of the land-use project in years, $E$ is the sum of the economic benefit gained from all ES per year, $r$ is the total revaluation loss of all assets (based on periodic impairment tests) per year, and $\epsilon$ is the unaccounted economic benefit per year of other ES not considered by our individual model. The variable $E$ can be examined more closely by defining the ES categories:
+
+$$
+E = E _ {P} + E _ {R} + E _ {C}
+$$
+
+where $E_P$ , $E_R$ , and $E_C$ are the sum of the economic benefit gained from provisioning, regulating, and cultural ES, respectively. At the most in-depth level of our model, each ES category is divided into individual components:
+
+$$
+E _ {P} = F + G \qquad E _ {R} = W + A \qquad E _ {C} = T
+$$
+
+where $F$ represents Food and Fiber, $G$ represents Genetic Resources, $W$ represents Water Quality, $A$ represents Air Quality, and $T$ represents Eco-Tourism.
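A minimal sketch of the full composition, with entirely hypothetical per-year figures (the model itself supplies no defaults):

```python
def es_valuation(t, provisioning, regulating, cultural, r=0.0, eps=0.0):
    """Total cost V = t * (E - r + eps), where E = E_P + E_R + E_C.

    provisioning, regulating, cultural -- per-year sums F+G, W+A, and T
    r   -- yearly revaluation loss of assets
    eps -- yearly benefit of further ES outside the individual model
    """
    E = provisioning + regulating + cultural
    return t * (E - r + eps)

# Hypothetical project: 75-year span, $100/yr provisioning, $900/yr regulating,
# $1,000/yr cultural, no revaluation loss or further ES.
print(es_valuation(75, 100, 900, 1000))  # 150000
```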
+
+# 5.7 Project Results
+
+Although our ES valuation algorithm is applied accurately to the two constructed scenarios (i.e. Project 1 and Project 2), the resulting economic cost of each project includes a certain amount of variability and error. Note that there was no estimated economic cost for Genetic Resources $G$, which has the potential to drastically affect the total economic cost of each project. The results of the two projects are as follows:
+
+Project 1: $V = \$143,612.25$
+
+Project 2: $V = \$1,825,764$
+
+The results of these two projects provide a general insight into the potential economic costs of small and large scale land-use projects when ES are considered. Two of the most influential factors appear to be the size of the land-use project and the type of ecosystem where the project is located. The land-use project's negative impact on Air Quality $A$ and Water Quality $W$ ultimately depends on the ecosystem type, suggesting that planners and managers can be strategic when deciding on a location for their project.
+
+# 5.8 Sensitivity Analysis
+
+As our model is primarily composed of seven variables added or subtracted together, and there are reliable methods of calculating these variables, it is unlikely that different evaluators will yield wildly different results. The variable most capable of influencing cost, time, is also relatively easy to get right by looking at past data. Further, tax structures may be created which are flexible under variable-length land-use projects.
+
+# 5.9 Limitations of Our Model
+
+Our model does not fully encompass exogenous factors that alter the condition of ecological services, such as climate change, invasive species, or wildfires. This is because attempting to factor such unpredictable and complex events into our model would result in an unacceptable amount of variance.
+
+Specifically, we excluded ornamental factors, pollination, regulation of human diseases, and storm protection from our model. These variables, while important, are difficult to calculate for most small and large scale land-use projects. Should they become calculable, however, our model, as a cost function, allows them to be added.
+
+# 6 Counterarguments
+
+# 6.1 Critiques of Valuation Based on Restitution
+
+For many cases, the cost to destroy an ES, which is a public good, can best be thought of as equivalent to the cost to add that service. For example, since it costs \$3.33 to remove one ton of carbon from the atmosphere [30], we ought to tax companies for adding one ton so that we can make up for the cost. However, a fair critique of this valuation method is that once certain ecosystem problems become bad enough, we reach a point where we not only need to pay \$3.33 to reduce one ton of carbon, but also need to prevent 1 ton of carbon from being placed into the atmosphere; we can't afford any more damage. The honest way to think about this, however, is that the cost of taxes should no longer be a one-to-one ratio, but a greater ratio. In other words, there has to be a tax-price that can be placed upon a plot of land such that even the most staunch environmentalists would hope that a corporation would buy the rights to exploit that plot of land, because the money could reliably produce more environmental services than are destroyed.
+
+# 6.2 Intractability of Probability and Utility Estimates
+
+Since our model is an algorithm which requires evaluators to determine the values of many variables themselves, it may seem that calculating the variables is sometimes computationally intractable. This is not the case for the practical application of our model, as even when little data is available, we have powerful tools for calculating both probabilities and utilities at our disposal:
+
+# 6.2.1 Estimating Probabilities: Prediction Markets
+
+For many land-use projects, we have lots of repetitive past data which allows us to extrapolate the probability distribution of possible outcomes of similar future land-use projects. For others, we have very little or no data, and are unable to predict future outcomes well with traditional statistical models. Based on the work of economist Robin Hanson, we have another tool to estimate the probabilities of various outcomes of land-use projects: prediction markets. These are markets whose primary purpose is to aggregate information rather than to entertain or hedge risk [32]. For the largest, most potentially impactful land-use projects, ES evaluators have the option of establishing a prediction market to pool the information and expertise of many environmental experts. They merely need to define an array of specific possible outcomes that they are curious about in an information market, and, once betting has taken place over several days, they may use the market prices to infer the probabilities of the different outcomes. These probabilities may then be used in their model.
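As a sketch, if contracts on a set of mutually exclusive outcomes trade at prices between 0 and 1, the normalized prices can be read as probabilities; the outcome names and prices below are hypothetical.

```python
def implied_probabilities(prices):
    """Normalize contract prices on mutually exclusive outcomes into
    probabilities that sum to 1, removing any bookmaker margin."""
    total = sum(prices.values())
    return {outcome: p / total for outcome, p in prices.items()}

# Hypothetical market on a land-use project's impact on a local aquifer.
prices = {"no measurable impact": 0.55, "moderate depletion": 0.30,
          "severe depletion": 0.20}
probs = implied_probabilities(prices)
print(round(probs["severe depletion"], 2))  # 0.19
```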
+
+# 6.2.2 Fermi Estimation
+
+As our model is a general algorithm for evaluators to use in a variety of different ecosystems, evaluators have no choice but to make personal estimations for several of the cost variables. For some variables, depending on the availability of data and the viability of collecting new data, Fermi estimation is the only practical means to accomplish this. This involves making justified guesses about quantities and their variance, and it tends to be surprisingly accurate, usually within an order of magnitude.
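A Fermi estimate decomposes an unknown quantity into factors that are each easier to guess. For instance, the yearly visitor count $P$ from the eco-tourism section could be sketched as follows; every factor below is a hypothetical guess, not data.

```python
# Fermi estimate of P, the yearly visitors to a small woodland:
# decompose into factors that are each easy to bound within a factor of ~3.
households_nearby = 100    # guess: homes within walking distance
visiting_fraction = 0.25   # guess: share of households that ever visit
visits_per_year = 2        # guess: outings per visiting household per year

visitors_per_year = households_nearby * visiting_fraction * visits_per_year
print(visitors_per_year)  # 50.0
```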
+
+# 7 Implications of our Model
+
+# 7.1 A More Representative Cost
+
+The economic Shareholder Value Theory shows precisely why a company or organization would be interested in disregarding the environmental cost of its actions. The theory states that corporate managers should act exclusively in the economic interests of shareholders [35]. Therefore, if businesses and economic managers can take advantage of "the commons" to achieve a desired business objective (such as building more housing or a new amusement park) at the lowest cost possible, it would behoove them to do so. This can most easily be seen throughout the early history of American industrialization, in which businesses, to give one of numerous possible examples, would dump waste into public waters without cost or repercussion. The U.S. Congress passed a series of legislation in the 1960s and 1970s to curtail such exploitation of the commons, including the creation of the Environmental Protection Agency [36].
+
+In order to curtail this innate tendency of commercial managers to exploit ecological services that belong to all people, we suggest that municipal, state, and national governments use this model to create tax policies that account for the degradation of public environmental services. The effect of this would be twofold. First, governments would be able to use the tax revenue to replace what was degraded, thereby maintaining the welfare of their citizenry. Second, this more accurate representation of cost would create an incentive structure that encourages economic entities to have less negative environmental impact. For example, say a certain business was planning a land-use project and could complete it in two different ways. One way would cost \$2,000 without a tax, but would do a modeled \$500 in environmental damage. The other would cost \$2,200 without a tax, but would do only \$200 of damage. If there were no tax in place, the company's managers would be incentivized to choose the first option. However, if the tax were in place, they would be incentivized to choose the second, less damaging option. This new incentive structure, therefore, works within economic theory to preemptively decrease environmental damage.
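The incentive flip in the example above is just arithmetic:

```python
def cheaper_option(options, tax=False):
    """Return the option with the lowest total cost, where the modeled
    environmental damage is only charged when the tax is in place."""
    return min(options, key=lambda o: o["cost"] + (o["damage"] if tax else 0))

options = [{"name": "first", "cost": 2000, "damage": 500},
           {"name": "second", "cost": 2200, "damage": 200}]

print(cheaper_option(options, tax=False)["name"])  # first  (2000 < 2200)
print(cheaper_option(options, tax=True)["name"])   # second (2400 < 2500)
```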
+
+# 7.2 After Estimated Life Spans
+
+As the concept is grounded in Georgism, policy makers may find it necessary for land-use projects to be reevaluated if they exceed their estimated life spans. As can be seen in the discussion of variables, the costs of environmental degradation are not always immediate but rather accumulate with time. This means that, to achieve the greatest accuracy in environmental cost, policy-makers may find re-assessment essential. We suggest, however, that project managers be made aware of such conditions in advance.
+
+# 7.3 Evaluators Need to be Utilitarian
+
+Ultimately, there is an incentive for land-trustees to profit from the economic exploitation of their land. As the cost output produced by our model represents the negative externalities of land-use projects, in other words, the destruction of value owned by the public, evaluators must be impartial utilitarians who believe in the validity of our model. Thus, our team recommends that evaluators have no financial stake in the valuation process.
+
+# 8 Conclusion
+
+In our report, we provided a clear framework, influenced by utilitarianism and biocentrism, for how it is possible to construct a finite value of environmental services, and, therefore, how it is possible to determine the environmental cost of land use development projects. This was done in a two-fold manner. First, by determining what it would cost to completely replace the ES that the land use project would degrade. Second, if it is not possible to replace what was degraded, mainly relating to cultural ES, metrics for the cost on sentient experience were determined.
+
+We then showed how our model was developed, and applied it to both a small and large-scale land-use project. While our existing variables account for the majority of the cost of environmental degradation, the real effectiveness of our model lies in its elasticity. Much of the beauty of the global ecosystem lies in its diversity; however, this necessitates a way for such a model to be tailored to any number of vastly unique micro-ecosystems. Our model accounts for this by creating a process for the model's users to create new variables or subtract from the existing variables. In addition, the process creates a way for the model to be reassessed and changed over time. Ultimately, this allows planners and managers to create tax structures that account for the effect that land-use projects will have on the commons, both paying back the damage done to what is owned by all sentient beings and disincentivizing projects that would cause large scale environmental degradation.
+
+# 9 References
+
+[1] Hooper, C. L. (n.d.). Henry George. Retrieved January 28, 2019, from http://www.econlib.org/library/Enc/bios/George.html
+[2] Kill, J. (2015, September). *Economic Valuation and Payment for Environmental Services Recognizing Nature's Value or Pricing Nature's Destruction?* [PDF]. Heinrich Böll Foundation.
+[3] Hardin, G. (1968). The Tragedy of the Commons. Science, 162(3859), 1243-1248. doi:10.1126/science.162.3859.1243
+[4] Harmon, L. J. (2012). An Inordinate Fondness for Eukaryotic Diversity. PLoS Biology, 10(8). doi:10.1371/journal.pbio.1001382
+[5] Kates, G. (2016, August 11). Environmental Crime: The Prosecution Gap. Retrieved January 28, 2019, from https://thecrimereport.org/2014/07/14/2014-07-environmental-crime-the-prosecution-gap/
+[6] Millennium Ecosystem Assessment. (2005). Ecosystems and Human Well-Being, Synthesis [PDF]. Washington D.C.: Island Press.
+[7] Cummings, R. G., Brookshire, D. S., Schulze, W. D. (1986). Valuing Environmental Goods: A State of the Arts Assessment of the Contingent Valuation Method (B ed., Vol. I, Rep.). Washington D.C.: The Institute for Policy Research.
+[8] National Institute for Health and Care Excellence. (n.d.). Glossary. Retrieved January 28, 2019, from https://www.nice.org.uk/glossary?letter=q
+[9] Azimi NA, Welch HG. The effectiveness of cost-effectiveness analysis in containing costs. J. Gen Intern Med. 1998;13:664-9.
+[10] Mill, John Stuart (2014). Utilitarianism. Cambridge University Press.
+[11] Robin Attfield. (2012). Biocentrism and Artificial Life. *Environmental Values*, 21, 83-94.
+[12] Hughes, C. (2018, November 22). Tyler Cowen's Stubborn Attachments-A Review. Retrieved January 28, 2019, from https://quillette.com/2018/11/21/stubborn-attachments-a-review/
+[13] Current World Population. (n.d.). Retrieved January 28, 2019, from http://www.worldometers.info/world-population/
+[14] Norgaard, R. (April 2010). Ecosystem Services: From Eye-Opening Metaphor to Complexity Blinder. Ecological Economics, 69, 6, 1219-1227.
+[15] The Cost of Air Pollution: Strengthening the Economic Case for Action. (2016). The World Bank and Institute for Health Metrics and Evaluation, University of Washington, Seattle.
+[16] Desvoussges, W. Johnson, R. Dunford, R. Boyle, K. J. Hudson, S. and Wilson K. N. (1992). Measuring non-use damages using contingent valuation: experimental evaluation accuracy. Research Triangle Institute Monograph 92-1.
+[17] Sue Ogilvy, Roger Burritt, Dionne Walsh, Carl Obst, Peter Meadows, Peter Muradzikwa, Mark Eigenraam. (2018). Accounting for liabilities related to ecosystem degradation. Ecosystem Health and Sustainability, Vol 4, Iss 11, Pp 261-276 (2018),
+
+(11), 261. Retrieved from https://doi.org/10.1080/20964129.2018.1544837
+[18] Kenton, W. (2018, December 13). Operating Lease. Retrieved from https://www.investopedia.com/terms/o/operatinglease.asp
+[19] Platt, E. (2012, July 23). 7 Fascinating Facts About Six Flags. Retrieved from https://www.businessinsider.com/six-flags-facts-2012-7
+[20] Stiling, P. D. (1998). Ecology: Theories and Applications. Upper Saddle River, NJ: Prentice Hall.
+[21] Valuing ecosystem services: A shadow price for net primary production. (2007, May 03). Retrieved from https://www.sciencedirect.com/science/article/pii/S092180090700198X
+[22] Research Guides: California Environment Information Sources: Natural Communities and Habitats. (n.d.). Retrieved from http://libguides.humboldt.edu/c.php?g=303807&p=2028631
+[23] Clausen, E. (2017, September 13). Georgia's Ecosystems. Retrieved from https://www.peachdish.com/blog/WblBiR8AAEsOjnfY/georgias-ecosystems
+[24] The Links Between land-use and Groundwater. (August 2014). Global Water Partnership. Perspectives Paper.
+[25] Our Water Supply. (n.d.). City of Redding Public Works. Retrieved January 27, 2019, from https://www.cityofredding.org/departments/public-works/public-works-utilities/water-utility/water-supply.
+[26] Rogers, Callie. (May 2008). Economic Costs of Conventional Surface-Water Treatment: A Case Study of the McAllen Northwest Facility.
+[27] Redding Weather Averages. (n.d.). U.S. Climate Data. Retrieved January 27, 2019, from https://www.usclimatedata.com/climate/redding/california/united-states/usca0922.
+[28] Valdosta Weather Averages. (n.d.). U.S. Climate Data. Retrieved January 27, 2019, from https://www.usclimatedata.com/climate/valdosta/georgia/united-states/usga1253.
+[29] NC State University: College of Agriculture and Life Sciences. (n.d.). Tree Facts. Retrieved from https://projects.ncsu.edu/project/treesofstrength/treeefact.html
+[30] Hamrick, K. Goldstein, A. (2016, May). Raising Ambition: State of the Voluntary Carbon Markets 2016. *Ecosystem Marketplace*. Retrieved January 25, 2019, from https://www.forest-trends.org/publications-raising-ambition/.
+[31] Stewart, N., Dawson, N. (2013). Forest Thinning: A Landowner's Tool for Healthy Woods. Retrieved from http://extension.umd.edu/
+[32] Hanson, R. (2003). Combinatorial Information Market Design. Information Systems Frontiers, 5(1), 107-119. Retrieved January 28, 2019, from http://mason.gmu.edu/~rhanson/combobet.pdf
+[33] Hanson, R. (1996, June 12). Idea Futures. Retrieved January 28, 2019, from http://econfaculty.gmu.edu/hanson/idealfutures.html
+
+[34] Muehlhauser, L. (2013, April 11). Fermi Estimates. Retrieved January 28, 2019, from https://www.lesswrong.com/posts/PsEppdvgRisz5xAHG/fermi-estimates
+[35] Loderer, C., Roth, L., Waelchli, U. Joerg. (2010). Shareholder Value: Principles, Declarations, and Actions. *Financial Management*, 39, 5-32.
+[36] EPA History. (n.d.). The Environmental Protection Agency. Retrieved January 27, 2019, from https://www.epa.gov/history.
\ No newline at end of file
diff --git a/MCM/2019/E/1926224/1926224.md b/MCM/2019/E/1926224/1926224.md
new file mode 100644
index 0000000000000000000000000000000000000000..85144f7eedd828eb72c19f7ed85c4a8c693616cf
--- /dev/null
+++ b/MCM/2019/E/1926224/1926224.md
@@ -0,0 +1,296 @@
+# Summary
+
+We create an ecosystem service valuation model to understand the true cost of land use projects by modeling the value of the unaffected ecosystem services and the extent to which they are impacted by the land use development. Due to these considerations, the model is most capable of evaluating the value of the ecosystem services that are most likely to be damaged by land development. We achieve this by considering variables from the land use project itself and variables from the location the project will be built in.
+
+The variables we consider for the project are the area and how eco-friendly it is. The variables we consider for the environment of the projects are the biome, its proximity to urban centers, precipitation, cost of energy in the region, and canopy coverage.
+
+When evaluating the ecosystem services themselves, we divided the type of services into two broad categories: direct and indirect use services. We draw upon a variety of well-established methods for valuation including, but not limited to: market-based valuation, the replacement cost method, avoided costs, and benefit transfer. We also utilize two data sets: The Economics of Ecosystems and Biodiversity Valuation Database (TEEB) and Emergy Society's Database.
+
+Our model is tested on six different case studies, finding the total monetary cost of the ecosystem services affected by land use projects:
+
+| Project | Ecological Cost (USD) |
| Road construction in Cairo, Egypt | $219 |
| Housing in Washington, US | $502 |
| Facebook MPK20 in CA, US | $19,110 |
| Road construction in Hobart, Australia | $1.7 million |
| Via Verde Pipeline in Puerto Rico | $642 million |
| Nicaragua Canal Project | $3.16 billion |
+
+Table 1: Case studies.
+
+Finally, given that our model is dynamic, we project our model as a function of time into the future and perform a sensitivity analysis by varying our initial parameters. Our model is robust to reasonable perturbations to within an order of magnitude.
+
+# Is this a monetary evaluation of ecosystem services?
+
+ICM Contest Question E
+
+Team # 1926224
+
+January 28, 2019
+
+# Contents
+
+1 Introduction
+1.1 Definitions
+2 Assumptions
+3 Model
+3.1 Model Variables
+4 Case Studies
+4.1 Housing in Washington, United States
+4.2 Facebook MPK Building 20 in Menlo Park, United States
+4.3 Road construction in Hobart, Australia
+4.4 Road construction in Cairo, Egypt
+4.5 The proposed Vía Verde Pipeline project in Puerto Rico
+4.6 The proposed 2013 Nicaragua Canal project between the Pacific and Atlantic Oceans
+5 Conclusion
+5.1 Future Projections
+5.2 Sensitivity Analysis
+5.3 Strengths and Weaknesses
+
+# 1 Introduction
+
+Our task is to create a valuation model of ecological services to quantify the economic costs of environmental degradation caused by land use development. We model land use development projects of varying sizes and in different locations. In order to evaluate the effectiveness and implications of our model, we perform sensitivity analysis and project our model into the future.
+
+# 1.1 Definitions
+
+# 1. Ecological Services
+
+An ecological (or ecosystem) service is any service provided by an ecosystem which could be beneficial to humans. Ecological services can be categorized into use services (those which can be directly or indirectly used by humans) and non-use services (those which cannot be used by humans); controversy often arises with non-use ecological services, as it is contentious to place a price on that which offers no use value. We choose to consider non-use ecological services as "subservices"$^{27}$.
+
+The ecological services we consider include but are not limited to: carbon sequestration, water filtration, flood prevention, erosion prevention, recreation, biodiversity protection, fire prevention, timber, fuel wood and charcoal, eco-tourism, micro-climate regulation, biochemicals, natural irrigation, plants and vegetable food, hydro-electricity, deposition of nutrients, gas regulation, soil formation, cultural use, drainage, and science/research.
+
+# 2. Valuation
+
+The valuation of a given service is the monetary value assigned to that service. Since the value of a service must be greater than or equal to its price for consumers to purchase it, any monetary estimation of an ecological service will underestimate the true value of the service.
+
+# 3. Direct Use
+
+Direct Use services describe measurable services produced by the ecosystem which directly benefit humans, such as carbon sequestration and ground water recharge.
+
+# 4. Indirect Use
+
+Indirect use services are services that don't directly benefit humans, but augment the benefits of other direct use services, e.g. biodiversity. Since these are difficult to measure, this often results in calculating their demand-side valuation, i.e. the value the ecosystem service provides for humans. We use The Economics of Ecosystems and Biodiversity (TEEB) Database$^{27}$ as the source for these values, a compiled database of ecosystem service values from many ecosystem valuation studies. The values used for the ecosystem services were calculated based on three well-established methods for ecosystem valuation: benefit transfer, direct market pricing, and replacement cost techniques.
+
+# 5. Biome
+
+The biome is the naturally occurring flora and fauna occupying a habitat and can be broadly categorized into terrestrial and marine$^{17}$, although we only consider terrestrial.
+
+The biome types we consider are: tropical forests, inland wetlands, coastal wetlands, cultivated, coral reefs, fresh water, coastal, multiple ecosystems, woodlands, deserts, forests [temperate and boreal], grasslands, urban, and marine.
+
+# 2 Assumptions
+
+- Clean water is accessible and uncontaminated water sources vary little between each other. Since water can be piped or trucked in, we assume that it is accessible and in our model we consider the distance to clean water.
+- Areas in the same ecosystem classification are equally productive. Even in ecosystems that are the same classification there can be huge variety. We assume that each biome is relatively uniform throughout so that grouping by biome is sufficient to differentiate between projects.
+- Any impact scales linearly. An increase in the area linearly affects the factors used to calculate the monetary representation of the ecological cost. For example, if one tree sequesters $N \, kg$ of $CO_{2}$ , then two trees sequester $2N \, kg$ of $CO_{2}$ .
+- Energy costs accurately reflect the value of ecological services, and accurately translate the cost of those services in different regions with differing energy costs. We translated some ecological costs into monetary value by calculating the approximated energy of ecological services and using the energy cost in the region. We are assuming that it is possible to estimate a conversion factor.
+- There is a non-linear, inversely proportional relationship between the distance from an urban area and the value of an ecosystem service$^{25,28}$. Therefore, we assume a relationship between urbanization and ecosystem services. This means that access to clean water, biodiversity, and other similar services are affected by urban proximity.
+
+# 3 Model
+
+# 3.1 Model Variables
+
+We used different methods to evaluate the monetary cost of different ecological services depending on the service. We use the equations described below for carbon sequestration and for water filtration and purification. Where we cannot estimate the direct cost of a service, we use costs from the TEEB Database$^{27}$.
+
+$$
+\left(D + \sum_{i} S_{i}\right)\left(1 + P_{\text{urban}}\right)(1 - E) \tag {1}
+$$
+
+In order to avoid double-counting, we discard any values from the TEEB data set which deal with carbon sequestration, water purification, water filtration, and any ambiguities related to water or carbon dioxide purification.
+
+The urban proximity index and the eco-friendly index both range from zero to one and are weighting factors which affect the final price.
+
+For the urban proximity index, a value of zero corresponds to a location which is very close to an urban setting, defined as $5\,\mathrm{km}$ or less. A value of one corresponds to a rural location far from an urban environment, at least $50\,\mathrm{km}$ away. This is because urban areas have irrigation services and other utilities already in place; in rural settings the landscape needs to be torn apart more to extract the necessary resources, which leads to more damage to the ecosystem services that the land provides. We use a logarithmic scale because previous literature indicates that this relationship is non-linear$^{28}$.
+
+For the eco-friendly index, a value of zero corresponds to a company which puts no effort into reducing its carbon footprint or using other environmentally friendly practices. A value of one corresponds to a hypothetical company which is able to live symbiotically in the ecosystem without damaging any of the services, fully operating within the parameters of the ecosystem. For example, the Apple Park built in California, US would have a relatively high eco-friendly index since it is the world's largest naturally ventilated building, with 7,000 trees planted around campus and $100\%$ renewable energy powering the campus$^{19}$. For the six case studies we estimate an index value. In practice, before a construction project is started, the company can use an index for determining this, such as the 2017 State of Green Business Index$^{8}$.
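A sketch of formula (1); the logarithmic form of $P_{\text{urban}}$ below (0 at 5 km, 1 at 50 km) is our assumed interpolation, since the text only fixes the endpoints and states that the scale is logarithmic.

```python
import math

def p_urban(distance_km):
    """Assumed logarithmic urban proximity index: 0 at <=5 km, 1 at >=50 km."""
    if distance_km <= 5:
        return 0.0
    if distance_km >= 50:
        return 1.0
    return math.log10(distance_km / 5)  # log10(50/5) = 1 at the far endpoint

def ecological_cost(direct, teeb_services, distance_km, eco_friendly):
    """Formula (1): (D + sum of S_i) * (1 + P_urban) * (1 - E)."""
    return (direct + sum(teeb_services)) * (1 + p_urban(distance_km)) * (1 - eco_friendly)

# Hypothetical project: D = $1,000, two TEEB services worth $200 and $300,
# 50 km from a city, eco-friendly index 0.5.
print(ecological_cost(1000, [200, 300], 50, 0.5))  # 1500.0
```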
+
+
+Figure 1: Resource Breakdown by Ecosystem Service. The most relevant ecosystem service categories by number of occurrences, not broken into sub-services. Any service which accounted for less than $3\%$ of the resources was omitted from the graph but not from the analysis.
+
+| Symbol | Definition |
| D(C,W) | The monetary value (USD) of Direct Use factors from an ecological area using energy calculations. Takes in C and W. |
| C(A,F%,E$) | Monetary value (USD) of carbon taken out of the atmosphere by plants, with respect to A, F%, and E$ (USD). |
| W(Pw,A,E$) | Monetary value (USD) of water filtered by the soil, with respect to Pw, A, E$ (USD). |
| A | Area of the land use project (m2). |
| F% | Canopy Percentage: The percent that foliage covers one square meter of land (%). |
| E$ | Monetary value of energy varying depending on location (USD/Joules). |
| u | Urban proximity (km). |
| Purban(u) | An index of urban proximity from 0 to 1, with 0 being near an urban area and 1 being in a rural/remote area. |
| Pw | precipitation (mm/year) |
| b | Biome, with data taken from the TEEB Database [27] |
| S | A Python list of the ecosystem services as described in the TEEB data set. The $i$-th entry, $S_i$, is one ecosystem service variable, for example raw materials. |
| Constants | Value |
| EC | Energy of carbon per square meter of canopy cover (117 J/m2). |
| p | Energy efficiency of photosynthesis: 26% [15]. |
| t | Time (1 year). |
| ECO2 | Energy of CO2 in Joules per pound of CO2: 5.045x10^6 J/lb CO2 [10]. |
| ET | Pounds of CO2 sequestered per square meter: 48 lbs CO2/m2 [15]. |
| Em | Solar Transformity: the amount of emergy required to produce 1 Joule of clean groundwater from soil due to rain-fall. 22.83 J/g |
| ρH2O | Density of water: 997 kg/m3 |
+
+Table 2: Symbols, definitions, and constants.
+
+We use a summation model with a time-step of one year for the use of ecological features[22].
+
+$$
+D (C, W) = C + W \tag {2}
+$$
+
+Here we add together the monetary cost of the energy used by carbon sequestration and the monetary cost of the energy used to filter water. This is the total cost of the Direct Use services.
+
+$$
+E _ {C} = E _ {C O _ {2}} E _ {T} p \tag {3}
+$$
+
+This equation finds the energy of carbon sequestration per square meter of canopy cover. This is calculated by multiplying the energy of carbon sequestration per pound of $\mathrm{CO}_{2}$ by the conversion factor to turn this value into energy of carbon sequestration per meter squared. Then this is multiplied by the energy efficiency of photosynthesis.
+
+$$
+C = E_{\$}E_{C}F_{\%}A \tag{4}
+$$
+
+This equation calculates the cost of the energy used by carbon sequestration. The energy of carbon is calculated by taking the total area and multiplying it by canopy cover percent in order to estimate the total area in which photosynthesis is occurring. We then multiply the total canopy cover by the energy produced per square meter of canopy cover to find the total energy of carbon sequestration in the area, and convert that into US dollars by using the cost of energy relative to the location of the land use project.
+
+$$
+W = P _ {w} A \left(\rho_ {H _ {2} O}\right) E _ {m} E _ {\$} \tag {5}
+$$
+
+This equation calculates the cost of the energy used by the soil to filter water. By multiplying precipitation per square meter and area of the land use project we find the total precipitation in the area. We then multiply by the solar transformity to end up with the total amount of energy required to clean the water from precipitation. We can convert this into US dollars by using the cost of energy relative to the location of the land use project.
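+The chain from Equation (3) through Equation (2) can be sketched as a short Python function. The constant values follow Table 2; the unit conversions and the `energy_price` input are our own assumptions, since the paper leaves the exact units of $E_\$$ implicit.

```python
# Sketch of Equations (2)-(5); constants from Table 2.
E_CO2 = 5.045e6    # J per lb of CO2 sequestered [Table 2]
E_T = 48.0         # lbs of CO2 per m^2 of canopy per year [Table 2]
P_EFF = 0.26       # energy efficiency of photosynthesis [Table 2]
EM = 22.83         # solar transformity, J per g of filtered water [Table 2]
RHO_H2O = 997.0    # density of water, kg/m^3 [Table 2]

def direct_use_value(area_m2, canopy_frac, precip_mm, energy_price):
    """Return (D, C, W) in USD/year.

    area_m2      -- A, project area (m^2)
    canopy_frac  -- F%, canopy cover as a fraction in [0, 1]
    precip_mm    -- Pw, annual precipitation (mm/year)
    energy_price -- E$, local price of energy (USD/J); an assumed input
    """
    e_c = E_CO2 * E_T * P_EFF                        # Eq. (3): J per m^2 of canopy
    c = energy_price * e_c * canopy_frac * area_m2   # Eq. (4): carbon value
    # Eq. (5): mass of annual precipitation in grams, times transformity:
    # (precip_mm / 1000 m) * area_m2 * (997 kg/m^3) * (1000 g/kg)
    water_g = precip_mm * area_m2 * RHO_H2O
    w = water_g * EM * energy_price
    return c + w, c, w                               # Eq. (2): D = C + W
```

The relative magnitudes across projects, not the absolute dollar figures, are the point of this sketch.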
+
+$$
+P_{urban}(u) = \log_{10}\left(\frac{u}{5}\right) \tag{6}
+$$
+
+This equation calculates the index for urban proximity; it applies for distances between 5 and 50 kilometers. If the environment is within 5 kilometers of an urban area, the index is taken to be 0, and if it is farther than 50 kilometers the location is considered rural and given an index of 1. The logarithm is taken base 10 so that the index reaches exactly 1 at 50 kilometers; the logarithmic scale, again, reflects the non-linear relationship between urbanization and ecosystem services [25].
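+A minimal implementation of this piecewise index, assuming the logarithm is base 10 so that the index reaches 1 at 50 km:

```python
import math

def urban_proximity_index(distance_km):
    """P_urban from Eq. (6), clamped to [0, 1].

    0 within 5 km of an urban area, 1 beyond 50 km, and a base-10
    logarithmic ramp in between (log10(50/5) = 1).
    """
    if distance_km <= 5.0:
        return 0.0
    if distance_km >= 50.0:
        return 1.0
    return math.log10(distance_km / 5.0)
```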
+
+# 4 Case Studies
+
+By testing case studies, real-world examples to which our model can be applied, we can confirm that our model's results are logical and on the right scale.
+
+The following case studies are listed in order from smallest to largest area $(m^2)$ .
+
# 4.1 Housing in Washington, United States $^{14,3,24,23}$
+
+The impact of individual housing on ecosystem services is generally hard to measure. In most cases, housing is built as part of larger projects that disrupt large areas of an ecosystem; in this case study, however, we model a theoretical housing project in rural Washington state.
+
+Project Cost without environmental services considered: $300,000
+
+Environmental services cost per year: $502
+
+Combined cost after the first year: $300,502
+
+Percentage increase: $0.14\%$
+
+This project's large distance from the urban environment and its location in a high-canopy-coverage biome were expected to be very influential in the price inflation for this project, yet its small size and its eco-friendly index resulted in a very low environmental systems cost compared to its total monetary cost. In this case the damaged ecosystem services would not be significant in project consideration.
+
# 4.2 Facebook MPK Building 20 in Menlo Park, United States $^{13,9,4,5}$
+
+Large companies are constantly building or remodeling their headquarters to accommodate their significant growth. Facebook's MPK Building 20 expansion serves as a good example of projects such as these.
+
+| Project Cost without environmental services considered: | $269 million |
| Environmental services cost per year: | $19,110 |
| Combined cost after the first year: | $269 million |
| Percentage increase: | 0.007% |
+
+The vast majority of large company headquarters are found in urbanized areas in which local ecosystem services have already been damaged so even when considering the large scale of such a project its environmental damage is relatively limited. The ecosystem services cost would likely not influence the development of the project.
+
# 4.3 Road construction in Hobart, Australia $^{6,12,18,1}$
+
+Hobart is the regional capital of Tasmania, Australia. Unlike the vast majority of Australia, Tasmania's biome is classified as a rain forest, with unique flora and fauna put at risk by urban development. Average road project costs in Australia are high, and the value of the ecosystem services in such a diverse part of Australia is likely to increase the project cost significantly.
+
+Project Cost without environmental services considered: $73 million
+
+Environmental services cost per year: $1.7 million
+
+Combined cost after the first year: $74 million
+
+Percentage increase: 2.3%
+
+The cost of ecosystem services would be significant in this land use project as the roads would affect a diverse and bountiful ecosystem. As this project is being built relatively far away from the urban centers it would impact pristine natural environments with a high value for ecosystem services.
+
# 4.4 Road construction in Cairo, Egypt $^{20,7}$
+
+Cairo is the capital of Egypt and considered an economic and cultural center for the entire region, yet it is also known for its abysmal traffic. Road project costs in Egypt are relatively low compared with other parts of the world, and the desert environment of Egypt suggests a project of this type should be generally cheaper both in construction cost and in possible damage to ecosystem services.
+
+Project Cost without environmental services considered: $21 million
+
+Environmental services cost per year: $218
+
+Combined cost after the first year: $21 million
+
+Percentage increase: $0.000\%$
+
+Considering this project is being built in a desert the damages to ecosystem services would be low. The desert environment of Egypt does not provide many ecosystem services and in truth it does not need to. Considering local characteristics the most significant part of the model that would be affected by the construction would be the direct use of water recharge, yet due to Egypt's significantly low precipitation this ecosystem service does not get much use to begin with.
+
# 4.5 The proposed Via Verde Pipeline project in Puerto Rico $^{21,2,26}$
+
+The Via Verde Pipeline Project was proposed in 2009 as a landmark energy project to satisfy Puerto Rico's energy needs. The project was never completed after it proved controversial due to its planned route, which would have covered around seven square kilometers of Puerto Rico's untouched rain forest. Since this would potentially place many local communities and endangered species at risk, the monetary ecological cost would be significant.
+
+Project Cost without environmental services considered: $800 million
+
+Environmental services cost per year: $642 million
+
+Combined cost after the first year: $1.442 billion
+
+Percentage increase: 44.5%
+
+The Via Verde Pipeline would significantly damage the vast ecosystem services that the ecosystem provides to the point the project is likely not worth its cost.
+
# 4.6 The proposed 2013 Nicaragua Canal project between the Pacific and Atlantic Oceans $^{16,11}$
+
+Recently the Nicaraguan government proposed a massive project to construct a new canal to connect the Pacific and Atlantic oceans. This proved unpopular since the canal would cut through heavily wooded areas of Nicaraguan rain forest and cause massive disruption to the ecosystem.
+
+Project Cost without environmental services considered: $45 billion
+
+Environmental services cost per year: $3.16 billion
+
+Combined cost after first year: $47.16 billion
+
+Percentage increase: $6.6\%$
+
+The Nicaragua Canal project was always expected to significantly affect the environment around where it was built due to the scale of the project. While the ecosystem services cost would likely not significantly affect its construction, it still demonstrates a significant impact.
+
+# 5 Conclusion
+
+A significant goal from the onset of this project was the creation of a land ecosystem evaluation model that could be applied at a global scale to multiple different projects with different rates of impact. To achieve this goal, we developed carbon sequestration and water discharge models that take inputs valid in every environment and location. We incorporated the TEEB database as it provided a standardized data set for the different biomes our model encapsulates.
+
+By using case studies that varied in magnitude and cost, we were able to determine that our model correctly predicted the monetary cost of ecological services.
+
+# 5.1 Future Projections
+
+
+Figure 2: The change in ecosystem services valuation over time.
+
+We project our model into the future by performing simple random perturbations. We then perform a kernel density estimation on the perturbed trajectories; the resulting graphs correspond to how certain we are that our model is accurate at a time $t$ after the initial construction.
+
+The lighter shades indicate lower certainty as time goes on, since sensitivity to initial conditions grows substantially with time.
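+The projection procedure can be sketched as follows. The multiplicative noise level, horizon, and run count are illustrative assumptions, and the hand-rolled density estimator stands in for whichever kernel density routine was actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_forward(base_cost, years=30, n_runs=500, noise=0.05):
    """Simple random perturbation of a yearly ecosystem-services cost:
    a multiplicative random walk, returning an (n_runs, years) array
    of simulated cost trajectories."""
    shocks = 1.0 + noise * rng.standard_normal((n_runs, years))
    return base_cost * np.cumprod(shocks, axis=1)

def gaussian_kde_1d(samples, grid, bandwidth):
    """Minimal Gaussian kernel density estimate evaluated on `grid`.
    A flatter, wider density at later years corresponds to the lighter,
    less certain shades in Figure 2."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))
```

For example, `project_forward(502.0)` perturbs the yearly cost from the Washington housing case; the density of year-30 values is visibly more spread out than the density of year-1 values.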
+
+# 5.2 Sensitivity Analysis
+
+We perform a sensitivity analysis by varying the initial parameters of our model. We vary the eco-friendly and urban proximity indices by amounts between $-0.1$ and $0.1$. We also vary the other initial parameters of ecosystem services given by the TEEB database, as well as the carbon sequestration and water filtration parameters, with bounded random values. Our model's output, for all six of the case studies, varies only within one order of magnitude, which suggests some stability.
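+This perturbation procedure can be sketched with a toy stand-in for the full valuation model; the helper name and the toy model below are hypothetical, and the real model's parameters come from the TEEB database.

```python
import numpy as np

rng = np.random.default_rng(1)

def sensitivity_spread(model, base_params, delta=0.1, n_trials=200):
    """Re-evaluate `model` with every parameter shifted by a uniform
    random amount in [-delta, +delta] (matching the +/-0.1 used for the
    indices) and return the (min, max) of the resulting outputs."""
    base = np.asarray(base_params, dtype=float)
    outputs = [model(base + rng.uniform(-delta, delta, size=base.size))
               for _ in range(n_trials)]
    return min(outputs), max(outputs)

# Toy valuation: cost scales with both indices; a stand-in, not our model.
lo, hi = sensitivity_spread(lambda p: p[0] * p[1] * 1.0e6, [0.5, 0.5])
# hi / lo stays well under 10, i.e. within one order of magnitude
```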
+
+# 5.3 Strengths and Weaknesses
+
+# Strengths
+
+- Our constants are backed up by validation tests, which confirm our model.
+- Multiple inputs provide an accurate assessment of the ecological value of a chosen location.
+- The model integrates multiple kinds of valuation, bridging the gap between supply value models and demand value models.
+
+# Weaknesses
+
+- We do not include all possible ecological factors that could contribute to the monetary value of the ecosystem.
+- One of the assumptions, and inevitably weaknesses of our model, is that we assume that grouping by biome is sufficient to differentiate between projects. Within each biome there are differences which would affect the ecological impact of a land use project, but we do not take these into account.
+
+# References
+
+[1] Australian biomes. URL https://mainweb-v.musc.edu/cando/ausdwnun/biom.html.
+[2] National land cover database, percent tree canopy coverage - puerto rico. URL https://databasin.org/datasets/ae0acddccbc548df9396d48078ddfd64.
+[3] Electricity rates by state (jan. 8, 2019). URL https://www.chOOSEnergy.com/electricity-rates-by-state/.
+[4] Facebook campus expansion. URL https://www.menlopark.org/995/Facebook-Campus-Expansion.
+[5] Menlo park electricity rates. URL https://www.electricitylocal.com/states/california/menlo-park/.
+[6] Australian tropical savanna climate, 2001. URL http://www.blueplanetbiomes.org/savanna_australiaclim_page.htm.
+[7] Egypt forest cover, 2005. URL https://rainforests.mongabay.com/deforestation/archive/Egypt.htm.
+[8] Sustainable development goals, 2015. URL https://sustainabledevelopment.un.org/sdgs.
+[9] Facebook spends $269 million on headquarters, May 2016. URL https://greenbuildingelements.com/2016/05/12/facebook-spends-269-million-headquarters/.
+[10] Erv Evans. Tree facts. Website. URL https://projects.ncsu.edu/project/treesofstrength/treeefact.htm.
+[11] Nathaniel Parish Flannery. Nicaraguan canal mega-project faces delays and opposition, Jan 2014. URL https://www.forbes.com/sites/nathanielparishflannery/2014/01/07/nicaraguan-canal-mega-project-faces-delays-and-opposition/#5a909f115041.
+[12] Australian Government. Road construction cost and infrastructure procurement benchmarking: 2017 update, 2017. URL https://bitre.gov.au/publications/2018/files/rr148.pdf.
+
+[13] Kelsey GraeberHaving. Construction tops $1b at facebooks west campus, May 2018. URL https://www.buildzoom.com/blog/facebook-west-campus-1b.
+[14] Davey Resource Group. Urban tree canopy assessment, 2011. URL http://file.dnr.wa.gov/publications/rp_urban_bonneyLake_ca.pdf.
+[15] James Alan Bassham Hans Lambers. Photosynthesis, 2018. URL https://www.britannica.com/science/photosynthesis/Energy-efficiency-of-photosynthesis.
+[16] Anna Hochleitner. La construccin del canal interocenico en nicaragua: situacin de partida y efectos en el desarrollo nacional, 2015. URL https://library.fes.de/pdf-files/bueros/fesamcentral/12056.pdf.
+[17] S. Charles Kendeigh. Animal ecology, 1961. URL https://www.biodiversitylibrary.org/bibliography/7351#/summary.
+[18] Kristyn Maslog-Levis. First urban tree canopy cover benchmark in australia. URL https://citygreen.com/blog/first-urban-tree-canopy-cover-benchmark-in-australia/.
+[19] Andrea Miller. Apple's headquarters, facilities now powered by 100 percent renewable energy, 2018. URL https://abcnews.go.com/Technology/applest-headquarters-facilities-now-powered-100-percent-renewable/story?id=54362901.
+[20] Hanan Mohamed. Electricity minister reveals new 2018/2019 electricity prices, 2018. URL http://www.egypttoday.com/Article/3/51993/Electricity-Minister-reveals-new-2018-2019-electricity-prices.
+[21] U.S. Army Corps of Engineers. Va verde natural gas pipeline project. URL https://www.vermontlaw.edu/sites/default/files/Assets/enrlc/cases/protecting-puerto-ricos-rich-biodiversity-from-proposed-via-verde-natural-gas-pipeline/2012-01-30%20--%20UPR-Via%20Verde%20Public%20Comments%20on%20Draft%20EA.pdf.
+[22] Marco Casazza Elliot T. Campbell Biagio F. Giannetti Mark T. Brown Qing Yang, Gengyuan Liu. Development of a new framework for non-monetary accounting on ecosystem services valuation. 34, 2018.
+
+[23] Mike Rosenberg. Home prices rising faster in washington than in any other state, 2016. URL https://www.seattletimes.com/business/home-prices-rising-faster-in-washington-than-in-any-other-state/.
+[24] Tom Trimbath. Seattle houses are getting bigger while households get smaller, 2016. URL https://seattle.curbed.com/2016/7/6/12094572/seattle-houses-bigger-households.
+[25] Christopher Trisos. Urbanization effects on biodiversity, 2015. URL https://www.sesync.org/project/postdoctoral-socio-environmental-immersion-program-2015/urbanization-effects-on-biodiversity.
+[26] USDA. Forests of puerto rico, 2014. URL https://www.srs.fs.usda.gov/pubs/ru/ru_srs121.pdf.
+[27] S. Van der Ploeg and R.S. de Groot. The teeb valuation database a searchable database of 1310 estimates of monetary values of ecosystem services., 2010.
+[28] Maibritt Pedersen Zari. The importance of urban biodiversity: an ecosystem services approach, 2018. URL https://medcraveonline.com/BIJ/BIJ-02-00087.
\ No newline at end of file
diff --git a/MCM/2019/E/2019_ICM_Judges_Com2/2019_ICM_Judges_Com2.md b/MCM/2019/E/2019_ICM_Judges_Com2/2019_ICM_Judges_Com2.md
new file mode 100644
index 0000000000000000000000000000000000000000..e50b7f73cd6836131134691abe5498fa54a3446e
--- /dev/null
+++ b/MCM/2019/E/2019_ICM_Judges_Com2/2019_ICM_Judges_Com2.md
@@ -0,0 +1,233 @@
+# Judges' Commentary: Environmental Degradation
+
+Kristin Arney
+
+U.S. Military Academy
+
+West Point, NY
+
+kristin.arney@westpoint.edu
+
+Kasie Farlow
+
+Dominican College
+
+Orangeburg, NY
+
+# Background
+
+This year's environmental science problem in the Interdisciplinary Contest in Modeling $(\mathrm{ICM}^{\mathrm{TM}})$ challenged teams to explore the true costs associated with land-use development projects. On the surface, most firms consider only the monetary costs associated with the creation of new land-use projects; but we asked teams to consider also the environmental costs of changing the use of the land and the potential for environmental degradation, aspects typically disregarded by economists and industry. Teams sought to develop an ecological services valuation model to understand the true costs of land-use projects when ecosystem services are considered.
+
+Economic theory often disregards the impact of its decisions on the biosphere or assumes unlimited resources or capacity for its needs. There is a flaw in this viewpoint, and the environment is now facing the consequences.
+
+The biosphere provides many natural processes to maintain a healthy and sustainable environment for human life, which are known as ecosystem services. Examples include turning waste into food, water filtration, growing food, pollinating plants, and converting carbon dioxide into oxygen.
+
+However, whenever humans alter the ecosystem, we potentially limit or remove ecosystem services. The impact of local small-scale changes in land use, such as building a few roads, sewers, bridges, houses, or factories may
+
+seem negligible. But add to these small projects large-scale projects such as building or relocating a large corporate headquarters, building a pipeline across the country, or expanding or altering waterways for extended commercial use. Now think about the impact of many of these projects across a region, country, and the world. While individually these activities may seem inconsequential to the total ability of the biosphere's functioning potential, cumulatively they are directly impacting the biodiversity and causing environmental degradation.
+
+Traditionally, most land-use projects do not consider the impact of, or account for changes to, ecosystem services. The economic costs to mitigate negative results of land-use changes—polluted rivers, poor air quality, hazardous waste sites, poorly treated waste water, climate changes, etc.—are often not included in the plan. Is it possible to put a value on the environmental cost of land-use development projects? How would environmental degradation be accounted for in these project costs? Once ecosystem services are accounted for in the cost-benefit ratio of a project, then the true and comprehensive valuation of the project can be determined and assessed.
+
+# The Problem
+
+Your ICM team has been hired to create an ecological services valuation model to understand the true economic costs of land-use projects when ecosystem services are considered. Use your model to perform a cost-benefit analysis of land-use development projects of varying sizes, from small community-based projects to large national projects. Evaluate the effectiveness of your model based on your analyses and model design. What are the implications of your modeling on land-use project planners and managers? How might your model need to change over time?
+
+# Judges' Criteria
+
+We describe the general framework used by the judges to evaluate submissions. The judges included representatives from a diverse set of fields, including sustainability, biology, geography, applied mathematics, statistics, and engineering. Their main objective was to find and evaluate modeling that included good science and led to measurable and viable solutions. The judges were looking for papers that clearly communicated the following major components for a submission to be considered as one of the best papers:
+
+- Above all else, we wanted students to show an understanding of the complexity of the problem beyond what was provided in the problem
+
+prompt. The inclusion of a well-researched and unique introduction, with elements similar to a literature review prior to the formulation of the model, indicated to judges a level of true comprehension of the criticality of devoting time to such an endeavor.
+
+- As in all ICM problems, we expected the formulation of a model—in this case, an ecological services valuation model that shows an understanding of the true economic costs of ecosystem services in land-use projects. Teams had to identify the key factors that were important to their model and analysis as well as consider the inclusion of actual costs. Judges sought papers that included the qualitative analysis to augment a quantitative model.
+
+Ideally, judges wanted the best model to be developed, coupled with a discussion of the best proxy factors that then had to be incorporated into the model. This had to be based on data that the team had available, and not the other way around. The best papers identified what would be ideal to use, then what they could actually find, and finally how those data might not match but could still work. A team's recognition of the limitations of the modeling process showed great maturity and confidence in their research, their knowledge of ecological services, and their understanding of the impact land-use projects can have on environmental degradation.
+
+- Judges wanted teams to apply their model of ecological services to understand the true economic costs of land-use projects when ecosystem services are considered. Judges also sought the consideration of environmental degradation in project costs. Teams then were asked to include a true cost-benefit analysis of land-use development projects of varying sizes, from small community-based projects to large national projects. Judges were looking for specific examples of either existing projects or details of a proposed project for a specific area. Teams needed to find, create, or use data to test and explain their measures and models as appropriate in this section.
+
+Additionally, judges were pleased with teams that addressed the reasoning behind the projects chosen. Providing the motivation for, and explaining the critical research done on, the components of each project in the area chosen for development showed the judges a level of understanding and sophistication that set the best papers apart from the rest.
+
+- Lastly, the contest challenged teams to evaluate the effectiveness of their model based on their analyses and model design. Teams needed to discuss the applicability of their model to different locations, projects, or potential need to modify their model over time. The best teams were able to incorporate a dynamic element into their model to see the changes of the land's use over time during a project build and then after completion. Additionally, the advice to project planners and managers
+
+was a critical component to the submission and offered judges insight into a team's understanding of the greater applicability of their metric.
+
+As always, judges expected the basic elements of model formulation as well as the delineation of strengths and weaknesses of the model, the conduct of sensitivity analysis, model validation and verification, and good written communication:
+
+- Model Formulation Basics. As teams developed their model, judges expected teams to include definitions of their variables, to state reasonable and necessary assumptions and to include an explanation of the process of parameter estimations. It is critically important to identify and appropriately cite sources for data or existing models used as a baseline for constructing the team's model or for comparing it to others.
+- Strengths and Weaknesses. After modeling and analysis, judges expected discussions of the strengths and weaknesses of their model and some concluding thoughts versus just ending the paper after completing all tasks.
+
+In past years, the problem prompt had specifically required teams to detail the strengths and weaknesses of their model. This year it did not, and therefore a discussion of strengths and weaknesses became an indicator of a deeper understanding of the aspects of modeling. Judges were encouraged by teams that provided this analysis throughout the submission rather than only at the very end of the report, where it can seem like an afterthought. Evaluation in terms of strengths and weaknesses is relevant to the entire modeling process.
+
+- Sensitivity Analysis. Sensitivity analysis could have been done in a variety of ways, so judges were looking closely at the rationale behind each team's approach. At a minimum, the expectation was a revisit of early simplifying assumptions. Judges also saw teams assessing the relative impacts of different types of model improvements. There was no one right way, but teams that attempted a sensitivity analysis to determine the robustness, flexibility, or accuracy of their model demonstrated to the judges a higher level of knowledge concerning the impact and usefulness of their model.
+- Model Validation and Verification. These aspects of modeling often set a great paper apart from a merely good report. Validation is an important part of the modeling process, as it can instill confidence in results or help identify weaknesses in the model. Several papers presented a range of models from simple to complex and used a validation approach to justify the selection of one of those choices, considering the tradeoffs. Judges saw teams remove some of the indicators from their model to see whether their removal truly changed the output. Other teams ran their model against other known metrics to see if it accurately predicted success. Judges understood the time constraint that teams were under and therefore appreciated teams that had given validation consideration.
+
+- Written Communication. Every year, judges seek to highlight submissions that offer a balance of sound mathematics with well written justifications from their approach. Judges looked for implementable measures that were backed by strong research and then well explained. The strongest submissions had a clear organizational structure with equations coupled to explanations with (when appropriate) graphics to help convey complicated ideas.
+
+# Recognition of the Outstanding Papers
+
+Due to the nature of the environmental problem, teams used varying modeling techniques focusing on different factors representing ecosystem services and how to place a value on the environmental cost of land-use development projects. Teams also selected assorted projects with different scales in diverse areas throughout the world for their cost-benefit analysis. As a result, the submissions provided great innovations and excitement for the judging panel.
+
+The eight Outstanding papers demonstrated a nice array of modeling techniques and then showcased their model's capabilities to handle true costs, scalability, and multiple factors.
+
+These Outstanding papers were well-written and provided clear explanations of their modeling procedures. Some demonstrated unique and innovative approaches distinguishing themselves from other papers. Others were noteworthy for either their thoroughness of their modeling or for the significance of their results. Some provided well-thought-out, implementable recommendations to project managers, perfectly tailored to the type of project or area of implementation. We would like to congratulate the eight Outstanding papers from Problem E:
+
+- Central University of Finance and Economics, Beijing, China: "Land Counts! Better Use and Lower Cost"
+- Chongqing University, Chongqing, China: "What is the Cost of Environmental Degradation?"
+- Emory University, Atlanta, Georgia, USA: "Is This a Monetary Evaluation of Ecosystem Services?"
+- Lanzhou University of Technology, Lanzhou, Gansu, China: "Assessment of Ecological Services"
+- Nanjing University of Information Science and Technology, Nanjing, Jiangsu, China: "Ecosystem Services Matter! Sustainability is Necessary"
+
+- Renmin University of China, Beijing, China: "Take Environmental Effect into Consideration: Cost-Benefit Analysis on Land-Use Projects"
+U.S. Military Academy, West Point, New York, USA: "Ecological Services Valuation Model: Understanding the True Cost of Land-Use Projects"
+- University of Virginia, Charlottesville, Virginia, USA: "What is the Cost of Environmental Degradation?"
+
+Summaries of the eight Outstanding team papers follow.
+
+# Chongqing University, Chongqing, China: "What is the Cost of Environmental Degradation?"
+
+The team from Chongqing University had a standard base model derived from the literature but shined in their model extensions and applications. Using a derivation of the Cobb-Douglas equations, the team was able to calculate net primary production with shadow price analysis applied to four different projects.
+
+Initially, the judges were concerned that analysis of the four projects—a house, a subway, a steel mill, and a bridge—was without any context as to where these projects might occur. However, unlike other models, their model is land-type dependent. Each project was accompanied by extensive temporal cost-benefit analysis as shown in Figure 1, as well as calculations of capital and technological input costs, employee payment, maintenance costs, revenue, social benefit, and environmental degradation.
+
+
+Figure 1. Environmental degradation cost for a subway (curve rising over time, cost scale on right) and for a house (curve rising then falling, cost scale on left), from Chongqing University
+
+Judges were most impressed by this Outstanding paper's treatment of recommendations and their further discussion. After an extensive sensitivity analysis followed up by a tax discussion, the team nailed the recommendations. One of our sustainability judges lauded the team for the inclusion of all aspects of environmental impact, demonstrating a deep understanding of the complexity of the problem. Their logic was very clear and left a road map for further analysis.
+
+# Central University of Finance and Economics, Beijing, China: "Land counts! Better Use and Lower Cost"
+
+This paper received the classification of Outstanding for its readability, organization, consideration of time, and advice to project planners. This paper stood out to the judges due to its incorporation of seasonal and yearly change.
+
+The team created a model based on work of Costanza et al. [1997] and the MEA model [Millennium Ecosystem Assessment Board 2005]. While the team's model was not novel, the framework of their model, depicted in Figure 2, was clearly outlined and the discussion of their results was well written.
+
+
+Figure 2. Model framework for Central University of Finance and Economics.
+
+To test their model, the Central University team measured the value of ecological services in 14 regions of China, taking into account the impact of total land area on ecological value assessment. After testing their model, they applied it both to a large project and to a small project. Based on their results, they made thoughtful recommendations to planners and conducted a cost-benefit analysis.
+
+The judges were impressed by the team's accounting for seasonal change by considering factors such as change in temperature and rainfall throughout the year. The team had a well-written and organized paper that addressed change in time in a way not seen in other papers. For this, they received a rating of Outstanding.
+
+# Emory University, Atlanta, Georgia, USA: "Is This a Monetary Evaluation of Ecosystem Services?"
+
+This Outstanding paper pleased the judges with a straightforward description of their model and a clearly written paper. Not only did the team consider several case studies but they also incorporated the change in ecological service over time.
+
+To evaluate the cost of ecological services, the Emory team included models for carbon sequestration and water filtration and purification. They considered both an urban proximity index and an eco-friendly index. These indices corresponded to a location's proximity to an urban setting and to the location's effort to reduce its carbon footprint, respectively.
+
+Six case studies of varying size and location were conducted. Making use of The Economics of Ecosystems and Biodiversity (TEEB) valuation database, the team calculated the total monetary cost, taking the cost of ecosystem services into account. Their ecological costs can be seen in Figure 3.
+
+| Project | Ecological Cost (USD) |
| Road construction in Cairo, Egypt | $219 |
| Housing in Washington, USA | $502 |
| Facebook MPK20 in California, USA | $19,110 |
| Road construction in Hobart, Australia | $1.7 million |
| Via Verde Pipeline in Puerto Rico | $642 million |
| Nicaragua Canal Project | $3.16 billion |
+
+Figure 3. Ecological costs obtained by Emory University for six case studies.
+
+For each location, the team gave the total with and without the first year's environmental service costs. Taking the percentage increase into account, the team considered whether the cost of ecosystem services had a significant impact on the total cost of the land-use project. The judges commented that there was some disconnect between the model and the values obtained for the ecological cost; however, the team clearly presented their findings in a concise and easy-to-read way.
+
+The team from Emory concluded their paper by considering the accuracy of their model in the future and a discussion of sensitivity analysis. While the judges felt that the paper lacked advice to project managers and planners, the team's ability to clearly describe their model and results was worthy of the designation Outstanding.
+
+# Lanzhou University of Technology, Lanzhou, Gansu, China: "Assessment of Ecological Services"
+
+Judges always seek out well-researched submissions with unique models based in the philosophy of the literature and on the reality of the available data. The team from Lanzhou University of Technology had such a submission. They impressed the judges in their creation of a metric based on the environmental components shown in Figure 4, as well as in their acknowledgment of data abnormalities (in some of the factors that they wished to incorporate and in others that they did not include due to lack of a reputable source).
+
+
+Figure 4. Lanzhou University's graphic depiction of their unique cost metric incorporating natural resource depletion, prevention costs, and repair costs.
+
+The team began with a nice restatement of the problem and included both costs and benefits presented from both sides of the argument before the creation of their own metric for natural resources lost. They carried this detailed self-created model throughout the entire submission, adding for more extensive analysis, when needed, aspects such as time. Judges applauded this approach, since many other teams created many separate models to handle different aspects of the problem.
+
+The only caution for this Outstanding paper was the use of their analysis for their recommendations. Although grounded in their unique and tested model, the recommendations were very technical. Instead, the team should have taken their quantitative results and put them into tangible recommendations that would be truly implementable by project managers.
+
+Overall, Lanzhou University's treatment of the overall problem from differing perspectives led to the creation of a solid, scalable, and adaptable model which should stand as an example for future teams as they seek to tackle environmental problems.
+
+# Nanjing University of Information Science and Technology, Nanjing Jiangsu, China:
+
+# "Ecosystem Services Matter! Sustainability is Necessary"
+
+The team from Nanjing University of Information Science and Technology conducted solid analysis throughout and submitted a well-written paper. The judges want to recognize this Outstanding submission for its initial analysis and inclusion of how we need to consider the element of time in our analysis for the true understanding of the impact of ecosystem services in a particular site.
+
+The team identified a variety of indicators from the categories of provisioning services, regulating services, biotope services, and cultural services, based on the Millennium Ecosystem Assessment [Millennium Ecosystem Assessment Board 2005]. The team then utilized the entropy-weight method to determine the weights of each before compressing 11 total indicators into four comprehensive variables. The judges appreciated their description of the process and their normalization of weights to determine a true intensity evaluation, which also allowed the team to classify each ecosystem service index in terms of the appropriate levels for weak, moderate, or strong impact.
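
The entropy-weight step described above can be sketched as follows; the data matrix here is illustrative, not the team's 11 actual indicators. In this method, an indicator whose values are more dispersed across samples carries more information and therefore receives a larger weight.

```python
import math

def entropy_weights(X):
    """Entropy-weight method: indicators whose values vary more across
    samples get larger weights. X: m samples x n indicators, with values
    already normalized to be positive."""
    m, n = len(X), len(X[0])
    col_sums = [sum(row[j] for row in X) for j in range(n)]
    # proportion of each sample within its indicator column
    P = [[row[j] / col_sums[j] for j in range(n)] for row in X]
    # information entropy per indicator (terms with p = 0 contribute 0)
    e = [-sum(p[j] * math.log(p[j]) for p in P if p[j] > 0) / math.log(m)
         for j in range(n)]
    d = [1.0 - ej for ej in e]      # degree of diversification
    total = sum(d)
    return [dj / total for dj in d]

# illustrative data: 3 samples, 2 indicators (already normalized)
weights = entropy_weights([[0.2, 0.9], [0.4, 0.1], [0.9, 0.5]])
print(weights)
```

The weights are nonnegative and sum to one, so they can be used directly to compress several indicators into a composite index, as the team did.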
+
+The judges praised the concluding analysis, in which the team used vector regression to integrate the temporal requirement, and their inclusion of a detailed discussion of each project with change over time, as shown for one example in Figure 5.
+
+Although the projects included were generic, the sustainability analysis and then the comparison of each project with and without the consideration of ecosystem services was excellent. The team from Nanjing University truly utilized descriptive graphics coupled with supporting explanations to communicate clearly the impacts and their recommendations for each project.
+
+
+Figure 5. Cost-benefit ratios for one project from the Nanjing University of Information Science and Technology.
+
+# Renmin University of China, Beijing, China: "Take Environmental Effect into Consideration: Cost-Benefit Analysis on Land-Use Projects"
+
+The Outstanding submission from Renmin University stood out for a straightforward cost-benefit model that included an aspect of time from the beginning, plus their tremendously detailed explanation and analysis of two projects of different scales from different areas of the world. Including the dynamical nature of the problem from the start of the analysis indicated to the judges that this team really understood that short-term implications of a project on the environment contrast with some of the lasting impacts on ecosystem services.
+
+Although the only temporal analysis was two models that built upon each other to account for short-term, and then long-term, environmental degradation, the team nicely explained why this is not just an economic issue. They explained the necessity for the inclusion of the costs to mitigate negative results of land-use changes. Their models for short- and long-term impacts on ecosystem included different factors, with the long-term model truly focused on the environmental costs, as shown in Figure 6, as opposed to the short-run costs balanced between economic and environmental costs.
+
+Their models were applied expertly to a modern small-scale paper mill in China and then to a long-term analysis of the expansive electric power development by the Tennessee Valley Authority (TVA). The use of diverse application projects—modern vs. historical, small vs. large-scale, built-up area vs. rural expansive region, and from different areas in the world—really allowed the team to showcase the nice features and flexibility of their model. The use of the TVA allowed them to include a historical perspective on the decision to build electric power in the United States. This analysis led the team to a nice discussion of their identified strengths but, more importantly, the weaknesses in their formulation. Judges appreciated their honesty in the critique of the data utilized for the creation of their weights, as well as the ambiguity in boundaries between short and long term.
+
+
+Figure 6. Long-run cost schematic from Renmin University.
+
+# U.S. Military Academy, West Point, New York, USA: "Ecological Services Valuation Model: Understanding the True Cost of Land-Use Projects"
+
+The team from the U.S. Military Academy was selected as this year's Rachel Carson Award winner. This award honors an American conservationist whose book Silent Spring initiated the global environmental movement and whose work spanned many disciplines concerned with local and global environments [Carson 1962]. This award is presented to the team for their excellence in using scientific theory and data in its modeling.
+
+The judges praised this team for their well-thought-out, simple model based in the reality of the complexity of the issue at hand. Throughout the examination of their projects, the team focused on the cost to an individual project manager or business owner as opposed to the price necessary to preserve the value of ecosystem services, which are owned by all in the surrounding community. They proposed a method of converting lost environmental services into Quality-Adjusted Life Years (QALYs), which may then be converted into dollars as necessary. This incorporation of the human aspect, and not simply dollars, provided unmatched multidimensional considerations and understanding by the team.
+
+Although the actual model presented was simple, the detailed use of the concepts in two specific future projects was outstanding. The analysis for each project consisted of a breakdown for each indicator on the value for the particular project, as well as the shadow price or tradeoff cost for an increase in the indicator, in terms of both QALYs and dollars. The 16 indicators were broken down into three classifications for the overall ecological services of provisioning, regulating, and cultural.
+
+The judges found the best analysis within the team's counterarguments section, where the team discussed the limitations and critiques of their methods as well as other methods that exist today. For example, the team considered restitution calculations that currently are based on the cost to add that service, but should—as specified by the team—be based on the future impact, not just the present.
+
+Overall, this submission showed to the judges a deeper understanding of the interconnectedness of the human experience and the ecosystem services by examining the current theory and evidence to create a simple model to account for the true impact of environmental degradation done through the implementation of projects on our valuable land.
+
+# University of Virginia, Charlottesville, Virginia, USA: "What is the Cost of Environmental Degradation?"
+
+The Institute for Operations Research and the Management Sciences (INFORMS) selected this submission from the University of Virginia as this year's outstanding INFORMS winner for Problem E. The INFORMS designation is given to a team whose modeling and analyses best exemplify the style and content reflected in the professional practice of operations research. This submission was unique in the team's choice to use a differential equations model.
+
+Their model was based on a logistic growth function to predict the impact of projects on the value of the land in terms of ecosystem services. The choice of such a formulation was based solidly on the fact that the ecosystem is a complicated interrelated system; and although impacts on it may initially grow exponentially, they cannot do so forever. The environmental nature of the problem resulted in their using a differential equation model that limited exponential growth.
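
The kind of bounded-growth behavior described above can be sketched with the standard logistic solution; the parameter values below are illustrative, not the team's calibrated values.

```python
import math

def logistic(t, r, K, v0):
    """Closed-form solution of the logistic ODE dV/dt = r * V * (1 - V/K):
    near-exponential growth at first, then saturation at the capacity K."""
    return K / (1.0 + (K / v0 - 1.0) * math.exp(-r * t))

# illustrative parameters: growth rate r, cap K, initial impact v0
trajectory = [round(logistic(t, r=0.5, K=100.0, v0=1.0), 2)
              for t in (0, 5, 10, 20, 40)]
print(trajectory)
```

The trajectory starts at `v0`, rises steeply, and levels off below `K`, which is exactly the property the team relied on: impacts on an ecosystem cannot grow exponentially forever.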
+
+Another unique aspect of the model was the understanding of land use over time and how it transitions naturally, with or without human implementation of projects. The team understood that they needed first to evaluate the existing land not just based on the type of land but also on its quality, which was a truly unique perspective. Additional consideration was given to the valuation of the land and the physical area of the land.
+
+The judges appreciated the scope of the three different projects that the team considered—the building of steel production factories, regreening of a desert, and the creation of a panda habitat—one very traditional, one with mixed opinions, and one truly meant to be environmentally conscious.
+
+Take the more controversial regreening project, as seen in Figure 7 in the team's cost-benefit analysis, where the independent variable is the area of the project. The project loses money (from the status quo established for the land) until the scope of the project expands to $4285\mathrm{km}^2$, and it reaches maximum gains at $5980\mathrm{km}^2$. The team accompanies their graphics with thoughtful interpretations of the implications in terms of the context of the problem as well as the mathematical analysis.
+
+
+Figure 7. Cost-benefit plot of the Kubuqi Greening Project from the University of Virginia.
+
+The team's discussion of parameter estimation, initial conditions, and intercept analysis were components unmatched in this year's submissions. Although the team did not include a strict sensitivity analysis nor specific discussion of strengths and weaknesses, those topics were implied throughout the paper.
+
+# Recommendations to Future Participants
+
+In the past, we have made recommendations for future participants for Problem E. We suggest looking at the details of these recommendations in the Problem E judges' commentary in last year's ICM issue [Arney 2018]. In general, the judges recommend focusing on three areas for those attempting the environmental problems during next year's competition:
+
+- First, make a plan for the weekend and conduct the initial research.
+
+- Next, solve the problem and all the subtasks that were stated or implied in the problem statement.
+- Finally, ensure that you present explanations and interpretations for your solutions and recommendations.
+
+# Conclusion
+
+This problem presented challenges to teams in the form of considering the impact of or accounting for changes to ecosystem services due to land-use projects. Only when ecosystem services are accounted for in the cost-benefit analysis for a proposed project can the true comprehensive valuation of the project be determined and assessed.
+
+Many teams had innovative and useful ideas for some parts of the problem but were unable to satisfy all the tasks required in the problem within the time constraints. The eight Outstanding teams completed the required tasks the best and were able to communicate the results effectively. Members of all the competing teams are to be congratulated for their excellent work and dedication to interdisciplinary modeling and problem solving. The judges were truly impressed by the ability of so many to combine great modeling, well researched science and effective written communication skills to address the critical issues of future environmental degradation and its impact on ecosystem services.
+
+# References
+
+Arney, Kristin. 2018. Judges' commentary: Climate change and regional instability. The UMAP Journal 39 (2): 187-196.
+Carson, Rachel. 1962. Silent Spring. New York: Houghton-Mifflin.
+Costanza, Robert, Ralph d'Arge, Rudolf de Groot, Stephen Farber, Monica Grasso, Bruce Hannon, Karin Limburgh, Shaid Naeem, Robert V. O'Neill, Jose Paruelo, Robert G. Raskin, Paul Sutton, and Marjan van den Belt. 1997. The value of the world's ecosystem services and natural capital. Nature 387: 253-260. https://www.pdx.edu/sites/www.pdx.edu.sustainability/files/Nature_Paper.pdf.
+Millennium Ecosystem Assessment. 2005. Guide to the Millennium Assessment Reports. https://www.millenniumassessment.org/en/index.html.
+
+# About the Authors
+
+Kristin Arney is pursuing her Ph.D. in Industrial Engineering at the University of Washington. Kristin began her military career after graduating with a B.S. in Mathematics from Lafayette College. During her career, she has served in assignments all over the globe, received her M.S. in Operations Research from North Carolina State University, and taught as an Assistant Professor at the U.S. Military Academy, where she returned and rejoined the faculty in January 2017.
+
+Kasie Farlow is Assistant Professor of Mathematics at Dominican College. Kasie obtained her Ph.D. in Mathematics from Virginia Tech followed by three years at the U.S. Military Academy as an Assistant Professor of Mathematics and Davis Fellow. It was at West Point where she was first introduced to the Interdisciplinary Contest in Modeling (ICM). Since 2016, Kasie has been actively involved in the ICM, serving as a triage judge, commentator, and final judge.
\ No newline at end of file
diff --git a/MCM/2019/F/1904381/1904381.md b/MCM/2019/F/1904381/1904381.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b8068c783eb7a4748916dc556454847fbc0301e
--- /dev/null
+++ b/MCM/2019/F/1904381/1904381.md
@@ -0,0 +1,559 @@
+# 2019
+
+# MCM/ICM
+
+# Summary Sheet
+
+# Digital Currency System is Coming
+
+# Summary
+
+In recent years, digital currency has developed rapidly and become an indispensable part of the modern economic system. We construct a brand-new dynamic macroeconomic operation system that takes digital currency into account. Our model is built from three perspectives: individual choice, the national economy, and the international economy. We first consider individual behavioral decisions, because the aggregate demand arising from all micro-level individual choices determines the state of the domestic economy, described by product-market equilibrium and money-market equilibrium. We use the model to analyze three short-term economic performance objectives, as well as a long-term one, under different exchange rate systems, economic shocks, and policies.
+
+We first build a virtual country whose central bank issues the new digital currency. The impact of digital currency on the current banking system and macroeconomic system is determined by the economic interaction with a particular country in the real world.
+
+We discuss two different exchange rate systems that a country may choose. Using theoretical analysis and R, we find that either system can achieve internal and external equilibrium and realize the economic objectives if proper policies are implemented when digital currency is introduced.
+
+The innovation of our model is that we introduce parameters for international capital flows (ICF) into the model. We analyze the impact of ICF on the country's economy both theoretically and empirically. It is inspiring to find that a decentralized digital currency can not only make the economy run more efficiently by removing barriers to currency flows, but also enhance the world's welfare through the ultra-high liquidity of digital currency.
+
+To ensure that the new banking and macroeconomic systems involving digital currency can operate with stability and effectiveness, we propose that all countries accepting digital currencies co-create a United Nations-affiliated organization, the World Digital Currency Bank (WDCB), and develop a series of regulations.
+
+Keywords: digital currency; fixed exchange rate; float exchange rate
+
+# 1 Introduction
+
+# 1.1 Background
+
+The application of digital currency brings both advantages and disadvantages. Its digital form enables instantaneous transactions and borderless transfer of ownership, which improves the efficiency of markets and constructs a more convenient form of financial exchange. At the same time, while digital currencies are not technically money, their value is tied to real-world currencies, which from the very beginning has placed them in a precarious situation. The lack of regulation around these currencies and their anonymity also bring risk to both citizens and economic analysts.
+
+Therefore, it is necessary to model the working mechanism of a financial system that incorporates this new type of currency. Such a model would give us a clear overview of the market and help in making policies with regard to both regulation of the currency market and stability of the system.
+
+# 1.2 Statement of the Problem
+
+- Task 1: Construct a model in which digital currency is taken into consideration; the model should adequately elucidate this financial system.
+- Task 2: Identify the viability and effects of a global decentralized digital financial market and develop a kind of general digital currency.
+- Task 3: Identify the major factors that will limit or facilitate its growth, access, security, and stability at the individual, national, and global levels.
+- Task 4: How will countries modify their current banking and monetary models according to their different needs and their willingness to work with this new financial marketplace? And what are the consequences of these modifications?
+- Task 5: Include mechanisms for oversight of the global digital currency built above.
+- Task 6: Analyze the long-term effects of this system on the current banking industry; the local, regional, and world economy; and international relations between countries.
+- Task 7: What will happen if countries abandon their own currencies and use only digital currency?
+
+# 2 Assumptions and Notations
+
+# 2.1 Assumptions
+
+In order to better quantify our model, we may later relax some of the following assumptions.
+
+- Consumption and savings occur in individuals, while production and investment occur in the enterprise sector. Individuals and enterprises have to pay taxes.
+- Consumers are rational and pursue maximum utility.
+- The government has two kinds of behavior: government purchase expenditure and tax revenue.
+- Depreciation and undistributed profits are zero. GDP, NDP, NI, PI and DPI are equal.
+- The government aims to achieve economic growth, adequate employment, price stability and internal and external equilibrium under balance of international payments.
+
+# 2.2 Notation of Parameters
+
+We list the parameters involved in the model in Table 1.
+
+# 3 Mathematical Model
+
+In order to construct a model that sufficiently represents the brand-new financial system, we should take into account the willingness of countries to accept digital currency. We primarily consider two different paths of developing digital currency, which lay the foundation of our mathematical model.
+
+Path I: Central banks in all countries refuse to accept digital currency. In this case, the system will be the same as today's, and digital currency will rarely influence the traditional currency market.
+
+Path II: NOT ALL central banks refuse to accept digital currency. Once the central bank of a country with some international influence regards digital currency as equivalent to traditional currency, it will be willing to pay a certain amount of traditional currency to purchase digital currency held by the public, and to sell digital currency to potential demanders in exchange for traditional currency. From this analysis, it can be seen that digital currency will directly enter the existing open macroeconomic model.
+
+Table 1: List of Parameters and Notations
+
+| Parameters | Descriptions |
| Y | GDP of a country |
| C | Consumption of a country |
| I | Investment |
| G | Government purchase |
| T | Tax |
| X | Exports |
| M | Imports |
| m | Total currency demand of an individual, a constant in a short period |
| y | Individual income level |
| i | Market interest rate level |
| m1 | The amount of money held in traditional currency |
| m2 | The amount of money held in digital currency |
| u | Individual utility level |
| α, β | Parameters depending on individual micro-traits |
| L1(Y) | Money demand caused by the transactions and precautionary motives |
| L2(i) | Money demand caused by the speculative motive, a function of the interest rate |
| Ms | Total money supply |
| Ms1 | Total money supply in the form of traditional currency |
| Ms2 | Total money supply in the form of digital currency |
| P | Price level, measured by the inflation rate |
| k, h | Constants |
| CA | The balance of the current account |
| FA | The balance of the financial account |
| NX | Net exports |
| NF | Net capital inflow |
| AX | Capital outflow |
| AM | Capital inflow |
| Pf | Price level of foreign countries, measured by foreign inflation rates |
| e | Exchange rate of the national currency under the direct quotation method |
| iw | Interest rates of foreign countries |
| γ, n, σ, φ | Parameters |
+
+In the following sections, we will mainly focus on path two. Our idea is to discuss the details of our mathematical model from three different perspectives in macroeconomics: 1) the domestic product market 2) the money market including individual level and national level 3) the foreign exchange market. In addition, we would observe how every single model changes with the introduction of digital currency and analyze the working mechanism of all these three general models.
+
+# 3.1 Digital Currency System
+
+
+
+The figure above illustrates the development of the international monetary system. It is evident that this system will change with the introduction of a brand-new type of currency: digital currency. Therefore, we name the new system the Digital Currency System.
+
+- Build a virtual country
+
+Setting aside all existing digital currencies, we assume a mysterious organization issues a brand-new digital currency, which is initially held by a minority. We also envision a virtual country: the central bank of this country is the very organization that issues the digital currency, the currency in circulation is the digital currency, and the residents of this country are the individuals holding the currency.
+
+We assume the total amount of digital currency issued by the central bank of this virtual country is a constant. Each unit of digital currency can be infinitely divided. The unit price of digital currency, measured in real-world currency, depends on the economic interactions between the virtual country and the countries in the real world. Denote the virtual country as $A$.
+
+In the following analysis, we suppose a particular country in the real world is economically associated with the virtual country through existing currencies and digital currencies. This international economic relationship can be characterized by the FE curve. Starting with the FE curve, we will in turn analyze the IS curve and the LM curve of this particular country.
+
+# 3.2 The Money Market
+
+# 3.2.1 Basic Theory
+
+Money Supply
+
+Money supply refers to the process by which economic entities create money and put it into circulation, that is, the behavior and process of the central bank and commercial banks injecting, expanding, or contracting the money in circulation.
+
+The money supply refers to the sum of the currency held by enterprises, individuals and foreign countries outside the banking system and other deposit currencies that are freely available at any time.
+
+Money Demand
+
+Money demand refers to the amount of money the public is willing to hold after comprehensively weighing the benefits and costs of various assets.
+
+- Liquidity Preference Theory
+
+Keynes's study of money demand is based on a study of the motives behind economic agents' demand for money. Keynes believed that people's demand for money is determined by three motives:
+
+1. The transactions motive: in order to make daily transactions, people must hold money.
+2. The precautionary motive: people prefer to hold liquidity to cope with unexpected situations.
+3. The speculative motive: due to the uncertainty of future interest rates, people will adjust their asset structure and demand more money to hold, in order to avoid capital losses or increase capital gains.
+
+# 3.2.2 Individual Level - Micro Level
+
+Modern new classical macroeconomics tends to determine money demand from the perspective of dynamic optimization. Based on the new classical currency model, we interpret money demand as the amount of wealth that people are willing to retain in the form of money, under the premise of invariant income. Hence we propose the following model to describe economic behavior at the micro level.
+
+- Individual Currency Choice
+
+Denote $\bar{m} \triangleq \bar{m}(y, i)$ . According to our model, we have $\bar{m} = m_1 + m_2$
+
+- Individual Money Utility Function
+
+# Indifference Curve:
+
+$$
+u = m _ {1} ^ {\alpha} m _ {2} ^ {\beta}
+$$
+
+where $\alpha, \beta$ are parameters. They depend on individual micro-traits, such as risk preferences, liquidity preferences, etc.
+
+Assumptions: 1. Individual preference is normal. 2. Monotonicity. 3. Convexity.
+
+In the next section, we solve this optimization problem through both mathematical calculation and explicit figures.
+
+$$
+\max _ {m _ {1}, m _ {2}} u = m _ {1} ^ {\alpha} m _ {2} ^ {\beta}
+$$
+
+subject to
+
+$$
+\bar {m} = m _ {1} + m _ {2}
+$$
+
+We assume that both $u$ and $\bar{m}$ have continuous first partial derivatives. To find the maximum of this function, we introduce a new variable $\lambda$ as the Lagrange multiplier.
+
+$$
+L (m _ {1}, m _ {2}, \lambda) = m _ {1} ^ {\alpha} m _ {2} ^ {\beta} - \lambda (m _ {1} + m _ {2} - \bar {m})
+$$
+
+We take the partial derivatives with respect to $m_{1}$, $m_{2}$, and $\lambda$:
+
+$$
+\left\{ \begin{array}{l} \frac {\partial L}{\partial m _ {1}} = \alpha m _ {1} ^ {\alpha - 1} m _ {2} ^ {\beta} - \lambda = 0 \\ \frac {\partial L}{\partial m _ {2}} = \beta m _ {1} ^ {\alpha} m _ {2} ^ {\beta - 1} - \lambda = 0 \\ \frac {\partial L}{\partial \lambda} = \bar {m} - m _ {1} - m _ {2} = 0 \end{array} \right. \Longrightarrow m _ {1} = \frac {\alpha}{\alpha + \beta} \bar {m}, \quad m _ {2} = \frac {\beta}{\alpha + \beta} \bar {m} \tag {1}
+$$
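
As a quick numerical check, the split implied by the first-order conditions ($\alpha m_2 = \beta m_1$ together with $m_1 + m_2 = \bar{m}$) can be compared against a brute-force grid search; the values of $\bar{m}$, $\alpha$, $\beta$ below are purely illustrative.

```python
def optimal_split(m_bar, alpha, beta):
    """Closed-form split from the first-order conditions:
    alpha * m2 = beta * m1, so m1 = alpha / (alpha + beta) * m_bar."""
    m1 = alpha / (alpha + beta) * m_bar
    return m1, m_bar - m1

def utility(m1, m2, alpha, beta):
    # Cobb-Douglas-style money utility u = m1^alpha * m2^beta
    return m1 ** alpha * m2 ** beta

m_bar, alpha, beta = 10.0, 0.3, 0.7   # illustrative values
m1, m2 = optimal_split(m_bar, alpha, beta)
# a coarse grid search over interior splits never beats the closed form
grid_best = max(utility(s * m_bar / 100, (100 - s) * m_bar / 100, alpha, beta)
                for s in range(1, 100))
assert utility(m1, m2, alpha, beta) >= grid_best
print(m1, m2)   # 3.0 7.0
```

The optimum allocates the individual's money stock between traditional and digital currency in proportion to the exponents $\alpha$ and $\beta$.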
+
+
+Figure 1: (a) Individual choice between currencies; (b) the points that are not on the curve.
+
+# 3.2.3 National Level - Macro Level
+
+1. Total Demand for Money: $M = \sum \bar{m} = \sum m_{1} + \sum m_{2}$
+2. Total Supply for Money: $M_{s} = \bar{M}_{s1} + \bar{M}_{s2}$
+
+3. Balance:
+
+$$
+\left\{ \begin{array}{l} M = \frac {\bar {M} _ {s 1} + \bar {M} _ {s 2}}{P} \\ L (Y, i) = L _ {1} (Y) + L _ {2} (i) = k Y - h i \\ M = L (Y, i) \end{array} \right.
+$$
+
+$\Rightarrow i = -\frac{1}{h}\frac{M_s}{P} + \frac{k}{h} Y$, where $k$ and $h$ are parameters.
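
The derived LM relation is a straight line in the $(Y, i)$ plane. A minimal sketch (with illustrative parameter values, not calibrated to any economy) shows the slope $k/h$ at work:

```python
def lm_rate(Y, Ms, P, k, h):
    """Interest rate that clears the money market at income Y:
    i = (k/h) * Y - (1/h) * (Ms/P)."""
    return (k / h) * Y - (Ms / P) / h

# slope over a 10-unit rise in Y; a larger h flattens the LM curve
rise_steep = lm_rate(110.0, 40.0, 1.0, 0.5, 10.0) - lm_rate(100.0, 40.0, 1.0, 0.5, 10.0)
rise_flat = lm_rate(110.0, 40.0, 1.0, 0.5, 40.0) - lm_rate(100.0, 40.0, 1.0, 0.5, 40.0)
print(rise_steep, rise_flat)   # roughly 0.5 vs 0.125
```

This matches the slope discussion below: higher interest-rate sensitivity of money demand ($h$) gives a flatter LM curve, while higher income sensitivity ($k$) gives a steeper one.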
+
+Figure 1 explains the economic meaning of points that are not on the curve. As noted in Figure 1, the points above the LM curve indicate insufficient money demand in the country, while those below the curve represent excessive money demand.
+
+
+Figure 2: The slope of the LM curve. (a) Sensitivity of money demand to income; (b) sensitivity of money demand to interest rate.
+
+Figure 2 demonstrates that the slope of the LM curve is determined by the slopes of the transactions-and-precautionary money demand curve $L_1(Y)$ and the speculative money demand curve $L_2(i)$:
+
+1. When the sensitivity of money demand to income is certain, the higher the sensitivity of money demand to interest rates, the flatter the LM curve.
+2. When the sensitivity of money demand to interest rates is certain, the lower the sensitivity of money demand to income, the flatter the LM curve.
+
+
+Figure 3: LM curve shifts.
+
+Figure 3 explains why the LM curve shifts to the right (or moves downward):
+
+1. Exogenous decrease in money demand: both $L_{1}$ and $L_{2}$ decrease.
+2. Exogenous increase in the real money supply: $M_s$ increases or $P$ decreases.
+
+# 3.3 The Domestic Product Market
+
+The IS curve describes all possible combinations of interest rate (i) and real GDP (Y) when the domestic product market is in equilibrium, given other fundamental factors.
+
+1. National Income Balance: $Y = C + I + G + (X - M) = C + S + T$
+2. Saving-Investment Equality: $I_{all} = I + G + (X - M) = S + T = S_{all}$
+
+Note: $I$ and $S$ below refer to $I_{all}$ and $S_{all}$.
+
+IS Model:
+
+$$
+\left\{ \begin{array}{l} I = I _ {0} - d i + (X - M) \\ S = - a + (1 - b) Y + (T - G) \\ I = S \end{array} \right.
+$$
+
+Therefore, we have
+
+$$
+i = \frac {a + I _ {0} + (\bar {G} - \bar {T}) + (\bar {X} - \bar {M})}{d} - \frac {(1 - b)}{d} Y
+$$
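As a sanity check on the derivation, the following sketch (with our own illustrative parameters) confirms that at the derived $i$ the product market clears, $I_{all} = S_{all}$:

```python
# IS curve: I0 - d*i + (X - M) = -a + (1 - b)*Y + (T - G)
#   =>  i = [a + I0 + (G - T) + (X - M)]/d - ((1 - b)/d)*Y
# Illustrative parameter values of our own, not from the paper.
a, b, d = 50.0, 0.8, 100.0
I0, G, T, X, M = 200.0, 120.0, 100.0, 80.0, 60.0

def is_rate(Y):
    """Interest rate on the IS curve at income Y."""
    return (a + I0 + (G - T) + (X - M)) / d - ((1.0 - b) / d) * Y

Y = 1000.0
i = is_rate(Y)
inv = I0 - d * i + (X - M)            # total injections I_all
sav = -a + (1.0 - b) * Y + (T - G)    # total leakages S_all
assert abs(inv - sav) < 1e-9          # product market clears
print(round(i, 4))  # -> 0.9
```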
+
+| Points off the curve | Slope of the IS curve | Shifts of the IS curve |
+| Above: aggregate demand insufficient. Below: aggregate demand excessive. | Depends on the slopes of the saving and investment functions: (1) the higher the interest elasticity of investment, the flatter the IS curve; (2) the lower the income elasticity of saving, the flatter the IS curve. | The IS curve moves to the right (or up) when: (1) the investment curve shifts right (business confidence increases, exports increase, imports decrease); (2) the saving curve shifts down (consumer confidence increases, taxes decrease, government purchases increase). |
+
+Figure 4: The analysis of the IS model
+
+# 3.4 The Foreign Exchange Market
+
+The FE curve describes all possible combinations of interest rates (i) and real GDP (Y) when the balance of payments is in equilibrium, given other fundamental factors.
+
+The equilibrium of the balance of payments means that the sum of the current account balance and the financial account balance is zero.
+
+# Assumptions:
+
+1) The current account balance is approximately the net export.
+2) The financial account balance is approximately the net capital inflow, i.e. the negative of the net capital outflow $NF$.
+
+We have $B = CA(Y) + FA(i) = 0$, with
+
+$$
+\left\{ \begin{array}{c} C A \approx N X \\ N X = X - M \end{array} \right. \qquad \left\{ \begin{array}{c} F A \approx - N F \\ N F = A M - A X \end{array} \right.
+$$
+
+Substituting into the equilibrium condition gives
+
+$$
+X - M - (A M - A X) = 0
+$$
+
+$$
+N X = X - M = A M - A X = N F
+$$
+
+Suppose NX and NF satisfy the following equations
+
+$$
+\left. \begin{array}{c} N X = X - M \\ M = m _ {0} + \gamma Y - n \cdot r e r \\ X = X _ {0} \end{array} \right\} \Longrightarrow N X (Y) = X _ {0} - m _ {0} - \gamma Y + n \frac {P _ {f} e}{P} \tag {2}
+$$
+
+$$
+\left. \begin{array}{c} N F (i) = \sigma \left(i _ {w} - i\right) + \phi \\ N X = N F \end{array} \right\} \Longrightarrow i = \frac {\gamma}{\sigma} Y + \left(i _ {w} + \frac {\phi - \left(X _ {0} - m _ {0}\right)}{\sigma} - \frac {n}{\sigma} \frac {P _ {f} e}{P}\right) \tag {3}
+$$
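Equivalently, one can solve $NX(Y) = NF(i)$ for $i$ numerically; the sketch below (all parameter values are our own illustration) verifies the external balance and the upward slope $\gamma/\sigma$ of the FE curve:

```python
# FE curve: external balance NX(Y) = NF(i), with
#   NX(Y) = X0 - m0 - gamma*Y + n*(Pf*e/P)   (net exports)
#   NF(i) = sigma*(i_w - i) + phi            (net capital outflow)
# All parameter values are illustrative assumptions, not estimates.
X0, m0, gamma, n = 150.0, 50.0, 0.25, 40.0
Pf, e, P = 1.0, 2.0, 1.0
sigma, i_w, phi = 2000.0, 0.05, 10.0

def fe_rate(Y):
    """Interest rate that balances the external account at income Y."""
    nx = X0 - m0 - gamma * Y + n * (Pf * e / P)
    # sigma*(i_w - i) + phi = nx  =>  i = i_w + (phi - nx)/sigma
    return i_w + (phi - nx) / sigma

Y = 1000.0
i = fe_rate(Y)
nx = X0 - m0 - gamma * Y + n * (Pf * e / P)
assert abs(nx - (sigma * (i_w - i) + phi)) < 1e-9  # balance of payments holds
assert fe_rate(Y + 100.0) > i                      # FE slopes upward (gamma/sigma)
print(round(i, 4))  # -> 0.09
```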
+
+| Points off the curve | Slope of the FE curve | Shifts of the FE curve |
+| Above: balance of payments surplus. Below: balance of payments deficit. | Depends on the marginal propensity to import and the sensitivity of the net capital outflow to the interest rate: (1) for a given sensitivity of the net capital outflow to the interest rate, the smaller the marginal propensity to import, the flatter the FE curve; (2) for a given marginal propensity to import, the higher the sensitivity of the net capital outflow to the interest rate, the flatter the FE curve. | The FE curve moves to the right (or down) when: (1) net exports $NX$ increase exogenously (exports increase, imports decrease); (2) net capital outflows $NF$ decline exogenously (capital outflows decrease, capital inflows increase); (3) the local currency is devalued. |
+
+Figure 5: The analysis of the FE model
+
+# 4 Analysis of Our Model
+
+# 4.1 Choice of Exchange Rate Institution
+
+In the real world, the central bank of a particular country chooses an exchange rate institution governing the exchange rate between the domestic currency and the digital currency issued by the virtual country. Denote a particular country in the real world as $B_{i}(i = 1,2,3,\dots n)$. Suppose $B_{i}$ chooses either a fixed exchange rate or a floating exchange rate. The exchange rate is quoted in domestic currency using the direct quotation method; that is, it denotes the amount of domestic currency that one unit of digital currency can be exchanged for. We also assume that there is no exchange control on digital currency and that the bank's buying price equals its selling price.
+
+In the following sections, we will analyze the macroeconomic performance of the country $B_{i}$ after each country chooses a different exchange rate institution, including: 1) long-run goals 2) short-run goals 3) internal equilibrium 4) external equilibrium. To be more specific, we will analyze the following:
+
+# Fixed Exchange Rate
+
+- International Balance of Payments under a fixed exchange rate institution
+- Policy Effects under a fixed exchange rate institution
+- Shocks to the Economy under a fixed exchange rate institution
+- Internal and External Disequilibrium Adjustment under a fixed exchange rate institution
+
+# Floating Exchange Rate
+
+- International Balance of Payments under a floating exchange rate institution
+- Policy Effects under a floating exchange rate institution
+- Shocks to the Economy under a floating exchange rate institution
+- Internal and External Disequilibrium Adjustment under a floating exchange rate institution
+
+# 4.2 Fixed Exchange Rate Institution
+
+# 4.2.1 Balance of International Payments
+
+In order to maintain exchange rate stability, the central bank will intervene. Interventions include sterilized intervention and non-sterilized intervention.
+
+Non-Sterilized Intervention
+
+Situation I: there is a surplus on the official settlements balance and the domestic currency faces pressure to appreciate.
+
+Under this circumstance, digital currency is under pressure to depreciate. Since the central banks of virtual countries do not have monetary and fiscal policies, we only need to analyze the behavior of central banks in existing countries. In addition, we regard the electronic money held by the central bank as a foreign exchange asset.
+
+Central bank intervention: purchase digital currency, sell domestic currency.
+
+| Major assets | Main liabilities |
+| Domestic assets (debt securities, loans to banks) | Base currency ↑ (cash, deposits from banks) |
+| International reserve assets ↑ (foreign exchange assets) | |
+
+Table 2: Central bank's assets and liabilities
+
+Table 2 demonstrates the impact of central bank intervention on the central bank's balance sheet: base currency and international reserve assets both increase. Moreover, official intervention, by adjusting the assets and liabilities of the central bank, not only changes a country's holdings of official international reserve assets but also changes the money supply. Under a fractional reserve system, the change in the money supply will be several times the amount of the official intervention.
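A stylized sketch of this multiplier effect, assuming a simple required reserve ratio (our own illustrative number, not from the paper):

```python
# Non-sterilized intervention: buying digital currency with newly issued
# base money; under fractional reserves broad money rises by a multiple.
# The reserve ratio below is an assumed illustrative value.
intervention = 100.0     # base money created by the purchase
reserve_ratio = 0.10     # required reserve ratio (assumption)

multiplier = 1.0 / reserve_ratio
delta_money_supply = intervention * multiplier

assert abs(delta_money_supply - 1000.0) < 1e-9  # several times the intervention
print(multiplier, delta_money_supply)  # -> 10.0 1000.0
```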
+
+- International Balance of Payments - Realization of External Equilibrium
+
+# 4.2.2 Policy Effects
+
+- Monetary Policy: take the expansionary monetary policy as an example. The mechanism works as follows.
+
+
+Figure 6: Mechanism
+
+- Fiscal Policy: take the expansionary fiscal policy as an example. The mechanism works as follows.
+
+| | Monetary policy | Fiscal policy |
+| Hypothetical scenario | The initial state of the country is non-full employment with external equilibrium | Same |
+| Policy option | Expansionary | Expansionary |
+| Impact on the balance of payments | Deteriorating | Uncertain |
+| Policy effect | Whether capital flows are less or more sensitive to interest rates, the effect of monetary policy is limited | Fiscal policy has a strong short-term impact on real GDP |
+
+Figure 7: Mechanism
+
+| | Fixed exchange rate system |
+| Assumption | Perfect capital mobility |
+| Monetary policy | It is impossible for a country to implement a completely independent monetary policy |
+| Fiscal policy | Fiscal policy is completely effective |
+| Conclusion | Under the fixed exchange rate system, fiscal policy is relatively effective |
+
+Figure 8: Mechanism
+
+# 4.2.3 Shocks to the Economy
+
+In this section, we introduce definitions of different shocks to the economy to help explain the impact of introducing digital currency, based on our mathematical model.
+
+Shocks to Domestic Currency: Changes in the way citizens hold money
+
+Under the fixed exchange rate system, the impact of currency shocks on interest rates and output is very limited.
+
+Domestic Spending Shock Effect: Changes in consumer confidence
+
+Under the fixed exchange rate system, an expenditure shock will have a certain impact on interest rates and output.
+
+International Trade Shock Effects: Reduced export demands
+
+Under the fixed exchange rate system, international trade shocks have a huge impact on the internal equilibrium of a country.
+
+# International Capital Flow Shock Effects: international capital outflow
+
+New digital transaction methods enable users to exchange currency instantly via email addresses or fingerprints. Peer-to-peer payment systems provided by companies such as Google Pay enable global virtual currency transfers in seconds without bank or currency-exchange verification. Digital transactions surpass cash and check transactions because they are not constrained by bank policies, national boundaries, citizenship, debt, or other socio-economic factors. Owing to exogenous factors such as political fluctuations and insider information, countries with an electronic monetary system will often face huge international capital flows. For example, when a country is at war, individuals holding that country's currency will rush to exchange it for electronic money and transfer the electronic money elsewhere.
+
+Under the fixed exchange rate institution, international capital flows have a dramatic impact on internal equilibrium.
+
+# 4.2.4 Internal and External Disequilibrium Adjustment
+
+Tinbergen's Rule: the number of independent policy instruments that a country can utilize must be at least equal to the number of economic policy objectives to be achieved. That is, to achieve each economic goal, at least one independent policy instrument is needed.
+
+Mundell's policy matching principle: Each policy tool should be assigned to its most influential goal, which has a comparative advantage in influencing this policy goal.
+
+According to Mundell's policy matching principle, we summarize all the possible policy matches as shown below.
+
+
+(a)
+
+| Initial state of balance of international payments | Domestic economy: unemployment | Domestic economy: inflation |
+| Surplus | Expansionary monetary policy + expansionary fiscal policy | Expansionary monetary policy + tightening fiscal policy |
+| Deficit | Tightening monetary policy + expansionary fiscal policy | Tightening monetary policy + tightening fiscal policy |
+
+(b)
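The assignment table above can be encoded as a direct lookup; the sketch below transcribes the table, adding no new economics:

```python
# Mundell's assignment of policy instruments, transcribed from the table:
# (balance-of-payments state, domestic state) -> (monetary, fiscal) mix.
policy_match = {
    ("surplus", "unemployment"): ("expansionary monetary", "expansionary fiscal"),
    ("surplus", "inflation"):    ("expansionary monetary", "tightening fiscal"),
    ("deficit", "unemployment"): ("tightening monetary",   "expansionary fiscal"),
    ("deficit", "inflation"):    ("tightening monetary",   "tightening fiscal"),
}

def recommend(bop_state, domestic_state):
    """Return the (monetary, fiscal) policy mix for the given initial states."""
    return policy_match[(bop_state, domestic_state)]

assert recommend("deficit", "inflation") == ("tightening monetary", "tightening fiscal")
print(recommend("surplus", "unemployment"))
```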
+
+# Swan Model:
+
+| Initial state of balance of payments | Domestic economy: unemployment | Domestic economy: inflation |
+| Surplus | Domestic currency appreciation; expansionary policy | Domestic currency appreciation; tightening policy |
+| Deficit | Domestic currency depreciation; expansionary policy | Domestic currency depreciation; tightening policy |
+
+# 4.3 Floating Exchange Rate Institution
+
+# 4.3.1 Balance of International Payments
+
+The floating exchange rate institution can realize external equilibrium spontaneously through the free fluctuation of the exchange rate.
+
+Situation I: official settlement surplus
+
+If there is a surplus in the balance of international payments, the domestic currency will appreciate and the digital currency will depreciate. Individuals in virtual countries will then reduce their imports, so the country's exports fall, the FE curve moves to the left, and the IS curve shifts to the left (or down). Ultimately, internal and external equilibrium is achieved, and the digital currency depreciates.
+
+# 4.3.2 Policy Effects
+
+- The Influence of Monetary Policy on Exchange Rate
+- The Influence of Fiscal Policy on Exchange Rate
+
+
+
+The results above are summarized in the following table.
+
+| | Monetary policy | Fiscal policy |
+| Hypothetical scenario | The initial state of the country is non-full employment | Same |
+| Policy option | Expansionary | Expansionary |
+| Impact on the exchange rate | Depreciation of the domestic currency | Uncertain |
+| Policy effect (capital flows completely insensitive to interest rates) | Strong | Weak |
+| Policy effect (capital flows extremely sensitive to interest rates) | Highly effective | Completely ineffective |
+
+Comparison of policy effects under different exchange rate systems:
+
+| | Fixed: low capital mobility | Fixed: high capital mobility | Floating: low capital mobility | Floating: high capital mobility |
+| Monetary policy | Weak | Basically ineffective | Strong | Highly effective |
+| Fiscal policy | Strong | Highly effective | Weak | Basically ineffective |
+| Comparative conclusion | Under the floating exchange rate system, monetary policy is relatively effective; under the fixed exchange rate system, fiscal policy is relatively effective. | | | |
+
+# 4.4 Impact effect under floating exchange rate system
+
+- Domestic currency shock effect: changes in the way citizens hold money. A currency shock has a strong impact on a country's economy.
+- Domestic expenditure shock effect: changes in consumer confidence. The impact of an expenditure shock depends on whether international capital flows or the current account changes more.
+
+# 4.5 Real Cases Analysis
+
+Considering the economic interaction of the following countries with virtual countries, we analyze how these countries achieve internal and external balance through the use of policy instruments.
+
+# 4.5.1 Analysis of Fixed Exchange Institution - Sweden
+
+For the fixed exchange rate system, we choose Sweden for analysis.
+
+
+
+From the figure above and Figure 9, we conclude that there is domestic inflation and a balance of international payments deficit, so the domestic currency is under pressure to depreciate. Sweden should pursue a tight monetary policy and a tight fiscal policy. To maintain the fixed exchange rate, the central bank sells digital currency and buys its own currency. The IS curve moves to the left and the LM curve moves to the left, finally achieving internal and external equilibrium.
+
+
+Figure 9: The three models together for Sweden and Israel
+
+
+Figure 10: Linear regression ANOVA results
+
+# 4.5.2 Analysis of Floating Exchange Institution - Israel
+
+For the floating exchange rate system, we choose Israel for analysis.
+
+From the figure above and Figure 9, we conclude that there is domestic inflation and a balance of international payments deficit. The domestic currency depreciates and the digital currency appreciates, so Israel's imports decrease and exports increase. For virtual countries, imports increase, and the FE curve moves to the right. Israel adopts a tight monetary policy; the domestic currency appreciates, the digital currency depreciates, and the LM curve moves to the left, finally achieving internal and external equilibrium.
+
+# 5 Evaluations of the Model
+
+# 5.1 Sensitivity Analysis
+
+The sensitivity of our model is considered in this subsection. To test it, we apply an ANOVA test to the models fitted in the real-case analysis and check whether the results are viable and reliable.
+
+| Analysis of Variance Table (Response: IR) | Df | Sum Sq | Mean Sq | F value | Pr(>F) | |
+| GDP | 1 | 0.0041739 | 0.0041739 | 40.4351 | 3.616e-05 | *** |
+| IEI | 1 | 0.0005016 | 0.0005016 | 4.8598 | 0.04775 | * |
+| Residuals | 12 | 0.0012387 | 0.0001032 | | | |
+
+(a) result for Sweden
+
+| Analysis of Variance Table (Response: IR) | Df | Sum Sq | Mean Sq | F value | Pr(>F) | |
+| GDP | 1 | 0.037189 | 0.037189 | 151.704 | 8.567e-11 | *** |
+| IEI | 1 | 0.005325 | 0.005325 | 21.722 | 0.0001506 | *** |
+| Residuals | 20 | 0.004903 | 0.000245 | | | |
+
+(b) result for Israel
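The reported F statistics can be reproduced from the printed sums of squares and degrees of freedom, since F equals the effect mean square divided by the residual mean square:

```python
# Reproduce the ANOVA F statistics: F = MeanSq(effect) / MeanSq(residual),
# with MeanSq = Sum Sq / Df, using the numbers printed in the tables.
def f_value(ss_effect, df_effect, ss_resid, df_resid):
    return (ss_effect / df_effect) / (ss_resid / df_resid)

# Sweden: residual Sum Sq 0.0012387 on 12 Df
assert abs(f_value(0.0041739, 1, 0.0012387, 12) - 40.4351) < 0.01   # GDP
assert abs(f_value(0.0005016, 1, 0.0012387, 12) - 4.8598) < 0.01    # IEI

# Israel: residual Sum Sq 0.004903 on 20 Df
assert abs(f_value(0.037189, 1, 0.004903, 20) - 151.704) < 0.2      # GDP
assert abs(f_value(0.005325, 1, 0.004903, 20) - 21.722) < 0.01      # IEI
print("F values reproduced")
```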
+
+From the linear regression results above, we can conclude that each variable of the model is indispensable, and the coefficients of the model are small, which demonstrates that the model does not change significantly when the variables fluctuate. Therefore, it is reasonable to consider the model insensitive to outliers.
+
+# 5.2 Test for Robustness
+
+In this subsection, we consider the robustness of our model. To test it, we apply the model to other countries and check whether it works as well as in the real-case analysis.
+
+For the fixed exchange rate system, we take Thailand as an example. According to Figure 11, Thailand should pursue a tight fiscal policy and an expansionary monetary policy and will finally achieve internal and external equilibrium. For the floating exchange rate system, we take Mexico as an example. According to Figure 11, Mexico should implement a tight monetary policy and will eventually achieve internal and external equilibrium, which attests to the effectiveness of our model.
+
+Additionally, using the ANOVA test in R, we find that these models are significant enough to explain the real cases.
+
+
+(a) result for Thailand
+Figure 11: model evaluation on robustness
+
+
+(b) result for Mexico
+
+# 6 Conclusions
+
+# 6.1 Strengths and Weaknesses
+
+# 6.1.1 Strengths
+
+- We establish a multi-level dynamic analysis framework from micro-individual to macroeconomic operation
+- We creatively propose ideas and mathematical models for analyzing digital currency
+- The model is suitable for analysis under a variety of economic shocks
+
+- The model is applicable to various exchange rate systems and has strong robustness
+- The model proposes a variety of policy tools to resolve internal and external imbalances, with strong realistic policy implications.
+
+# 6.1.2 Weaknesses
+
+- The equations in the model are linear equations that fail to truly portray the economic conditions of some countries.
+- We only consider the economic interaction between the two countries.
+
+# 6.2 Conclusion
+
+We first construct a representative model of financial and macroeconomic operating systems involving digital currency. This model analyzes individual behavioral decisions on digital currency from a micro perspective. We also discuss the two different exchange rate systems a country may choose. Using theoretical analysis and R, we find that either system can achieve internal and external equilibrium and realize economic objectives if proper policies are implemented when digital currency is introduced.
+
+# Recommendation
+
+Dear leaders,
+
+We are writing this letter to provide you with an overview of our models of digital currency, since we have been hired by International Currency Marketing (ICM) to help you assess the viability of digital currency. We first construct a representative model of financial and macroeconomic operating systems involving digital currency. On one hand, this model analyzes individual behavioral decisions on digital currency from a micro perspective. On the other hand, it analyzes domestic economic operation from a macro perspective based on the product market and the money market. Moreover, it demonstrates the economic interactions within and between countries, helping build a framework for the foreign exchange market.
+
+We discuss the two different exchange rate systems a country may choose. Using theoretical analysis and R, we find that either system can achieve internal and external equilibrium and realize economic objectives if proper policies are implemented when digital currency is introduced.
+
+If the central bank of your country chooses a fixed exchange rate system, it is advisable to adopt an expansionary monetary policy to stimulate the economy if there is a balance of payments surplus. If the unemployment rate is high, adopting an expansionary fiscal policy is advocated, and vice versa. More importantly, the central bank and the government should adopt a proper policy match based on the actual economic situation.
+
+Under a floating exchange rate institution, the balance of international payments is realized spontaneously, and monetary policy is relatively effective. If the unemployment rate is high, an expansionary monetary policy can be adopted to ease the situation, and vice versa.
+
+In the real digital currency era, peer-to-peer payment systems built by companies such as Google Pay can move virtual currency globally in a short time. In other words, countries within the digital currency system will potentially face drastic international capital flows due to exogenous factors. Hence we introduce parameters concerning international capital flows into the model to focus on the impact of intense international capital flows on a national economy with digital currency. Consequently, we can determine what kind of policy tools the state should adopt to alleviate the negative effects.
+
+To achieve a better performance with the new macroeconomic system and financial institution involving digital currency, we advise you to join the World Digital Currency Bank (WDCB) if your country accepts digital currency. Considering stability and security, advanced technology like blockchain can be adopted to help build a safer system and improve the total welfare of the world with digital currency.
+
+Sincerely yours,
+
+Team 1904381
+
+# References
+
+[1] Fleming, J. Marcus. "Domestic Financial Policies under Fixed and Floating Exchange Rates." IMF Staff Papers 9 (March 1962), pp. 369-377.
+[2] Klein, Michael W., and Jay C. Shambaugh. Exchange Rate Regimes in the Modern Era. Cambridge, MA: MIT Press, 2010.
+[3] Ghosh, Atish R., Jonathan D. Ostry, and Charalambos Tsangarides. "Exchange Rate Regimes and the Stability of the International Monetary System." International Monetary Fund Occasional Paper 270, 2010.
+[4] Yin Chen. "Research on the Development of Electronic Money and Government Regulation in the Era of 'Internet +'." Technology and Economic Guide, 2018, 26(24).
+[5] Feenstra, Robert C. "Estimating the Effects of Trade Policy." In Grossman and Rogoff (1995).
+[6] Eichengreen, Barry. Exorbitant Privilege: The Rise and Fall of the Dollar and the Future of the International Monetary System. New York: Oxford University Press, 2011.
+[7] Ghosh, Atish R., Jonathan D. Ostry, and Charalambos Tsangarides. "Exchange Rate Regimes and the Stability of the International Monetary System." International Monetary Fund Occasional Paper 270, 2010.
+[8] Fischer, Stanley. "Distinguished Lecture on Economics in Government: Exchange Rate Regimes: Is the Bipolar View Correct?" Journal of Economic Perspectives 15, no. 2 (Spring 2001), pp. 3-24.
+[9] Froot, Kenneth A., and Kenneth Rogoff. "Perspectives on PPP and the Long-Run Real Exchange Rate." In Grossman and Rogoff (1995).
+[10] Grossman, Gene M., and Elhanan Helpman. "Protection for Sale." AER 84, no. 4 (September 1994), pp. 833-850.
+[11] Guangyou Zhou. "The Impact of Electronic Money on the Deposit Reserve System." Journal of Guangdong University of Finance, 2010(05).
+[12] Findlay, Christopher, and Tony Warren, eds. Impediments to Trade in Services: Measurement and Policy Implications. New York: Routledge, 2000.
+[13] Bordo, Michael D., and Barry Eichengreen, eds. Retrospective on the Bretton Woods International Monetary System. Chicago: University of Chicago Press, 1992.
+[14] Broda, Christian, and David Weinstein. "Globalization and the Gains from Variety." Quarterly Journal of Economics 121, no. 2 (May 2006), pp. 541-585.
+[15] Eun, Cheol S., and Bruce G. Resnick. International Financial Management, 6th ed. New York: McGraw-Hill/Irwin, 2012.
+
+# Appendices
+
+
+
+
+
+
+
+
+
+
+
+# Appendix A Real Data Result
+
+
+Domestic Product Market Model on Thailand
+
+
+Money Market Model on Thailand
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/MCM/2019/F/1905127/1905127.md b/MCM/2019/F/1905127/1905127.md
new file mode 100644
index 0000000000000000000000000000000000000000..891018473cd68228ab6d8cdeeeb2598070fed47c
--- /dev/null
+++ b/MCM/2019/F/1905127/1905127.md
@@ -0,0 +1,811 @@
+# 2019
+
+# MCM/ICM
+
+# Summary Sheet
+
+General Digital currency Circulation Model
+
+(based on the theory of adjusted optimal currency area)
+
+This article describes the influence of digital currency on the currency circulation of sovereign countries, government behavior, and the world trade system from three aspects: individual transactions, national regulation, and world trade.
+
+In order to clearly describe the behavior of individuals and countries from micro to macro perspectives, we divided the analytical framework into three sub-models: Search and Matching Model, Long-term Government Behavior Model and Supranational Monetary System Model.
+
+Firstly, we used Fisher's equation and Keynes's theory of money demand to analyze the pricing and risk characteristics of digital currency. The analysis shows that the volatility of digital currency originates from speculative demand, which is highly opportunistic, while transactional demand is insufficient because too few individuals trust digital currency. Therefore, expanding transaction demand and acceptance will help stabilize the currency value and enable it to act better as a trading medium.
+
+Later, we extended the Diamond-Mortensen-Pissarides Model (DMP) and established a Search and Matching Model for holding digital and legal currencies in the market. We proved that there exists an equilibrium in the absence of external non-economic factors, that is, the proportion of people accepting digital to legal currency will converge to a fixed range. But there is also a long-term situation in which some factors gradually expel the use of legal or digital currency.
+
+The initial recognition of the proportions of legal currency and digital currency is exogenous, which leaves room for the government and people to adjust the recognition of digital currency (i.e., artificial intervention). On this basis, we referred to the actual economic situation of more than 130 major countries, substituted relevant parameter estimates, evaluated the currency holding patterns these countries would adopt without policy intervention, and concluded that more developed and open economies will accept the coexistence of the two currencies as means of payment. The instability of currency value and other factors may cause a state to abandon legal tender.
+
+We set up a Long-term Government Behavior Model to measure the government's regulatory behavior. The government regulates the proportion of currency use through the difference of taxation between the two payment modes, resulting in the cost difference for different currency users. On the one hand, the high liquidity of digital currency will promote the emergence of transactions, and at the same time, it will increase the matching efficiency in transactions, which will motivate the government to promote the use of it to increase the total tax base. On the other hand, due to the existence of additional regulatory costs for digital currency and the possible social losses caused by illegal transactions, the government also needs to regulate the development of digital currency to some extent.
+
+Then, we used the Profit Margin Model under Exchange Rate Fluctuation to draw the conclusion that under the hypothesis of relatively low volatility (that is, according to the theory of currency demand, the increase of demand for digital currency transactions will increase its stability), the form of digital currency transactions will become the mainstream of international trade.
+
+After that, we established a Supranational Monetary System Model, in which digital currency is controlled by a supranational group and distributed to countries according to foreign exchange demand. Once digital currency circulates publicly worldwide, it will achieve lower price volatility, and world trade will then be settled mainly in digital currency. However, due to differences in development, legal currencies will still be retained and countries will formulate independent monetary policies. Central banks will tend to control the balance of trade and capital flows. This will help countries formulate sound and positive economic development programs, promote the free flow of capital, and tap the most promising growth points of economic investment.
+
+Finally, we discussed the impact of digital currency on banking industry and the mechanism of long-term restructuring. The high liquidity and informatization of digital currency will lead to the oligopoly tendency of banking industry and disappearance of the intermediary function of bank payment. Lending platforms similar to banking industry will rise in the Internet. How to strengthen the supervision of illegal fund-raising and black-market transactions has become a challenge for governments of all countries.
+
+# POLICY RECOMMENDATION
+
+To: National leaders
+
+From: Team 1905127
+
+Subject: Policy Recommendation on building a digital financial market
+
+Date: Jan 28,2019
+
+Dear National Leaders:
+
+This policy proposal is designed to give you an intuitive impression of the current development of the global digital monetary system. Our team has collected the characteristics of digital currencies and block chain technology, and combined it with the existing economic and monetary finance theories. We established a mechanism analysis model from the personal, national and international levels, trying to explain the inevitable trend of the development of the digital currency system and the existing risk factors for you to make policy decisions.
+
+According to the circulation law of money, because digital currencies currently cover few transactions, their speculative demand is far higher than their transaction demand. According to the Fisher equation, this lack of trades in digital currencies leads to their present high volatility. As long as a large proportion of people participate in transactions in digital currencies, the fluctuation of demand will become relatively more stable.
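A minimal numerical illustration of this Fisher-equation argument (all numbers are our own, not data): with the same shift in money held, a currency backing fewer transactions sees a much larger absolute price move.

```python
# Fisher equation: M * V = P * T  =>  P = M * V / T.
# When few transactions T are settled in a currency, a given swing in the
# money held for speculation moves its price level much more.
# All numbers below are our own illustration, not data from the paper.
def price_level(M, V, T):
    return M * V / T

V = 5.0  # assumed velocity of circulation
# Speculative swing: effective money in circulation moves from 100 to 110.
move_thin = price_level(110.0, V, 50.0)  - price_level(100.0, V, 50.0)   # T small
move_deep = price_level(110.0, V, 500.0) - price_level(100.0, V, 500.0)  # T large
assert abs(move_thin - 10 * move_deep) < 1e-9  # 10x thinner market, 10x bigger move
print(round(move_thin, 2), round(move_deep, 2))  # -> 1.0 0.1
```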
+
+In our individual model, the market may reject one currency and choose the other because of differences in currency recognition, but the equilibrium recognition of a currency is ultimately determined by whether sellers and buyers benefit from using it. As recognition increases, digital currency will be used more frequently, and an equilibrium level of use will finally be reached. Therefore, our model assumes that the market will automatically choose one form of currency or achieve long-term coexistence of the two. However, due to the lower transaction and storage costs of digital currencies (reflected in the model as a more stable currency value), traders' acceptance of digital currencies will increase, holding other factors constant.
+
+At the national level, the government's goal is to maximize long-term tax income by adjusting the tax rates on transactions settled in the two currencies and guiding public recognition of each currency. Because the transaction cost of digital currency is low, adopting a lower tax rate will increase the total national economy (though not necessarily normal economic transactions; it may involve smuggling, etc.). The matching efficiency of transactions will then improve, which promotes tax revenue. It is worth noting that law enforcement costs will also increase, so the government needs to weigh the pros and cons.
+
+At the international level, we establish a model of the monetary system across countries. Because of its widespread use in trade settlement as a global currency, the stability of digital currency will rise, so enterprises bear smaller losses from exchange rate fluctuations, and global trade settlement will come to be dominated by digital currency. With a worldwide digital monetary control system in place, national central banks can still conduct monetary policy independently, and will be more motivated to balance imports, exports and capital flows. Because digital money flows freely, capital will move to more profitable places, reaching many previously neglected underdeveloped areas.
+
+In summary, we should give full play to the advantages of digital currency while controlling exchange rate uncertainty and the risk of illegal transactions.
+
+Sincerely,
+
+Team#1905127
+
+# General Digital currency Circulation Model
+
+(based on the theory of adjusted optimal currency area)
+
+# Contents
+
+# 1 Introduction
+
+1.1 Background
+1.2 Restatement of the Problem
+1.3 Overview of Our Work
+
+# 2 General Assumptions and Justifications
+
+2.1 Assumptions
+2.2 Variable Description
+
+# 3 Analysis and Model Building
+
+3.1 An Equilibrium Model of Legal and Digital Currency at Individual Level
+
+3.1.1 Digital Currencies: Price and Risk
+3.1.2 Search and Matching Model under Dual Monetary System
+
+3.2 A Model for Maximizing Government Tax Revenue at National Level
+
+3.2.1 National Cost-benefit Analysis
+3.2.2 Government Intertemporal Tax Model
+
+3.3 International Digital Currency Trade Model (Based on Improved OCA)
+
+3.3.1 Difference of Import and Export Profit under Exchange Rate Fluctuation
+3.3.2 Supranational Monetary System Model at the World Level
+
+# 4 Empirical Model based on Algebraic Operation
+
+4.1 Empirical Study Based on Micro-model
+
+4.1.1 Theoretical Restatement
+4.1.2 Parameter Estimation and Result Solution
+4.1.3 Empirical Model Based on Econometric Methods
+4.1.4 The Final Results of the Empirical Model
+
+4.2 Empirical Study Based on Macro-model
+4.3 Further Discussion and Application of Factors Outside the Model
+
+# 5 Conclusion
+
+5.1 Strengths and Weaknesses
+
+5.1.1 Strengths
+5.1.2 Weaknesses
+
+5.2 Conclusion
+5.3 Future Work
+
+# Reference
+
+# Appendix
+
+# 1 Introduction
+
+# 1.1 Background
+
+Ever since Satoshi Nakamoto released Bitcoin in 2009, digital currency trading has been expanding rapidly. On the one hand, digital currencies have the advantages of strong liquidity, high secrecy and low transaction costs. On the other hand, they operate with little national regulation, and central banks find macro-regulation difficult (though this may be part of the liberal ideal that Nakamoto wanted to achieve).
+
+Undeniably, almost all nations are stepping up the establishment of digital currency trading mechanisms, trying to bring digital money into the regulatory system so as to maintain the dominance of their own sovereign currencies. With the characteristics of Internet transactions, digital currency behaves more like a global currency: its ability to promote global trade and capital circulation is better than that of regulated sovereign currencies. Whether the benefits of using digital currency in domestic and global trade can exceed the costs is the topic of concern.
+
+# 1.2 Restatement of the Problem
+
+Our team will measure the impact of digital currencies on the legal tender and financial markets at the individual, national and global level.
+
+We will also include the motivation of nations to limit or encourage the development of digital currencies in our study.
+
+Last but not least, we will explore the regulatory objectives and means applicable to this free-flowing digital currency at the global level, as well as the monetary behavior game among countries.
+
+# 1.3 Overview of Our Work
+
+We built up models of transaction and regulation of legal and digital currencies at three levels.
+
+As for the transactions between individuals, different forms of equilibrium will be achieved (e.g. natural exclusion of the legal tender, exclusion of digital currency or acceptance of two currencies at the same time), because of the difference in the degree of recognition and ease of circulation.
+
+For the measurement of the recognition degree of legal currency and digital currency, we drew on the Diamond-Mortensen-Pissarides (DMP) search and matching model. We introduced the matching efficiency of trades, economic development and other factors into the application of the model, and used an econometric model to evaluate the samples.
+
+At the national level, we examined the factors that contribute to national income (tax, regulation and cost control) to establish a model that maximizes national revenue, so as to evaluate the country's motivation to encourage or restrict the development of digital currency.
+
+In the aspect of monetary circulation in international trade, we built up a Universal Digital Currency System Model based on Mundell's theory of Optimal Currency Areas (OCA).
+
+Under reasonable assumptions, we can prove that:
+
+In order to reduce the risk of trading volatility, normal trade in goods will rely on digital currency in circulation worldwide rather than on the legal tender of a country.
+
+At the same time, a supranational financial institution (similar to the current European Central Bank but more independent) will be created to regulate trade and capital flows among nations, while nations retain the ability to use sovereign currencies within their borders and to carry out effective macroeconomic regulation.
+
+This remedies the weakness of monetary and fiscal policy under fixed and floating exchange rate regimes in the Mundell-Fleming-Dornbusch (MFD) model with only sovereign currencies.
+
+Finally, based on the impact of the decentralization of digital money on commercial banks, we explored the long-term trend of the evolution of commercial banks and policies.
+
+
+Figure 1 Overview of Our Work
+
+# 2 General Assumptions and Justifications
+
+# 2.1 Assumptions
+
+As discussed above, we make several assumptions in our model.
+
+- This new type of digital currency is widely distributed all over the world. It has the characteristics of decentralization, a constant supply in circulation, anonymity and so on.
+- Everyone, including their country, is independent and free to choose a currency without interference from others.
+- Digital currency and fiat currency are relatively independent, and fluctuations in the value of one do not affect the other.
+- Although digital money is anonymous, countries can trace every transaction with advanced techniques, at a higher cost.
+
+These are the basic assumptions of our model, and we will add other assumptions for different models later.
+
+# 2.2 Variable Description
+
+Table 1 Parameter List
+
+| Parameter | Description |
| $P$ | Commodity price |
| $Y$ | Commodity output |
| $v$ | Currency circulation velocity |
| $\varepsilon_{dig}$ | Exchange rate of digital currency to fiat currency |
| $M_{dig}$ ($M_{flat}$) | Money supply of digital currency (fiat currency) |
| $prob(success)$ | Probability of matching buyer and seller successfully |
| $e$ | Matching efficiency |
| $p_{o-d}$ ($p_{o-f}$) | Probability that a person without any currency is matched with a holder of digital currency (fiat currency) |
| $d$ ($f$) | Proportion of digital currency (fiat currency) holders |
| $\mu$ ($\lambda$) | Probability of accepting digital currency (fiat currency) |
| $V_{d}$ ($V_{f}$) | Value held by a digital currency (fiat currency) owner |
| $pc_{d}$ ($pc_{f}$) | Cost of storing digital currency (fiat currency) |
| $g$ | Government revenue |
| $t_{d}$ ($t_{f}$) | Tax rate on digital currency (fiat currency) transactions |
| $\Pi$ | Profit of exporters |
| $\pi$ | Inflation rate |
| $r$ | Real interest rate |
+
+# 3 Analysis and Model Building
+
+What gave rise to the popularity of digital money? And what prevents digital currency from fully performing the functions of money? First of all, digital currency is based on blockchain technology. The distributed ledger generated by the blockchain means that transactions need not be confirmed by a third-party institution; a transfer requires only an operation between the two parties. Money no longer circulates through banks; it is just a piece of account-keeping information in the ocean of the Internet.
+
+The convenience and concealment of transactions are the source of the demand for digital currency. On the other hand, the monetary functions defined by Krugman (1984) are "medium of exchange, measure of value and store of value". Because of its global liquidity and lack of sovereign backing, digital currency is unlikely to be guaranteed by any single country. Digital currency is not itself a real good with use value, so it is not recognized by most people on most trading occasions (that is, they do not accept it as an equivalent in trade). As a result, the transaction demand for digital currency will not grow without bound and will be volatile (its value exists only in the recognition of the population). At the same time, the high liquidity of digital money helps its speculative demand rise. For these reasons it cannot fully perform the functions of exchange medium and store of value. The models of this chapter, at three levels, are built on the features above.
+
+# 3.1 An Equilibrium Model of Legal and Digital Currency at Individual Level
+
+# 3.1.1 Digital Currencies: Price and Risk
+
+First, we focus on a single domestic market in which legal tender $M_{flat}$ circulates and a digital currency also takes part in the transaction process, in the amount $M_{dig}$. When the exchange rate of the digital currency against legal tender is $\varepsilon_{dig}$, the supply of digital money expressed in legal tender is:
+
+$$
+M_{dig}^{s} = \varepsilon_{dig} M_{dig}
+$$
+
+According to the analysis above, the demand for digital currency will be divided into two parts: transaction and speculative, namely:
+
+$$
+M_{dig}^{d} = M_{dig}^{business} + M_{dig}^{speculative}
+$$
+
+According to Fisher's equation:
+
+$$
+PY = Mv
+$$
+
+In the case of incremental fluctuations, we get:
+
+$$
+\Delta P = \frac{\Delta M\, v}{Y} \quad \text{or} \quad \Delta P = M v\, \Delta\!\left(\frac{1}{Y}\right)
+$$
+
+Where the price of money is equal to the amount of money in circulation $M$ (in this case the demand for money) times the velocity $v$ , divided by the amount of goods $Y$ settled using digital currency. Due to the technical characteristics of digital money, the velocity of currency circulation will be a fixed value when the storage factor is controlled. As a result, the price of goods measured by digital money will fall as transactions expand, that is, more goods are approved and traded. And when the speculative demand of digital currency in the market increases, the aggregate demand will be too high, which will trigger a rise in the price of goods measured by digital money. When the digital currency is not widely recognized, the quantity of goods $(Y)$ is small, and the value volatility risk of the digital currency is expected to be greater.
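As a toy illustration of the relationship just described (all numbers below are hypothetical, not from the paper), the Fisher-equation price level $P = Mv/Y$ can be computed directly: broad acceptance (larger $Y$) deflates prices measured in the digital currency, while a speculative surge in demand (larger $M$) inflates them.

```python
# Illustrative sketch of P = M*v / Y for a digital currency whose
# circulation velocity v is fixed by its technical characteristics.
# All parameter values below are hypothetical.
def price_level(m_demand, velocity, goods_settled):
    """Price of goods measured in the digital currency: P = M*v / Y."""
    return m_demand * velocity / goods_settled

v = 12.0     # fixed circulation velocity
M = 1_000.0  # total demand for the digital currency

p_narrow = price_level(M, v, goods_settled=500.0)        # few goods accepted
p_wide   = price_level(M, v, goods_settled=2_000.0)      # broad acceptance
p_spec   = price_level(1_500.0, v, goods_settled=500.0)  # speculative surge

# Wider acceptance lowers the price level; extra speculative demand raises it.
assert p_wide < p_narrow < p_spec
```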
+
+In Section 3.3, we will verify that when digital currency becomes the settlement of global trade, the universal recognition of digital currency leads to a more stable exchange rate than a sovereign currency, thus deepening the dependence of international trade on digital money.
+
+Combined with Keynes' theory of speculative monetary demand and the nature of digital money itself, the speculative demand for digital currency is positively related to the exchange rate of the currency to foreign currency, the expectation of interest rate and the increasing attention of the crowd. As the concept of digital currency expands, the value of digital currency as an investment property will increase, that is, the increase of speculative demand in the next period will push up the price of digital currency. On the other hand, it also shows that speculative demand has time series related attributes in the model.
+
+# 3.1.2 Search and Matching Model under Dual Monetary System
+
+# 3.1.2.1 Model Specification
+
+Suppose that in a closed economy with one single period, there are specific buyers and sellers, in which:
+
+a. Buyers holding fiat currency make up a proportion $f \in [0,1]$ of the population, and buyers holding digital currency a proportion $d \in [0,1]$. No one holds both currencies. The proportion of sellers (those who hold no currency) is $1 - d - f$.
+b. Each buyer is a representative actor with demand for $i$ of the $j$ kinds of goods in the market ($i < j$), and each buyer wants to buy exactly 1 unit. Whoever buys the goods obtains utility $u$.
+
+c. Each seller is also a representative actor. Each seller produces only 1 unit of product per period, at production cost $c$.
+d. Everyone in the market carries out only one search and pairing per trade. The search process satisfies: the larger the economy, the larger the market and the more efficient the pairing, captured by the matching efficiency $e(GDP)$; and the more diversified the market demand, the more likely a match succeeds. So the probability of a successful match can be expressed as:
+
+$$
+prob(success) = e(GDP)\, m(i, j)
+$$
+
+To simplify the model and make the expression concrete, we set the matching efficiency here to 1 and assume random matching between buyer and seller:
+
+$$
+\operatorname{prob}(success) = \frac{i}{j} = \delta
+$$
+
+In a random match between buyer and seller, the probabilities that a seller holding no currency is matched with a buyer holding fiat money ($p_{o-f}$) or digital currency ($p_{o-d}$) are as follows:
+
+$$
+p_{o-f} = \min \left\{1, \frac{f}{1 - f - d}\right\}
+$$
+
+$$
+p_{o-d} = \min \left\{1, \frac{d}{1 - f - d}\right\}
+$$
+
+Conversely, the probabilities that a buyer holding fiat currency ($p_{f-o}$) or digital currency ($p_{d-o}$) is matched with a seller holding no currency are as follows:
+
+$$
+p_{f-o} = \min \left(1, \frac{1 - f - d}{f}\right)
+$$
+
+$$
+p_{d-o} = \min \left(1, \frac{1 - f - d}{d}\right)
+$$
+
+We set the probability of acceptance of fiat money to be $\lambda$ and of digital currency to be $\mu$; $V_{o}, V_{f}, V_{d}$ respectively denote the value held by a person with no currency, with fiat money and with digital currency. The following equations can then be obtained:
+
+$$
+r V_{o} = \left(1 - p_{o-f} - p_{o-d}\right) \delta^{2} (u - c) + p_{o-f} \lambda \delta \left(V_{f} - V_{o} - c\right) + p_{o-d} \mu \delta \left(V_{d} - V_{o} - c\right) \tag{1}
+$$
+
+$$
+r V_{f} = p_{f-o} \lambda \delta \left(u + V_{o} - V_{f}\right) - pc_{f} \tag{2}
+$$
+
+$$
+r V_{d} = p_{d-o} \mu \delta \left(u + V_{o} - V_{d}\right) - pc_{d} \tag{3}
+$$
+
+Where $r$ is the discount rate, and $pc_{f}$, $pc_{d}$ are the costs of storing fiat money and digital currency.
+
+The equations indicate that, at the margin, conducting a transaction and refusing it yield the same value, for one can preserve the value of one's labor and wait for the next transaction.
+
+
+Figure 2 Matching relation
+
+The first term of formula (1) is a barter that occurs after a matching failure; the second and third terms cover exchanges between currency-less sellers and buyers holding legal tender and digital currency, respectively. Formulas (2) and (3) take the perspectives of the fiat and digital currency holders.
+
+# 3.1.2.2 Equilibrium State of the Model
+
+According to equation (1), when $V_{f} > V_{o} + c$, all sellers will choose to accept fiat money in order to maximize their profits ($\lambda = 1$). Conversely, when $V_{f} < V_{o} + c$, all sellers will reject fiat money ($\lambda = 0$). And when $V_{f} = V_{o} + c$, rejecting and accepting fiat money are equivalent:
+
+$$
+\lambda = \left\{ \begin{array}{ll} 0 & \text{if } V_{f} < V_{o} + c \\ (0, 1) & \text{if } V_{f} = V_{o} + c \\ 1 & \text{if } V_{f} > V_{o} + c \end{array} \right.
+$$
+
+Similarly:
+
+$$
+\mu = \left\{ \begin{array}{ll} 0 & \text{if } V_{d} < V_{o} + c \\ (0, 1) & \text{if } V_{d} = V_{o} + c \\ 1 & \text{if } V_{d} > V_{o} + c \end{array} \right.
+$$
+
+When the proportion of people accepting legal tender reaches equilibrium, the following holds:
+
+$$
+\lambda = 1 \quad \text{or} \quad \lambda = \hat{\lambda}
+$$
+
+It can be inversely solved by equation (1) that:
+
+$$
+\hat{\mu} = \left\{ \begin{array}{ll} \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{d-o}} + \frac{r c + pc_{d}}{p_{d-o} \delta (u - c)} + \frac{p_{o-f} (V_{f} - V_{o} - c)}{p_{d-o} (u - c)} & \text{if } \lambda = 1 \\ \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{d-o}} + \frac{r c + pc_{d}}{p_{d-o} \delta (u - c)} & \text{if } \lambda < 1 \end{array} \right.
+$$
+
+Similarly, when the proportion of people who accept digital money in the market reaches equilibrium:
+
+$$
+\mu = 1 \quad \text{or} \quad \mu = \hat{\mu}
+$$
+
+We can use these equations to derive that:
+
+$$
+\hat{\lambda} = \left\{ \begin{array}{ll} \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{f-o}} + \frac{r c + pc_{f}}{p_{f-o} \delta (u - c)} + \frac{p_{o-d} (V_{d} - V_{o} - c)}{p_{f-o} (u - c)} & \text{if } \mu = 1 \\ \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{f-o}} + \frac{r c + pc_{f}}{p_{f-o} \delta (u - c)} & \text{if } \mu < 1 \end{array} \right.
+$$
+
+From the results of the above model, it can be seen that:
+
+1. When one currency (legal tender or digital currency) is at its market equilibrium, the other currency can also maintain a certain level of acceptance in the market, so the two currencies can achieve stable coexistence and keep their respective acceptance levels in the long run.
+2. When the first currency has reached its equilibrium and the acceptance of the second is below its theoretical equilibrium level, the low liquidity of the second currency will lead an ever wider population to abandon it; the second currency falls into a vicious circle and is gradually marginalized.
+3. When the first currency has reached its equilibrium and the acceptance of the second is above its theoretical equilibrium level, the second will fluctuate between a stable state of partial approval and full approval until a balance is reached.
+
+Note that, in reality, the storage cost of digital currency is low and it carries no management cost or hyperinflation risk, so its possible equilibrium acceptance threshold may be quite low.
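To make the threshold concrete, the $\lambda = 1$ branch of $\hat{\mu}$ can be evaluated for one hypothetical parameterization (every number below is assumed for illustration, not calibrated):

```python
# Sketch of the equilibrium acceptance threshold mu_hat for digital currency
# (lambda = 1 branch). All parameter values are illustrative assumptions.
def mu_hat(p_of, p_od, p_do, delta, r, c, u, pc_d, V_f, V_o):
    barter  = (1.0 - p_of - p_od) * delta / p_do        # failed-match barter term
    storage = (r * c + pc_d) / (p_do * delta * (u - c))  # discounting and storage cost
    rival   = p_of * (V_f - V_o - c) / (p_do * (u - c))  # competition from fiat money
    return barter + storage + rival

m = mu_hat(p_of=0.5, p_od=0.25, p_do=1.0, delta=0.6,
           r=0.05, c=1.0, u=2.0, pc_d=0.01, V_f=1.6, V_o=0.5)
# Acceptance below m predicts marginalization of the digital currency;
# acceptance above m climbs toward full approval.
```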
+
+# 3.2 A Model for Maximizing Government Tax Revenue at National Level
+
+We assume that the government's goal is to maximize its own revenue over the long term; under this goal, the government is motivated to expand the economy in order to enlarge the tax base and to increase the efficiency of transactions. (In the settings above, economic expansion improves the matching efficiency $e(GDP)$.)
+
+# 3.2.1 National Cost-benefit Analysis
+
+The development of digital money will exogenously increase the proportion of people recognizing digital currency, while wider acceptance of digital currency will in turn contribute to economic development and thus promote the expansion of the tax base.
+
+At the same time, the rise in public acceptance of digital money will reduce the use of fiat money, shifting the government's tax base from fiat currency transactions to digital currency transactions. But overall, as trade develops, the total tax base will increase and the government's current tax revenue will rise.
+
+In terms of transaction costs, digital money has higher liquidity and concealment. The government's regulatory costs will rise with the volume of transactions using digital money, and the widespread use of digital money will deprive the country of its monetary policy controls over the macro economy. This potential systemic risk also needs to be factored into the total cost.
+
+# 3.2.2 Government Intertemporal Tax Model
+
+In the current period ($t_0$), the government's revenue function is:
+
+$$
+g_{t} = t_{f}\, p_{o-f}\, \lambda\, e(GDP(\mu))\, m(i, j) \left| V_{f} - V_{o} - c \right| + t_{d}\, p_{o-d}\, \mu\, e(GDP(\mu))\, m(i, j) \left| V_{d} - V_{o} - c \right| - C(\mu)
+$$
+
+Among them, $t_{f}$ and $t_{d}$ are the tax rates the government applies to transactions settled in the sovereign and the digital currency, respectively. $C(\mu)$ is the government's cost of legislating against tax evasion via digital currency, plus the risk of losing independent monetary policy. Because of $C(\mu)$, the government needs to impose different tax rates on the two payment modes to meet the needs of macro-control; on the other hand, the government bears different regulatory costs for the two forms of payment, which also leads to different tax rates.
+
+GDP will rise as dependence on digital money intensifies (although some of this growth may occur in illegal practices such as black-market transactions), and the increase in total trading volume raises the chance of a successful market match. Since the tax base expands through this total-income effect, total tax revenue grows even as it shifts toward digital money. The effect is as follows:
+
+$$
+\frac{\partial \left[ g_{t} + C(\mu) \right]}{\partial \mu} > 0
+$$
+
+Because willingness to change the means of payment evolves over time, under the goal of maximizing government revenue the tax rates based on the current shares of fiat and digital currency transactions can only be optimized in the long run. We therefore introduce the following intertemporal model:
+
+$$
+\max z = \sum_{t = 0}^{n} \frac{g_{t}}{(1 + r)^{t}}
+$$
+
+In our intertemporal model, the government continuously "induces" transactions toward an optimal mix of payment means by selecting different tax rates in each period, until the discounted sum of government tax revenue, net of supervision and risk costs, is maximized.
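The mechanics of this intertemporal choice can be sketched with a grid search. The revenue function below is a stylized stand-in with assumed parameters, not the paper's calibrated form: acceptance $\mu$ falls as the digital-currency rate rises, each base shrinks with its own rate, and regulation cost grows with $\mu$.

```python
# Hedged sketch of max z = sum_t g_t / (1+r)^t over a pair of tax rates.
# All behavioral responses and magnitudes below are assumptions.
def discounted_revenue(t_f, t_d, r=0.05, periods=20):
    total = 0.0
    for t in range(periods):
        mu = max(0.0, 0.8 - 4.0 * t_d)             # acceptance response (assumed)
        base_f = 100.0 * (1.0 - mu) * (1.0 - t_f)  # fiat tax base (assumed)
        base_d = 100.0 * mu                        # digital tax base (assumed)
        g = t_f * base_f + t_d * base_d - 5.0 * mu  # 5.0*mu plays the role of C(mu)
        total += g / (1.0 + r) ** t
    return total

# Grid search over candidate rate pairs for the revenue-maximizing policy.
grid = [(tf / 100.0, td / 100.0) for tf in range(5, 61, 5) for td in range(5, 61, 5)]
best_tf, best_td = max(grid, key=lambda p: discounted_revenue(*p))
```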
+
+# 3.3 International Digital Currency Trade Model (Based on Improved OCA)
+
+In this section, we extend the use of digital money to the global domain. We will show that digital currency will be widely used to settle international trade, in order to reduce the uncertainty arising from exchange rate fluctuations. Referring to Bacchetta & van Wincoop (2002)'s model of exchange rate fluctuation in import and export pricing, we extend it to a mechanism in which the digital currency, as a third-party currency, stabilizes foreign exchange.
+
+# 3.3.1 Difference of Import and Export Profit under Exchange Rate Fluctuation
+
+Assume that in the world market the supplying and demanding enterprises for a commodity come from different countries. The exporter's cost function is $C(q)$ and the demand function is $D(p)$; $p^p, p^L$ are the product prices expressed in the currencies of the exporting and importing country, respectively. The exchange rate of the exporting country's currency against the importing country's is $\varepsilon$, so the total profits of firms settling in the exporting and the importing country's currency are as follows:
+
+$$
+\prod^ {p} = p ^ {p} D \left(p ^ {p} / \varepsilon\right) - C \left(D \left(p ^ {p} / \varepsilon\right)\right) \tag {4}
+$$
+
+$$
+\prod^ {L} = \varepsilon p ^ {L} D \left(p ^ {L}\right) - C \left(D \left(p ^ {L}\right)\right) \tag {5}
+$$
+
+Suppose the exchange rate fluctuations of the two currencies follow a normal distribution with variance $\sigma^2$; their mean profits can be calculated from the expressions above. Under exchange rate fluctuation, the expected profit difference between the two settlement methods is as follows:
+
+$$
+E U \left(\prod^ {p}\right) - E U \left(\prod^ {L}\right) = \frac {1}{2} U ^ {\prime} \frac {\partial^ {2} \left(\prod^ {p} - \prod^ {L}\right)}{\partial \varepsilon^ {2}} \sigma^ {2} = \frac {1}{2} U ^ {\prime} \frac {\partial^ {2} \left(\prod^ {p}\right)}{\partial \varepsilon^ {2}} \sigma^ {2} \tag {6}
+$$
+
+Therefore, when the currency of the exporting country is used as the unit of settlement, expected profit expands with volatility if $\prod^p$ is a convex function of the exchange rate (second derivative greater than zero). Both countries will then choose the exporter's home currency as the unit of valuation. On this basis, we introduce digital currency as an alternative. Assuming that the exchange rate of the digital currency against the importing country's currency is $\varepsilon_{dig}$, the variance of its fluctuation is $(\sigma^{dig})^2$, and the price in digital currency is $p^{dig}$, the profit function when settling in the digital currency is:
+
+$$
+\prod^ {d i g} = \frac {\varepsilon}{\varepsilon_ {d i g}} p ^ {d i g} D \left(\frac {p ^ {d i g}}{\varepsilon_ {d i g}}\right) - C \left(D \left(\frac {p ^ {d i g}}{\varepsilon_ {d i g}}\right)\right)
+$$
+
+Comparing the expected profits of the three settlement currencies:
+
+$$
+E U \left(\prod^ {d i g}\right) - E U \left(\prod^ {L}\right) = \frac {1}{2} U ^ {\prime} \frac {\partial^ {2} \prod^ {d i g}}{\partial \left(\varepsilon^ {d i g}\right) ^ {2}} \left(\sigma^ {d i g}\right) ^ {2}
+$$
+
+$$
+E U (\prod^ {d i g}) - E U (\prod^ {p}) = \frac {1}{2} U ^ {\prime} \frac {\partial^ {2} \prod^ {d i g}}{\partial (\varepsilon^ {d i g}) ^ {2}} (\sigma^ {d i g}) ^ {2} - \frac {1}{2} U ^ {\prime} \frac {\partial^ {2} \prod^ {p}}{\partial (\varepsilon) ^ {2}} \sigma^ {2}
+$$
+
+The results show that when the exchange rate fluctuation of the digital currency to the importing country is smaller than that between the exporting country and the importing country, the highest expected profit income will be obtained if the digital currency is chosen as the unit of valuation. Therefore, when the uncertain risk of digital currency can be effectively controlled, the choice of digital currency can promote effective trade between countries.
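A small Monte Carlo experiment illustrates the role of the variance term in equation (6). The linear demand and cost below are assumptions made for the sketch; with them, exporter-currency profit is nonlinear in the exchange rate, so by Jensen's inequality its expectation shifts with exchange-rate variance, and a lower-volatility digital settlement shrinks that shift.

```python
# Monte Carlo sketch of expected exporter profit under exchange-rate risk.
# Functional forms and all numbers are illustrative assumptions.
import math
import random

def profit_exporter_ccy(eps, p_p=3.0):
    # Pi^p = p^p * D(p^p/eps) - C(D(p^p/eps)) with D(p) = 10 - 2p, C(q) = q
    q = max(0.0, 10.0 - 2.0 * (p_p / eps))
    return p_p * q - 1.0 * q

random.seed(0)

def expected_profit(sigma, n=50_000):
    # eps drawn lognormal so the exchange rate stays positive
    return sum(profit_exporter_ccy(math.exp(random.gauss(0.0, sigma)))
               for _ in range(n)) / n

e_sovereign = expected_profit(sigma=0.30)  # volatile bilateral exchange rate
e_digital   = expected_profit(sigma=0.05)  # stable digital-currency rate

# With this (concave) profit, the low-volatility settlement earns the higher
# expected profit, mirroring the sigma^2 term in equation (6).
assert e_digital > e_sovereign
```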
+
+# 3.3.2 Supranational Monetary System Model at the World Level
+
+Mundell (1961) and McKinnon (1963) put forward the idea that groups of countries with similar economic development patterns should abandon issuing sovereign currencies and establish supranational central banks (the theory of the optimal currency area, OCA).
+
+However, in current practice there are still great differences in the actual economic situations of countries, and a scheme that directly abandons each country's legal tender ignores the importance of independent monetary policy to the domestic economy.
+
+The supranational monetary system to be established will maintain the parallel, two-track state of the global digital currency and sovereign currencies. Moreover, the high liquidity of digital money gives the unified issuing department of the global common currency a more precise and flexible means of regulation and control. Using digital currency as the form of international circulation preserves each member country's independent choice of monetary policy, and is compatible with the mutual restraints among countries.
+
+# 3.3.2.1 Basic Hypothesis, Mechanism Analysis and Model Construction
+
+Assume that:
+
+Countries still issue and use sovereign currencies $f_{i}$ ($i = 1, 2, \dots, n$) at home; the money supply in each country is $M(f_{i})$, and each unit is worth $V(f_{i})$.
+
+The international digital money is issued by a financial institution at the international level, which allocates it to central banks in proportion to each country's needs; the central banks then put it into circulation separately. Its supply is expressed as $M(d_{i})$ ($i = 1, 2, \dots, n$).
+
+At the same time, the digital currency will circulate globally, and its value will remain stable around the globe (i.e. its exchange rate between any two countries will always be 1). The speed of its circulation between countries will also remain unchanged, namely:
+
+$$
+v(d) \equiv v\left(d_{ij}\right) \quad (i \neq j)
+$$
+
+The goal of supranational financial institution is to keep the value of digital money stable, with low inflation rate, following Fisher's equation:
+
+$$
+PY = Mv
+$$
+
+It also estimates actual global output to determine the total supply of the global digital currency at a given velocity level:
+
+$$
+\sum M\left(d_{i}\right) = \frac{PY}{v}
+$$
+
+From the results of the model in 3.3.1, when the exchange rate volatility of the universal digital currency is lower than that between individual countries (the total value of the global digital currency in circulation is nearly constant, which helps keep exchange rates relatively stable), every country in the system has a motivation to adopt the digital currency in foreign trade, while the domestic situation follows the search and matching model of 3.1.
+
+The country's total demand for digital money comes from two sources: the demand for imported goods and services $Y_{imp}$ , and the total demand for investment in all other countries $CF_{-i}$ . Also, the actual demand for digital money depends on the aggregate demand and the speed of circulation:
+
+$$
+Dd_{i} = \sum_{j = 1, j \neq i}^{n} \left[ Y_{imp} / \varepsilon_{dig} + CF_{ij} \right] / v\left(d_{ij}\right)
+$$
+
+Where $Dd_{i}$ is country $i$'s demand for the global digital currency. Note that since consumer goods are denominated in local currency, their prices need to be converted via the exchange rate $\varepsilon_{dig}$.
+
+In order to maintain the stable value of the digital currency, the supranational financial institution will supply currency close to the quantity demanded, and for each country it will supply digital currency in proportion to that country's share of demand:
+
+$$
+M \left(d _ {i}\right) \approx D d _ {i}
+$$
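With hypothetical trade and investment figures, the demand identity and the proportional supply rule look like this:

```python
# Sketch of Dd_i = sum_j [Y_imp / eps_dig + CF_ij] / v(d_ij) and the issuer
# matching each country's supply to its demand. All figures are hypothetical.
def digital_demand(imports, eps_dig, capital_outflows, velocity):
    """One term per trading partner j: imports converted at eps_dig plus
    capital flows, divided by the circulation velocity."""
    return sum((y / eps_dig + cf) / velocity
               for y, cf in zip(imports, capital_outflows))

dd_a = digital_demand(imports=[120.0, 80.0], eps_dig=2.0,
                      capital_outflows=[10.0, 5.0], velocity=4.0)
dd_b = digital_demand(imports=[60.0], eps_dig=2.0,
                      capital_outflows=[20.0], velocity=4.0)

supply = {"A": dd_a, "B": dd_b}     # M(d_i) ~ Dd_i, country by country
total_issue = sum(supply.values())  # total digital currency to be issued
```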
+
+# 3.3.2.2 Analysis on the Behavior Patterns of Each Subject
+
+a. The supply of international digital money must be stable to ensure that the exchange rate of the digital currency against sovereign currencies is not volatile, so the supranational financial institution should set inflation targets, that is:
+
+$$
+\frac{d \ln \left[\sum M(d_{i})\right]}{dt} = \pi_{target}^{t} \in \left[\pi_{l}^{t}, \pi_{h}^{t}\right]
+$$
+
+$\pi_{l}^{t}, \pi_{h}^{t}$ define the range set for inflation; the rate in the international market should generally be kept between 1.5 and 2.5 per cent (Jian-Feng Liu, 2010). Since inflation expectations are an important determinant of the inflation rate itself, when the real interest rate is known, setting a reasonable nominal interest rate will guide the market toward the target inflation rate. Thus the nominal interest rate should be set as $R_{d} = r_{d} + \pi_{target}^{t}$, according to the real interest rate of the universal digital currency, which favors the control of inflation.
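As a numerical sketch of this rule (the rates below are assumed for illustration; the 1.5-2.5% band follows the text):

```python
import math

# R_d = r_d + pi_target, plus a check that digital-money growth
# d ln(sum M(d_i))/dt stays inside the inflation target band.
def nominal_rate(real_rate, pi_target):
    return real_rate + pi_target

def growth_in_band(m_prev, m_now, lo=0.015, hi=0.025):
    g = math.log(m_now / m_prev)  # log growth approximates d ln M / dt
    return lo <= g <= hi

R_d = nominal_rate(real_rate=0.01, pi_target=0.02)  # 3% nominal rate
ok = growth_in_band(1000.0, 1020.0)                 # ~1.98% growth: in band
```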
+
+On the other hand, the supranational financial institution needs to evaluate and regulate the trade balance of each country. It also needs to control the surplus or deficit of the net capital outflow / inflow of a country:
+
+$$
+\Delta M\left(d_{i}\right) = \left(\alpha\, NX_{-i} - \beta\, NCF_{-i}\right) / v\left(d_{i}\right)
+$$
+
+Here, $\alpha, \beta \in [0,1]$ are weight coefficients on the country's trade surplus and net capital outflow, respectively.
+
+Generally speaking, after a trade surplus arises, capital will keep flowing out so that it is fully utilized, so in this case the supranational financial institution does not need to adjust the money supply to that country. If the country has a trade deficit and capital inflow, its economic expectations are more pessimistic, so the supranational financial institution will reduce its money supply and regulate the country's deficit in order to keep international trade running smoothly.
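+
+The adjustment rule can be sketched as follows (weights and figures are hypothetical): a trade deficit together with a larger net capital inflow produces a negative adjustment, i.e. the institution withdraws supply from that country.
+
+```python
+# Delta M(d_i) = (alpha * NX_i - beta * NCF_i) / v(d_i); hypothetical values.
+def supply_adjustment(nx, ncf, v, alpha=0.5, beta=0.5):
+    return (alpha * nx - beta * ncf) / v
+
+# deficit (nx < 0) with inflow (ncf < 0, smaller in magnitude): cut supply
+delta_m = supply_adjustment(nx=-10.0, ncf=-4.0, v=2.0)
+```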
+
+b. At the national level, given the existence of a universal digital currency in international trade, countries will focus on the balance of payments when formulating foreign trade strategies. The share system of digital currency allocated by the supranational financial institution will make countries balance the relationship between capital flows and trade surplus (or deficit). Suppose a country has a large trade deficit and no domestic investment opportunities; in order to keep attracting foreign funds, the state will lose the power to issue a generally accepted currency. Therefore, consciously adjusting the balance of payments will be an important goal of every central bank.
+
+Central banks will also issue their own sovereign currencies and maintain a floating exchange rate against the general digital currency at each point in time:
+
+$$
+\varepsilon_{dig, i}^{t} = V \left(f_{i}\right) / V \left(d_{i}\right)
+$$
+
+Because inflation levels and real interest rates differ across countries, according to purchasing power parity (PPP) theory the exchange rate will change with the real interest rates and inflation rates at home and abroad, and the PPP assumption is realized more effectively by digital money because of its high liquidity and wide global acceptance. At this time there is:
+
+$$
+\varepsilon_{dig, i}^{t} = \frac{1 + r_{f, i}^{t}}{1 + r_{d}^{t}} \cdot \frac{1 + \pi_{f, i}^{t}}{1 + \pi_{d}^{t}} \varepsilon_{dig, i}^{t_{0}}
+$$
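+
+The PPP update above can be checked numerically (a sketch with hypothetical rates; it simply scales $\varepsilon^{t_0}$ by the ratios of gross interest and gross inflation factors):
+
+```python
+# eps_t = (1+r_f)/(1+r_d) * (1+pi_f)/(1+pi_d) * eps_t0
+def ppp_rate(eps0, r_f, r_d, pi_f, pi_d):
+    return (1 + r_f) / (1 + r_d) * (1 + pi_f) / (1 + pi_d) * eps0
+
+eps_t = ppp_rate(eps0=1.0, r_f=0.02, r_d=0.01, pi_f=0.03, pi_d=0.02)
+```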
+
+Each country can manage the inflation and nominal interest rate of its sovereign currency independently, realizing the independence of monetary policy. However, changes in the interest and inflation rates of a national currency will also cause the exchange rate between the national currency and the general digital currency to fluctuate. This kind of exchange rate fluctuation can promote the export of domestic commodities priced in the national currency and thus realize different trade strategies.
+
+Differences in real interest rates across countries also lead to differences in the inflow of general digital currency capital to different countries.
+
+In underdeveloped areas, or in domestic projects with better investment opportunities, the relatively high real interest rate (net of the risk premium) and the wide application of a unified, universally recognized digital currency mean that capital will rapidly enter the country and occupy investment opportunities, which helps distribute capital evenly among the more profitable projects and increases global investment efficiency.
+
+# 4 Empirical Model based on Algebraic Operation
+
+# 4.1 Empirical Study Based on Micro-model
+
+# 4.1.1 Theoretical Restatement
+
+Based on the theoretical model above, we can predict the future currency choices of 130 countries. Since Zimbabwe is now one of the countries that has given up autonomous currency issuance, its probability of accepting digital money will most likely be $100\%$ when digital currencies emerge. So, we assume that Zimbabwe is now in equilibrium with foreign currencies. Thus, the probability of Zimbabwe adopting legal tender satisfies the expression below:
+
+$$
+\hat{\lambda} = \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{f-o}} + \frac{rc + pc_{f-o}}{p_{f-o} \delta (u - c)} + \frac{p_{o-d} (V_{d} - V_{o} - c)}{p_{f-o} (u - c)}
+$$
+
+After calculating this critical value $\hat{\lambda}$, we use it to predict the use of legal currency in other countries. If a country's probability of accepting legal tender is lower than $\hat{\lambda}$, the country cannot reach the legal-tender equilibrium, so even if it currently uses legal tender fully, that will not last long. Conversely, if a country's acceptance of legal tender is equal to or greater than $\hat{\lambda}$, it will end up in the legal-tender equilibrium and will fully accept its legal tender in the future.
+
+To measure the threshold for the probability of fully adopting digital currencies, we refer to the situation of the U.S. Since the dollar is the most widely accepted currency in the world today, we assume that the probability of acceptance of the dollar by the American people is $100\%$ and that America is now in the legal-tender equilibrium. On the other hand, since the U.S. is the birthplace of many of the existing digital currencies, its attitude towards them should be rather objective. So, we use the probability that American people accept digital currency as the threshold value for the digital-currency equilibrium. Thus, the probability of the adoption of digital currency in the United States satisfies the following expression:
+
+$$
+\hat{\mu} = \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{d-o}} + \frac{rc + pc_{d-o}}{p_{d-o} \delta (u - c)} + \frac{p_{o-f} (V_{f} - V_{o} - c)}{p_{d-o} (u - c)}
+$$
+
+Therefore, if the acceptance probability of digital currency in a country is lower than that of America $(\hat{\mu})$, the country cannot reach the digital-currency equilibrium and the digital currency cannot circulate there for long. If the probability is equal to or greater than $\hat{\mu}$, the country will end up in the digital-currency equilibrium, and the digital currency will be fully accepted after a period of time.
+
+For the rest of the world, the probability of accepting legal tender or digital money is less than $100\%$. From the theory above, the acceptance probabilities of legal and digital currency in these countries satisfy:
+
+$$
+\lambda = \frac{\left(1 - p_{o-f} - p_{o-d}\right) \delta}{p_{f-o}} + \frac{rc + pc_{f-o}}{p_{f-o} \delta (u - c)}
+$$
+
+$$
+\mu = \frac{(1 - p_{o-f} - p_{o-d}) \delta}{p_{d-o}} + \frac{rc + pc_{d-o}}{p_{d-o} \delta (u - c)}
+$$
+
+By comparing $\lambda, \mu$ with the probabilistic threshold $\hat{\lambda}, \hat{\mu}$ , we can find out which countries will adopt a new digital currency in global circulation and which countries will not.
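+
+The comparison can be sketched as one shared expression plus a decision rule (all parameter values below are hypothetical; setting `cross_p` and `v_gap` to zero gives $\lambda$ or $\mu$, while $\hat{\lambda}$, $\hat{\mu}$ add the cross-currency term):
+
+```python
+# Shared skeleton of lambda / mu / lambda-hat / mu-hat from the text.
+def acceptance(p_of, p_od, delta, rc, pc, p_xo, u, c, cross_p=0.0, v_gap=0.0):
+    return ((1 - p_of - p_od) * delta / p_xo
+            + (rc + pc) / (p_xo * delta * (u - c))
+            + cross_p * v_gap / (p_xo * (u - c)))
+
+def adopts(prob, threshold):
+    # decision rule: the equilibrium is reached iff prob >= threshold
+    return prob >= threshold
+
+lam = acceptance(p_of=0.3, p_od=0.1, delta=2.0, rc=0.05, pc=0.02,
+                 p_xo=0.4, u=1.0, c=0.2)
+```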
+
+# 4.1.2 Parameter Estimation and Result Solution
+
+In order to obtain the probability of acceptance of digital currency or legal tender in each country, we need to estimate the relevant parameters of the expressions from real data. Among them, $u$ indicates the benefit each country gains through purchasing goods; to simplify calculation without loss of accuracy, we assume it to be 1. The other parameters are estimated as follows:
+
+Table 2 Estimation parameter interpretation
+
+| Parameter | Estimation method |
| $r$ | Nominal interest rate |
| $pc_{f-o}$ | Legal tender inflation rate |
| $pc_{d-o}$ | Digital currency inflation rate |
| $c$ | 1 − national average profit margin |
| $\delta$ | Logarithm of gross domestic product |
| $p_{o-f}$ | Legal tender's share in the world monetary system |
| $p_{o-d}$ | Bitcoin's share in the world monetary system |
+
+After estimating the parameters, we substitute them into the corresponding expressions and obtain the probability of each country's acceptance of digital money. Comparing these probabilities with the critical values, we obtain the attitudes of 130 countries toward the adoption of digital currency and legal tender. We show this result later.
+
+To further verify the influence of these parameters and the reliability of the model results, we then use econometric methods to test the effect of the parameters in the mathematical expressions on the acceptance of currencies.
+
+# 4.1.3 Empirical Model Based on Econometric Methods
+
+# Determinants of aggregate demand
+
+In this section, we construct an empirical model and run a regression analysis of the factors that affect the demand for digital money, as a verification and supplement to the results of the previous section. As mentioned earlier, public demand for digital money $M_{dig}^{d}$ can be broken down into transactional demand $M_{dig}^{business}$ and speculative demand $M_{dig}^{speculate}$. It cannot be ignored that risk factors affect the efficiency with which digital currency assists the circulation of goods and the storage of value, and thus affect total monetary demand.
+
+We define the risk factor as $RI$ (RISK), so that the function of total monetary demand takes the following form:
+
+$$
+M_{dig}^{d} = f \left(M_{dig}^{business}, M_{dig}^{speculate}, RI\right)
+$$
+
+That is, the aggregate demand for digital money $M_{dig}^{d}$ is a function of transaction demand $M_{dig}^{business}$ (without considering risk), speculative demand $M_{dig}^{speculate}$ (without considering risk), and the risk $RI$ itself.
+
+# Determinants of transactional demand
+
+According to the Fisher equation $PY = M\nu$, the transaction demand $M_{dig}^{business}$ depends on the price level $P$, output level $Y$ and currency circulation velocity $\nu$ of a country.
+
+Because digital currency circulates globally, we make the reasonable assumption that its circulation velocity $\nu$ is the same worldwide, so its influence on each country's monetary demand is a constant. As a result, cross-country differences in transactional currency demand are affected only by the output level $Y$ and the price level $P$. In the riskless case, the following function holds:
+
+$$
+M_{dig}^{business} = M_{1} (P, Y)
+$$
+
+# Determinants of speculative demand
+
+The current trading volume $TR$ (TRADE) of digital currency expresses the public's speculative demand well, and the degree of public interest $IN$ (INTEREST) in digital currency reflects potential speculative demand. Thus, the following function holds for the riskless speculative demand $M_{dig}^{speculate}$ for digital currency:
+
+$$
+M_{dig}^{speculate} = M_{2} (TR, IN)
+$$
+
+# Determinants of risk
+
+We decompose the risk into three factors: legal risk $RI_{1}$, technical risk $RI_{2}$ and price risk $RI_{3}$.
+
+Legal risk is determined by the attitude of a country's authorities toward digital currencies, which we call $LE$ (LEGISLATION); it is an ordinal variable (digital currency prohibited, development restricted, strictly controlled, attitude unclear, laissez-faire).
+
+Technical risk is influenced by the transaction volume $TR$ and the degree of network security $SF$ (SAFETY) of a country: as the trading volume of digital currency increases, the possibility of technical failure increases and secrecy becomes harder to maintain, so we expect transaction volume to be positively correlated with risk. The higher a country's level of network security, the lower the possibility of technical risk, so the two are negatively correlated. Price risk is expressed entirely by the uncertainty of the digital currency's price, which we denote $\Delta PR$ ($\Delta PRICE$). So, we get the following expression:
+
+$$
+RI = RI \left(LE, TR, SF, PR\right)
+$$
+
+# Construction of the regression equation
+
+The aggregate demand equation can be further expanded as follows:
+
+$$
+M_{dig}^{d} = f \left(M_{dig}^{business}, M_{dig}^{speculate}, RI\right) = f \left(M_{1} (P, Y), M_{2} (TR, IN), RI (LE, TR, SF, PR)\right)
+$$
+
+Simplifying further, we get
+
+$$
+M_{dig}^{d} = f (P, Y, TR, IN, LE, SF, PR)
+$$
+
+Recall that the variable $\mu$ calculated in the previous section is a country's public acceptance of digital money, which largely measures the public's demand for it. We therefore construct the following multivariate linear regression model:
+
+$$
+\mu_{i} = \beta_{0} + \beta_{1} P_{i} + \beta_{2} Y_{i} + \beta_{3} TR_{i} + \beta_{4} IN_{i} + \beta_{5} LE_{i} + \beta_{6} SF_{i} + \beta_{7} PR_{i} + O_{i}
+$$
+
+The dependent variable is the public acceptance of digital money $\mu$. The regressors are the price level $P$, output level $Y$, digital currency trading volume $TR$, degree of public concern $IN$, national attitude $LE$, degree of network security $SF$ and price fluctuation of digital currency $PR$. $O_{i}$ is the random error term. The subscript $i$ indicates that each country is an independent observation.
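+
+A minimal sketch of this estimation step on synthetic data (NumPy's least-squares solver stands in for the Stata OLS run reported below; the coefficients and data here are made up):
+
+```python
+import numpy as np
+
+# 53 synthetic observations, seven regressors (P, Y, TR, IN, LE, SF, PR)
+# plus a constant, mirroring the regression equation above.
+rng = np.random.default_rng(0)
+n = 53
+X = rng.normal(size=(n, 7))
+beta_true = np.array([1.0, 0.007, 0.01, 0.06, 0.005, 0.01, -0.03, -0.02])
+A = np.column_stack([np.ones(n), X])            # prepend the constant column
+mu = A @ beta_true + 0.01 * rng.normal(size=n)  # mu_i with a small error term
+beta_hat, *_ = np.linalg.lstsq(A, mu, rcond=None)
+```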
+
+# Results of regression estimation
+
+We use Bitcoin as a proxy for digital currency (imperfect in many respects, but the best approximation available so far), the consumer price index (CPI) as the price level, and GDP as the output level. Bitcoin transactions $TR$ in all markets of each country over the last 24 hours (in dollars; Coinhills.com) measure trading volume; the 24-hour Bitcoin search heat from Google Trends measures the degree of public concern; and the national ranking in the GCI (Global Cybersecurity Index) measures the degree of cyber security. Finally, we use Bitcoin's percentage fluctuation against national currencies within 24 hours as $PR$. With the help of Stata, we estimate the coefficients of the equation by ordinary least squares (OLS). The results are as follows:
+
+Table 3 Regression results
+
+| Variables | Constant | P | Y | TR | IN | LE | SF | PR |
| Coefficient | 1.017*** | 0.00697* | 0.0106 | 0.0632 | 0.00469*** | 0.0113 | -0.0295 | -0.0185 |
| Std. error | (0.0655) | (0.00413) | (0.00640) | (0.0480) | (0.000378) | (0.0101) | (0.0315) | (0.0285) |
| Observations | 53 | | | R-squared | 0.282 | | | |
+
+Standard errors in parentheses
+*** p<0.01, ** p<0.05, * p<0.1
+
+Due to the lack of data, we could only obtain sample data from 53 countries, but the OLS estimates are basically in line with expectations. The regression $R^{2}$ is 0.282, which indicates that the explanatory variables account for a meaningful share of the variation in digital currency demand. $P$ and $IN$, important drivers of transaction demand and speculative demand, are significant at the 0.1 and 0.01 levels respectively. The three risk-measurement variables are statistically insignificant (which may be related to the small sample size), but their signs have clear economic implications: a relaxation of a country's legal attitude boosts the demand for digital money; each drop in a country's GCI ranking (a higher rank means better security) reduces demand by $2.95\%$; and an increase of $1\%$ in price volatility reduces demand by $1.85\%$.
+
+# 4.1.4 The Final Results of the Empirical Model
+
+In the previous section we obtained the probability of each country's acceptance of digital money, and the regression model verified and supplemented that result. Comparing the probabilities with the critical values, we obtain the attitudes of 130 countries toward the adoption of digital currency and legal tender. The result is shown in the figure below:
+
+
+Figure 3: World Monetary Equilibrium
+
+From the chart, we can see that when the new digital currency appears, major economies such as the United States and the European Union will adopt a relatively friendly attitude. These countries tend to occupy a dominant position in international trade: the more convenient the currency, the lower the transaction costs and the more favorable their participation in international trade, and the same holds for the digital currency. Compared with countries in other parts of the world, these countries' legal tender performs better, and actively adopting digital currencies can also help them gain more trade benefits. Therefore, this part of the world will preserve the independence of their legal tender while fully accepting digital currencies.
+
+Eastern countries, such as Russia and India, as well as some countries in the Americas, will adopt a relatively conservative attitude and will not accept digital money. From a practical point of view, these countries have a high degree of control over their own currencies, and in order to avoid the impact of digital currencies on their own currencies, they tend to restrict the circulation of digital money at home.
+
+African countries and parts of Asia and South America will fully adopt the digital currency and abandon their own legal tender. The reason may be that these governments have less control over their own currencies, and for their economic development it is better to accept a widely recognized universal digital currency than to hold more volatile domestic currencies.
+
+# 4.2 Empirical Study Based on Macro-model
+
+Based on the macro model, we use Monte Carlo simulation to estimate the trading behavior of a country under a dual monetary system. We classify the international trade market into two categories. One is the fiat currency market, whose operating mechanism is the same as the current currency market: commodity trade is carried out by converting currencies at exchange rates, the monetary policies of both countries directly affect the exchange rates, and only after a monetary or interest rate policy is implemented do the two parties learn the changes in currency values and exchange rates, so exchange rate volatility is large. The other is the digital currency market, whose operating mechanism follows the macro model above: countries trade with digital money, there is no currency exchange in international trade, and the monetary and interest rate policies of the supranational financial institution are announced ahead of time. Each country can adjust its own interest rate policy or fiat monetary policy according to the policies announced by the supranational financial institution, which helps keep the exchange rate relatively stable.
+
+We fit the trade situation of the United States from 1994 to 2016 and use the fitted function to forecast the total trade volume of the United States over the next ten years. If a digital currency acceptable to the whole world exists at that time, the United States will trade in both the fiat currency market and the digital currency market. The initial ratio is set randomly, and each year's trading ratio between the two currency markets is determined by the previous year's ratio and the change in exchange rates.
+
+$$
+z_{t} \left(z_{t - 1}, \Delta \varepsilon_{dig}, \Delta \varepsilon_{f}\right) = \frac{Dd_{t} \times v (d)}{Y_{t}}
+$$
+
+$z_{t}$ is the proportion of United States trade conducted in the digital currency market to total trade in both markets in year $t$; $\Delta \varepsilon_{dig}$ is the fluctuation of the digital currency's exchange rate, which obeys a normal distribution; $\Delta \varepsilon_{f}$ is the fluctuation of the fiat exchange rate, which follows the fluctuations of previous years. After simulating with different initial parameters, we obtain the evolution of the digital currency market share over the next ten years. Through simulation, we find that the market share of digital currency stabilizes at about $65.21\%$ regardless of the initial trading proportion.
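+
+The convergence can be illustrated with a toy mean-reverting Monte Carlo (our own stand-in dynamics, not the paper's fitted update rule; the long-run level is set by hand to mirror the reported $65.21\%$):
+
+```python
+import random
+
+# The digital-market share z_t drifts toward a long-run level regardless of
+# the initial share, as in Figure 4. The speed/vol values are hypothetical.
+random.seed(42)
+
+def simulate_share(z0, target=0.65, speed=0.3, vol=0.02, years=10):
+    z = z0
+    for _ in range(years):
+        z += speed * (target - z) + random.gauss(0.0, vol)
+        z = min(max(z, 0.0), 1.0)  # a market share stays in [0, 1]
+    return z
+
+finals = [simulate_share(z0) for z0 in (0.1, 0.5, 0.9)]
+```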
+
+
+Figure 4: Market Share of Digital Currency in the United States (stable state)
+
+But when the supranational central bank loses its regulatory capacity, the market share of digital currency will decline rapidly, even falling below its initial share.
+
+
+Figure 5: Market Share of Digital Currency in the United States (imbalance state)
+
+# 4.3 Further Discussion and Application of Factors Outside the Model
+
+# - Impact on Commercial Banks
+
+We are concerned with the impact on the banking industry after digital money enters the market. The transaction form of digital money greatly improves the efficiency of search and matching, so banks that previously acted as trade intermediaries will gradually withdraw from this service area. The main businesses of commercial banks, deposits and loans, will continue to exist, but some trends are likely:
+
+The scale effect of Internet finance will push commercial banking away from the coexistence of community banks, city banks and large banks toward consolidation at the international level: informatization will reduce the costs of big banks and accelerate the expansion of their market share, while the corners they cannot cover will be served by convenient digital currency transactions.
+
+Banks will rely more on innovative financial derivatives. Digital currency transactions only upgrade the instruments, while a large number of financial services will still require banks as operators. Outside large banks and other financial institutions, progress in digital currency transactions has reduced the costs of micro and informal lenders and made their transactions harder to detect, so the regulatory problems of financial derivatives will become more prominent.
+
+# - Exogenous Digital Currency Risk and Supervision
+
+The reliability and privacy of digital currency deserve public attention. At present, the service system only recognizes the public and private keys based on blockchain technology and does not change the means of verification. On the one hand this strengthens interconnection, but a lost account password also means permanently losing control of the digital currency in the account, and the decentralization of the accounting method makes manual modification difficult and unable to fully adapt to the needs of complex reality. On the other hand, every transaction in the network is published in the ledger of all accounts; once the identity of an account holder is discovered in reality, the leakage of transaction information is difficult to remedy. By contrast, the state's cost of regulating and identifying particular populations has increased, which will correspondingly increase the cost of holding digital currency $pc_{d}$ and reduce the acceptance of digital money $\hat{\mu}$.
+
+# - Independence of Monetary Policy
+
+The two-track monetary system makes it easier and faster to circumvent national monetary policy at the individual level. After the introduction of the universal digital currency, if quantitative easing makes the value of a national currency unstable, foreign capital will quickly move out of that currency. In the search-and-matching model established in this paper, the proportion using digital currency then increases rapidly. Eventually, the proportion of settlements in legal tender declines, the credibility of the government falls, and the local currency is progressively excluded from the payment market.
+
+# - State Control over Foreign Investment
+
+The infant-industry theory holds that developing countries need trade protectionism in the early stage of infant industries and should join international competition only once those industries have developed. Although this argument has been criticized by liberal economists, many countries follow it: countries are often afraid of foreign control over domestic infant, arms and infrastructure industries that are closely tied to national interests. The global liquidity of digital money therefore means that a state can no longer protect important industries through restrictions on foreign capital and must instead intervene with direct administrative measures.
+
+# - Gaps in the Economic Base
+
+The establishment of the Eurozone is a great pioneering practice of optimal currency area theory, but its past 20 years show that economic differences among countries cannot be compensated by the rapid capital flows of the theoretical model. In model 3.3.2, globalized digital money does not fully follow the mechanism of capital flowing to areas with higher nominal interest rates, because less developed areas may carry a high risk premium that cannot be directly observed. In fact, the capital entering the underdeveloped countries of the euro area has mainly been speculative and has failed to fully promote convergence of the underdeveloped economies.
+
+# 5 Conclusion
+
+# 5.1 Strengths and Weaknesses
+
+# 5.1.1 Strengths
+
+- The core of our model is the acceptability of digital money for individuals and manufacturers, which is the key to whether the accounting voucher produced with blockchain technology is used as currency. Therefore, our models are flexible and compatible: from other analytical perspectives, more factors can be added to perfect them (for example, the exogenous factors in Section 4.3).
+- Based on the DMP model, we explain the relationship between each country's fiat currency and the digital currency in global circulation. On this premise, we construct the influence mechanism that determines a country's attitude towards digital currency, which can effectively explain some government monetary policies.
+- Sufficient theoretical preparation was made for our models. Combining the Fisher equation with Keynes's theory of money demand, we make scientific hypotheses on the flow mechanism of digital currency, and before establishing the supranational currency system we demonstrate the prerequisites for using digital currency in international trade. These reflect the logical rigor of the model.
+
+# 5.1.2 Weaknesses
+
+- The framework of the model is relatively large and the sub-models are relatively loosely connected, which is not conducive to linkage analysis.
+- Because a global decentralized digital financial market does not exist in the real world, our model has a relatively weak realistic foundation, which may limit its ability to explain real data. At the same time, the theory of currency choice constructed at the macro level is relatively abstract; there is room for further improvement.
+
+# 5.2 Conclusion
+
+In this paper, we establish a trade-off model between digital currency and fiat currency at the personal, domestic and international levels. In the domestic part, a dynamic equilibrium between the fiat currency and the digital currency is determined by simplifying real market transactions; it yields an interval in which the ratio of use of the two currencies tends to stabilize.
+
+Next, we make an empirical analysis of the model. We set the model parameters according to the relevant economic theory and verify their reliability by OLS regression. This lets us judge the use of fiat or digital currency in major countries around the world when a new digital currency emerges. The results show that a country with a dominant position in international trade and a more stable domestic economy is more likely to adopt a monetary policy using both currencies. A country with a chaotic domestic economy and an unstable currency tends to abandon its original currency (or the proportion of people who recognize the fiat currency falls to a very low level). On the other hand, if a country has a firm attitude toward domestic monetary control, a lower degree of marketization or a more conservative outlook, it will tend to recognize fiat currency only.
+
+In the international model, we verified that with widespread recognition of digital currency and strict inflation controls, the risk of digital currency is manageable, and a smoothly functioning digital currency will play a major role in international trade. Under this conclusion, the central banks of various countries retain the power to issue currency; they can adjust the proportion of payments between the two currencies and formulate appropriate monetary policy (for example, the central bank can use quantitative easing to stimulate exports). At the same time, the supranational monetary system will force countries to limit their own liabilities and balance imports and exports. The model also supports a conclusion of optimal currency area theory: digital monetary capital flows rapidly into projects with higher real interest rates, promoting growth in the entire economy.
+
+# 5.3 Future Work
+
+In the future, factors that have not yet been reflected in the model should be considered. We need an objective evaluation system to add these factors into the overall analysis framework and enhance the model's ability to explain reality. By calculating a large number of parameters in the supranational currency model, we can analyze the possibility of using a universal, decentralized digital currency in international trade.
+
+We hope to integrate the existing loose models and use a single basic model to analyze the trade-offs of using different currencies from microeconomics to international trade. At the international level, we want to know whether fiat currency will develop a new evolution mechanism.
+
+# Reference
+
+[1] Bacchetta P, van Wincoop E. A Theory of the Currency Denomination of International Trade [J]. Journal of International Economics, 2005, 67.
+[2] Beck T. Financial Development and International Trade: Is There a Link? [J]. Journal of International Economics, 2002, 57(1): 107-131.
+[3] Cheng Liu, Dong-feng Wang, Zhi-Wei Liu. Theoretical Analysis of the Optimum Currency Area of Regional Economic Integration [J]. Economic Survey, 2006(3): 53-56; Chong Liu. Trade Development, Financial Development and Monetary Internationalization [D]. Jilin University, 2007.
+[4] Corbae D, Wright T R. Directed Matching and Monetary Exchange [J]. Econometrica, 2003, 71(3): 731-756.
+[5] Ciaian P, Rajcaniova M, Kancs D. The Economics of Bitcoin Price Formation [J]. EERI Research Paper Series, 2014, 48(19): 1799-1815.
+[6] Demopoulos G, Yannacopoulos N. Conditions for Optimality of a Currency Area [J]. Open Economies Review, 1999, 10(3): 289-303.
+[7] Hao Chen. Economic Analysis of Bitcoin [D]. Zhejiang University, 2015.
+[8] Hao-yuan Sun, Zu-Yan Yang. Research on Competitiveness of Non-statutory Digital Money Based on Complete Competitive Market [J]. Shanghai Finance, 2016(9): 27-34.
+[9] Hayek F. Denationalization of Money [M]. New Star Press, 2007.
+[10] Jian-Feng Liu. A Theory of Dual Currency Regions and International Public Money: One Common Area, Two Commodity Markets, Two Monetary Systems [J]. Economics (Quarterly), 2010, 9(3): 985-1006.
+[11] Keynes J M. The General Theory of Employment, Interest and Money [M]. 2009.
+[12] Paul Krugman. "O Canada: A Neglected Nation Gets Its Nobel". Slate, Oct 19, 1999.
+[13] Qian Yao. Analysis of Digital Money Economy [J]. New Financial Review, 2018(04): 68-89.
+[14] Stephanie Lo, J. Christina Wang. "Bitcoin as Money?" Current Policy Perspectives, Federal Reserve Bank of Boston, 2014.
+[15] Tschorsch F, Scheuermann B. Bitcoin and Beyond: A Technical Survey on Decentralized Digital Currencies [J]. IEEE Communications Surveys & Tutorials, 2016: 1-1.
+[16] https://www.coinhills.com/
+[17] https://www.itu.int/dms_pub/itu-d/opb/str/D-STR-GCI.01-2017-R1-PDF-E.pdf
+
+# Appendix
+
1. Total Trade Volume of the United States
+
+
+
+# 2. Fitting code
+
```matlab
% Fit a quadratic trend to the US trade-volume series and extrapolate to 2027
a = load('trade.txt');            % column 1: year, column 2: trade volume
b = polyfit(a(:,1), a(:,2), 2);   % quadratic least-squares fit
y(:,1) = 2018:2027;               % forecast years
y(:,2) = polyval(b, y(:,1));      % forecast values
c(:,1) = 1990:0.1:2027;           % dense grid for a smooth fitted curve
c(:,2) = polyval(b, c(:,1));
plot(c(:,1), c(:,2), 'r')
hold on
plot(a(:,1), a(:,2), 'b*');       % observed data points
title('Total Trade Volume of the United States')
xlabel('year')
ylabel('Trade Volume')
```
+
+# 3. Drawing code
+
```matlab
% Color each country on a world map according to its category (1-4) in SJ.txt
load state.mat                            % country polygon shapes ("states")
d = importdata('SJ.txt');                 % country names and category codes
data = d.data;
textdata = d.textdata;
geoname = {states.name};                  % polygon names to match against
n = length(data);
mysymbolspec = cell(1,n);
mycolormap = zeros(n,3);
for i = 1:n
    count = data(i);
    if count == 4
        mycolormap(i,:) = [0.74,0.99,0.79];
    elseif count == 3
        mycolormap(i,:) = [1,0.89,0.52];
    elseif count == 2
        mycolormap(i,:) = [1,0.38,0];
    elseif count == 1
        mycolormap(i,:) = [0.5,0.5,0.5];
    end
    mycountry = textdata{i};
    geoidx = strmatch(mycountry, geoname);   % match data row to map polygon
    if numel(geoidx) > 0
        country_name = geoname(geoidx(1));
        mysymbolspec{i} = {'Name', char(country_name), 'FaceColor', mycolormap(i,:)};
    end
end
figure
ax = worldmap('world');
setm(ax,'grid','off')
setm(ax,'frame','off')
setm(ax,'parallellabel','off')
setm(ax,'meridianlabel','off')
symbols = makesymbolspec('Polygon', {'Default','FaceColor',[0.5 0.16 0.16],...
    'LineStyle','-','LineWidth',0.2,'EdgeColor',[0 0 0]},...
    mysymbolspec{:});
geoshow(states,'SymbolSpec',symbols)
```
Team Control Number: 1916375

Problem Chosen: F
+
+# 2019 ICM Summary Sheet
+
+# the Future is Coming: the Revolution of Currency
+
With the advent of the Information Age, digital technologies such as digital currency have become prevalent and widely used all over the world. Our study constructs a model to represent a global decentralized digital financial system, mainly with the tools of mathematical analysis and economic modeling. Based on our model, we analyze the choices of different countries and the long-term effects. Besides, we put forward mechanisms for oversight of such a global digital currency and test our model's robustness.
+
First, we choose key factors that would limit or facilitate the digital financial system, and we integrate these factors into the return and the cost. Then we build a cost-return analysis model to identify the viability of the global decentralized digital financial market, simplified and rationalized under our assumptions. We quantify the return and the cost by combining the Analytic Hierarchy Process (AHP) with Fuzzy Comprehensive Evaluation (FCE). We assign weights based on the importance of these factors to quantify the total $NI$ (net income), and we assign further weights based on the impact at the individual, national, and global levels to quantify $NI$ at each level.
+
Second, we analyze the different choices of countries according to their willingness and needs. We simplify countries into small-scale and large-scale ones, then make our analysis and draw conclusions with reference to the Mundell-Fleming open-economy model. Moreover, we take into consideration whether a country would abandon its own currency. In this part, we introduce the Impossible Trinity into our model and conclude that, whether or not a country abandons its own currency, a fixed exchange rate regime may be the most effective.
+
+Third, combining the analysis of our model with the reality, we put forward mechanisms for oversight of such a global digital currency system.
+
Fourth, we extend our model to the long term. We use the logistic model to simulate changes in the outlook for the banking industry. In the long run, the banking industry will lose almost all of its on-balance-sheet business, which means it may change into an investment intermediary. We transform our cost-benefit model to analyze the effects of the system at the local, regional, and global levels. Furthermore, we take a view of international relations between countries in the long term.
+
Lastly, we test the stability and the sensitivity of our model, and we conclude with its strengths and weaknesses. In addition, we write a policy recommendation for national leaders based on our work.
+
+# POLICY RECOMMENDATION
+
+# Honored President/ Prime Minister:
+
Thank you for trusting us. Our team has designed a global digital currency system under the auspices of ICM. You may have mixed opinions about the system we built, so we feel obliged to recommend optimal strategies to ensure the successful operation of the Digital Currency System in your country.
+
Based on our analysis of the cost-return model, for both the country and the people, the new currency system carries the largest weight in terms of free capital flows and free access to global financial markets. However, it also brings greater risk cost because of the impaired independence of monetary policy. There is a large amount of cost, and the $NI$ function is affected by the gross national product and the velocity of money circulation; it will increase with the degree of acceptance of the digital currency. Interestingly, the country gains the most marginal benefit in this system as the degree of acceptance increases in the long run, while the marginal benefit of the individual may decline, which reminds us to pay more attention to the maintenance of safety. Here are our suggestions.
+
Have a fixed exchange rate Accepting the new digital currency system will be desirable for both large and small economies in the long run. In our design, it is feasible either to abandon or to maintain a sovereign currency within the digital currency system. If your country abandons its sovereign currency, it gives up sovereign monetary policy, and according to the Impossible Trinity, your country's exchange rate will be fixed; if your country maintains its sovereign currency, a floating exchange rate will bring unsustainably high inflation risk. Therefore, we recommend that your country keep a fixed exchange rate to ensure that inflation risk is manageable.
+
Establish regulatory nodes and improve laws The digital currency financial system connects individuals and institutions around the world into a network with advantages such as tamper-resistance and decentralization, so we recommend working with other countries to establish a global regulatory node to detect crimes and facilitate taxation. Besides, legal support is as important as technical support.
+
Maintain good international relations In our system, exchanges and integration between countries will develop on an unprecedented scale. The zero-sum view held by many in the past will no longer be reasonable, and your country can maximize the benefits of digital currency globalization only if it maintains good international relations with other countries.
+
Robustness of our model Our model is based on assumptions that may differ slightly from the circumstances of your nation. You and your team can formulate more specific operational strategies according to reality.
+
+We hope that our suggestions are useful for you, and the Digital Currency System will be the ideal blueprint for future development in the world.
+
+# Content
+
1 Introduction

1.1 Background
1.2 Restatement of the Problem

2 Assumptions and Variable Descriptions

2.1 Assumptions
2.2 Terms, Definitions and Symbols

3 Basic Model Analysis

3.1 Measure of the Return about Currency Stability $R_{1}$

3.1.1 Model of Currency Value
3.1.2 Measure of the Return about the Currency Stability $R_{1}$

3.2 Measure of the Return about the Output Growth $R_{2}$
3.3 Measure of the Return of Capital Availability $R_{3}$

3.3.1 Short-term Model
3.3.2 Long-term Model

3.4 Total Model of Return
3.5 Total Model of Cost
3.6 Total Evaluation Model

4 Choices of Different Countries

4.1 Different Choices because of size

4.1.1 Large Countries
4.1.2 Small Countries

4.2 Give Up National Currency or Not

4.2.1 Give Up National Currency
4.2.2 Not Give Up National Currency

5 Imagination of the Regulatory Mechanism

5.1 the Global Level
5.2 the National Level
5.3 the Individual Level
5.4 Conclusion

6 Dynamic Analysis

6.1 Long-term Impact on the Banking Industry
6.2 Long-term Impact on Different Regions
6.3 Long-term Impact on International Relationship

7 Model Testing
8 Strengths and Weaknesses
9 Reference
+
+# 1. Introduction
+
+# 1.1 Background
+
+"What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party."
+
- Satoshi Nakamoto, the creator and developer of Bitcoin, quoted from his paper
+
As Nakamoto said, with the development of economic globalization, the existing payment and monetary systems are increasingly unable to meet people's demand for production and trade. People are fed up with currencies controlled by governments and are no longer willing to pay the costs of inflation and exchange-rate fluctuations. Digital currency based on technologies such as blockchain is a good way to solve this problem.
+
+Cryptocurrency is a subset of digital currency with unique features of privacy, decentralization, security and encryption. The cryptocurrency represented by bitcoin is increasingly favored by economists and bankers. The development and application of cryptocurrency has become a frontier topic in financial research.
+
In Venezuela, South America, the government's misguided monetary policy has brought extremely serious hyperinflation to the public, and people have begun to hold Bitcoin to protect their property from the erosion of inflation. Over time, enthusiasm for digital currency has gradually spread from the private sector to officialdom, and the recently popular Central Bank Digital Currency is an excellent proof.
+
+# 1.2 Restatement of the Problem
+
+To help identify the viability and effects of a global decentralized digital financial market, we are required to construct a model that adequately represents this type of financial system. Then we aim to solve these problems as follows:
+
+Task 1. Identifying key factors that would limit or facilitate the growth, access, security, and stability of the global decentralized digital financial market from three levels: the individual, national, and global levels.
+
+Task 2. Considering the different needs of countries and their willingness to work with this new financial marketplace.
+
Task 3. Modifying countries' current banking and monetary models and considering whether they should abandon their own currency or not.
+
+Task 4. Establishing a mechanism for oversight of such a global digital currency.
+
+Task 5. Extending our analysis to consider the long-term effects of such a system on current banking industry; the local, regional, and world economy; and international relations between countries.
+
+
+Figure 1: Overview of Our Work
+
+# 2 Assumptions and Variable Descriptions
+
+# 2.1 Assumptions
+
+We make the following basic assumptions in order to simplify the problem. Each of our assumptions is justified and consistent with the basic fact.
+
+The total number of digital currencies is fixed
+
Since the programming node is determined, the total number of digital currencies is also fixed; that is to say, the supply of digital currency is limited.
+
- Most of the countries use the digital currency as legal tender
- There is an international organization, like a global central bank, to manage the global digital financial system

A public organization is necessary to anchor public confidence. A currency system is first and foremost a social convention, which emerges to build trust among strangers in their economic transactions. We use Game Theory to analyze this process in Figure 2. This trust mechanism can be maintained only if each trader is confident that the symbolic objects used now will be accepted by other traders in the future.

- A public organization is necessary to supervise the global digital financial system
+
+
+Figure 2: Game Theory analysis about the necessity of the Central Bank
+
Only when the existing monetary system is trusted by people can the Nash equilibrium be reached, so the existence of a central bank that anchors public confidence is necessary.
+
- The market environment is fully open and the factors of production are free to flow.
+
+# 2.2 Terms, Definitions and Symbols
+
+| Variable | Meaning |
| V | the transactions velocity of money |
| σ0 | the value of a basic basket of goods |
| σ | the purchasing power of money, the content of a basket of commodities included in a unit of currency |
| S(w,σ) | the return of currency stability |
| w | the degree of the acceptance to the digital currency |
| θ | the usage efficiency of capital under the digital currency system |
| K* | the capital use efficiency increased after the global digital currency used |
| K | the capital use efficiency before the global digital currency used |
| L | labor input |
| RT | the total return of our digital currency system |
| Ri | the return of currency stability/ output growth/ capital availability |
| Cf | The hidden costs of the digital currency system |
| Cr | The cost caused by risk of the system |
| NI | net income of the digital currency system |
| Pr0 | the profit of the banking industry at the initial moment |
+
+# 3 Basic Model Analysis
+
+# 3.1 Measure of the Return about Currency Stability $R_{1}$
+
Our hypothesis assumes that the total number of digital currencies is fixed while the demand for currency keeps increasing, so there must eventually be deflation in the system. We set the value of the currency in terms of the level of productivity to determine the degree of deflation, and we use currency stability to measure the return of the digital currency system.
+
+# 3.1.1 Model of Currency Value
+
+First, we try to use the Equation of Exchange from Monetary Economics to define the value of each digital currency
+
+$$
+M V = P T \tag {1}
+$$
+
+Thus, $PT$ means the level of nominal expenditures and $M$ is considered a fixed parameter.
+
+The price index and the currency value are reciprocal, and we define a new parameter $\sigma_0$
+
+$$
+\sigma_ {0} = P \times \sigma \tag {2}
+$$
+
+- $\sigma_0$ -the value of a basic basket of goods
+- $\sigma$ -currency (the purchasing power of money, the content of a basket of commodities included in a unit of currency)
+
+Combine equation (1) and equation (2), we can have equation (3) by a simple calculation.
+
+$$
+\sigma = \frac {\sigma_ {0}}{M} \times \frac {T}{V} \tag {3}
+$$
+
GDP (the goods and services an economy produces in a year) has three uses. If $Y$ is national income (GDP), then the three uses, consumption, investment, and government purchases, can be expressed as
+
+$$
Y = C (Y - \bar {T}) + I (r) + \bar {G} \tag {4}
+$$
+
- $Y$ -the total national income (GDP), $C(Y - \overline{T})$ -consumption as a function of disposable income, where $\overline{T}$ is the fixed tax, $I(r)$ -investment, $\overline{G}$ -the fixed government purchases
+
+$$
\left\{ \begin{array}{l} \sigma = \frac {\sigma_ {0} / M}{V (t)} \times Y \\ Y = C (Y - \bar {T}) + I (r) + \bar {G} \end{array} \right. \quad \frac {d I}{d r} < 0 \tag {5}
+$$
+
It can be seen that the value of the currency $\sigma$ is related only to $r$, $Y$ and $V(t)$: it increases with the growth of output and decreases as the velocity of circulation accelerates, which is consistent with the actual financial operating environment. So maintaining the stability of the currency's value is conducive to maintaining the stability of the global monetary system.
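As a quick numerical check, the value relation in equation (5) can be sketched in a few lines of Python; the parameter values below are illustrative choices of our own, not taken from the paper.

```python
def currency_value(sigma0, M, Y, V):
    """Purchasing power of one unit of digital currency, per equation (5):
    sigma = (sigma0 / M) * Y / V."""
    return (sigma0 / M) * Y / V

# Illustrative (made-up) values: fixed supply M, basket value sigma0.
base = currency_value(sigma0=1.0, M=2.1e7, Y=8.0e13, V=12.0)

# sigma rises when output Y grows ...
assert currency_value(1.0, 2.1e7, 9.0e13, 12.0) > base
# ... and falls when money circulates faster (V rises).
assert currency_value(1.0, 2.1e7, 8.0e13, 15.0) < base
```

The assertions mirror the monotonicity claims in the text: $\sigma$ grows with $Y$ and shrinks with $V(t)$.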
+
+# 3.1.2 Measure of the Return about the Currency Stability $R_{1}$
+
+We define a new function (6) to measure the return about the currency stability $R_{1}$ as follow:
+
+$$
+S (w, \sigma) = (1 - \left| \frac {\sigma_ {t} - \sigma_ {t - 1}}{\sigma_ {t - 1}} \right|) \bullet w \tag {6}
+$$
+
- $S(w,\sigma)$ -the return of currency stability
+- $w$ -the degree of the acceptance to the digital currency
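A minimal sketch of equation (6), using an illustrative acceptance level and a hypothetical 5% change in currency value (both numbers are our own, for demonstration only):

```python
def stability_return(sigma_t, sigma_prev, w):
    """Return of currency stability, equation (6):
    S(w, sigma) = (1 - |sigma_t - sigma_prev| / sigma_prev) * w."""
    return (1 - abs(sigma_t - sigma_prev) / sigma_prev) * w

# A hypothetical 5% drop in currency value with acceptance w = 0.8:
r1 = stability_return(0.95, 1.00, 0.8)   # (1 - 0.05) * 0.8 = 0.76
```

A perfectly stable currency ($\sigma_t = \sigma_{t-1}$) yields the full acceptance level $w$ as return.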
+
+# 3.2 Measure of the Return about the Output Growth $R_{2}$
+
We believe that with the use of digital currency, the convenience of international settlement will speed up international capital flows. We introduce a new function $Y(V_{t},w)$ to measure the return about the output growth $R_{2}$.

Now we define a function to describe the free circulation rate of capital under the new system:
+
+$$
+\theta = f \left(V _ {t}\right) \quad \frac {d \theta}{d V _ {t}} > 0, \theta > 1 \tag {7}
+$$
+
+- $\theta$ - the usage efficiency of capital under the digital currency system
+
+Obviously, we can use $\theta K$ to represent the actual capital value $K^{*}$
+
+$$
+K ^ {*} = \theta K \tag {8}
+$$
+
+- $K^{*}$ - the capital use efficiency increased after the global digital currency used
+- $K$ -the capital use efficiency before the global digital currency used
+
We introduce the Cobb-Douglas production function to describe the impact of capital flow efficiency on output:
+
+$$
+Y = F (K, L) = A K ^ {\alpha} L ^ {\beta} \quad (\alpha + \beta = 1, \alpha > 0, 1 > \beta > 0) \tag {9}
+$$
+
- $Y$ -total production (the real value of all goods and services produced in a year)
+- $K$ -capital input
+- $L$ -labor input
+- $A$ -total factor productivity
+
+It is easy to get the formulation (10) as follow by combining the formulation (8) and the formulation (9)
+
+$$
+Y ^ {*} = F (\theta K, L) = A (\theta K) ^ {\alpha} L ^ {\beta} \tag {10}
+$$
+
+$$
+Y ^ {*} = \theta^ {\alpha} Y \quad \theta > 1, \alpha > 0 \tag {11}
+$$
+
Since $\theta > 1$ and $\alpha > 0$, equation (11) gives $Y^{*} > Y$: total output increases.
+
+- $\alpha$ -Capital's share of output value
+
+- $\beta$ -Labor's share of output value
+
+The formulation (11) means that the free flow of capital on a global scale has led to an increase in total output
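Equations (9)-(11) can be sketched as follows; the values of $A$, $K$, $L$, $\alpha$ and $\theta$ are illustrative choices of our own:

```python
def output_with_digital_currency(A, K, L, alpha, theta):
    """Equations (9)-(11): Y* = A (theta K)^alpha L^(1-alpha) = theta^alpha * Y."""
    beta = 1 - alpha                            # constant returns: alpha + beta = 1
    Y = A * K**alpha * L**beta                  # output before the new system
    Ystar = A * (theta * K)**alpha * L**beta    # output with capital efficiency theta > 1
    return Y, Ystar

# Illustrative values: theta = 1.2 means capital circulates 20% more efficiently.
Y, Ystar = output_with_digital_currency(A=1.0, K=100.0, L=400.0, alpha=0.3, theta=1.2)
assert Ystar > Y   # equation (11): theta > 1 implies Y* > Y
```

The ratio $Y^{*}/Y = \theta^{\alpha}$ depends only on the efficiency gain and capital's share, as in equation (11).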
+
+# 3.3 Measure of the Return of Capital Availability $R_{3}$
+
+A major change must take place in payment method under the global digital monetary system. We consider its beneficial effects as the availability of capital. In the global digital currency system, the cost of capital will fluctuate within a reasonable range, for which we discuss both long-term and short-term situations. We want to use Economic Supply and Demand Model to display our analysis.
+
+# 3.3.1 Short-term Model
+
+In any regional market, if $r_0$ is greater than $r^*$ , this region will attract capital inflows from other regions, then the region's capital supply will exceed capital demand, hence the interest rates of local region will fall.
+
+
Figure 3: interest rate analysis in the short term
+
+# 3.3.2 Long-term Model
+
Because the total number of digital currencies is fixed, interest rates may rise as demand for capital continues to rise. At this point, the central bank can change the speed of money circulation through policy regulation.
+
+
Figure 4: interest rate analysis in the long term
+
+Now we renew a part of the formulation (5) as formulation (12)
+
+$$
+\sigma_ {i} = \frac {\sigma_ {0} / M}{V (t)} \times Y \tag {12}
+$$
+
When $V(t)$ falls, $\sigma_{i}$ increases.

Formulation (5) can also be written as follows:
+
+$$
+P = \sigma_ {0} / \sigma_ {i} \tag {13}
+$$
+
+$$
+\frac {M}{P} = L (r, Y) \tag {14}
+$$
+
$\frac{M}{P}$ is inversely proportional to $r$ and directly proportional to $Y$.

The central bank can adjust the nominal money supply from $M_{S}$ to $M^{*}$ through monetary policy, letting $r_{0}$ fall to the equilibrium level $r^{*}$.

In summary, under the global digital currency system, the price of capital is influenced by a relatively stable $r^*$, and $r^*$ fluctuates around the benchmark interest rate.
+
+# 3.4 Total Model of Return
+
Based on the analysis in Sections 3.1 to 3.3, the total return of our digital currency system can be expressed as equation (15):
+
+$$
+R _ {T} = R _ {1} + R _ {2} + R _ {3} \tag {15}
+$$
+
+- $R_{T}$ -the total return of our digital currency system
+- $R_{1}$ - the return of currency stability
+- $R_{2}$ - the return of output growth
+- $R_{3}$ - the return of capital availability
+
The total return of our digital currency system can also be divided into three parts: the return of the individual, the return of the nation, and the return of the globe, as in equation (16):
+
+$$
R _ {T} = \varphi_ {1} R _ {\text {individual}} + \varphi_ {2} R _ {\text {nation}} + \varphi_ {3} R _ {\text {global}} \tag {16}
+$$
+
+- $R_{\text{individual}}$ - the return of the individual
+- $R_{\text{nation}}$ - the return of nation
+- $R_{\text{global}}$ - the return of the global
+
+Table 1: the membership of factors ${R}_{\text{individual }},{R}_{\text{nation }},{R}_{\text{global }}$
+
+| FactorsMembership | Rindividual | Rnation | Rglobal |
| Great(v1) | Small(v2) | Great(v1) | Small(v2) | Great(v1) | Small(v2) |
| R1 | 0.2 | 0.8 | 0.6 | 0.4 | 0.1 | 0 |
| R2 | 0.3 | 0.7 | 0.7 | 0.3 | 0.9 | 0.1 |
| R3 | 1 | 0 | 0.4 | 0.6 | 0.1 | 0.9 |
+
$$
R _ {\text {individual}} = \left( \begin{array}{l l} 0.2 & 0.8 \\ 0.3 & 0.7 \\ 1 & 0 \end{array} \right), \quad R _ {\text {nation}} = \left( \begin{array}{l l} 0.6 & 0.4 \\ 0.7 & 0.3 \\ 0.4 & 0.6 \end{array} \right), \quad R _ {\text {global}} = \left( \begin{array}{l l} 1 & 0 \\ 0.9 & 0.1 \\ 0.1 & 0.9 \end{array} \right) \tag {17}
$$

$$
A = \left(R _ {1}, R _ {2}, R _ {3}\right) = \left( \begin{array}{l l l} 0.45 & 0.32 & 0.2 \end{array} \right) \tag {18}
$$

$$
A \cdot R _ {\text {individual}} = (0.395 \quad 0.605) \tag {19}
$$

$$
A \cdot R _ {\text {nation}} = (0.595 \quad 0.405) \tag {20}
$$

$$
A \cdot R _ {\text {global}} = (0.785 \quad 0.215) \tag {21}
$$

According to the principle of maximum membership degree, we can get the following results.

$$
R _ {T} = 0.395 R _ {\text {individual}} + 0.595 R _ {\text {nation}} + 0.785 R _ {\text {global}} \tag {22}
$$
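The compositions behind (19)-(21) can be sketched with a plain weighted-average fuzzy operator. Since the weights in (18) sum to 0.97 rather than 1, the exact published vectors may include an extra normalization step; the sketch below only illustrates the mechanics of composing $A$ with each membership matrix:

```python
# Fuzzy comprehensive evaluation: weight vector A composed with each
# membership matrix R gives a (Great, Small) evaluation vector.
A = [0.45, 0.32, 0.20]   # weights of R1, R2, R3 from equation (18)

R_individual = [[0.2, 0.8], [0.3, 0.7], [1.0, 0.0]]
R_nation     = [[0.6, 0.4], [0.7, 0.3], [0.4, 0.6]]
R_global     = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9]]

def compose(A, R):
    """Weighted-average fuzzy operator: B_j = sum_i A_i * R_ij."""
    return [sum(a * row[j] for a, row in zip(A, R)) for j in range(len(R[0]))]

B_individual = compose(A, R_individual)
B_nation     = compose(A, R_nation)
B_global     = compose(A, R_global)
# The "Great" component of each vector is kept as that level's weight
# in equation (22), per the maximum-membership principle.
```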
+
+# 3.5 Total Model of Cost
+
Considering the possible cost of the global digital currency system, we break the whole cost into two parts: the fundamental cost $C_{\mathrm{f}}$ and the risk cost $C_{r}$. In this model, we again use the Fuzzy Comprehensive Evaluation (FCE) method for quantitative analysis, as in the total model of return.
+
+- $C_{\mathrm{f}}$ - The hidden costs of the digital currency system, including digital currency acquisition (mining) cost, the invisible external cost of maintaining the operation of the system, etc.
- $C_r$ -the cost caused by risk, including the uncontrollable nature of Internet technology, impaired policy independence, and malicious manipulation
+
$a_{1}$ -the uncontrollable Internet: personal accounts may be vulnerable to theft, data leakage, and system failures caused by technical faults in government or international organization systems
$a_{2}$ -impaired independence: the state loses the independence of its monetary policy to some extent. In the face of asymmetric shocks, individual countries can no longer implement independent and effective monetary policies for macroeconomic regulation and control as before. Digital currency integration will also accelerate the transmission of global financial risks
+
+$a_{3}$ -Malicious manipulation.
+
Based on the above considerations, we consider the risks at the individual, national, and global levels respectively: $C_{r_1}$, $C_{r_2}$ and $C_{r_3}$. Through simple analysis, we can weight these indicators as follows:
+
Table 2: the membership of factors ${C}_{{r}_{1}},{C}_{{r}_{2}},{C}_{{r}_{3}}$
+
+| FactorsMembership | Cr1 | Cr2 | Cr3 |
| Great(v1) | Small(v2) | Great(v1) | Small(v2) | Great(v1) | Small(v2) |
| a1 | 0.9 | 0.1 | 0.7 | 0.3 | 1 | 0 |
| a2 | 0 | 1 | 1 | 0 | 0.6 | 0.4 |
| a3 | 0.7 | 0.3 | 0.8 | 0.2 | 1 | 0 |
+
$$
C _ {r _ {1}} = \left( \begin{array}{l l} 0.9 & 0.1 \\ 0 & 1 \\ 0.7 & 0.3 \end{array} \right), C _ {r _ {2}} = \left( \begin{array}{l l} 0.7 & 0.3 \\ 1 & 0 \\ 0.8 & 0.2 \end{array} \right), C _ {r _ {3}} = \left( \begin{array}{l l} 1 & 0 \\ 0.6 & 0.4 \\ 1 & 0 \end{array} \right), A = \left( \begin{array}{l l l} a _ {1} & a _ {2} & a _ {3} \end{array} \right)
$$

$$
S _ {i} = A \cdot C _ {r _ {i}} \tag {23}
$$

$$
S _ {1} = A \cdot C _ {r _ {1}} = \left( \begin{array}{l l} 0.59 & 0.41 \end{array} \right)
$$

$$
S _ {2} = A \cdot C _ {r _ {2}} = \left( \begin{array}{l l} 0.81 & 0.19 \end{array} \right)
$$

$$
S _ {3} = A \cdot C _ {r _ {3}} = \left( \begin{array}{l l} 0.88 & 0.12 \end{array} \right)
$$

According to the principle of maximum membership degree, we can get the following results.

$$
C = C _ {f} + 0.59 C _ {r _ {1}} + 0.81 C _ {r _ {2}} + 0.88 C _ {r _ {3}} \tag {24}
$$
+
+# 3.6 Total Evaluation Model
+
Based on the total model of return and the total model of cost above, and in order to identify the viability of the global decentralized digital financial market, we define new weights according to the importance at the different levels (the individual, the national, and the global) to evaluate the net income and determine the feasibility of our digital currency system.
+
+$$
+N I = \sum_ {k = 1} ^ {n} R _ {k} - \left(C _ {f} + \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} \lambda_ {i} C _ {r _ {j}}\right) \tag {25}
+$$
+
We use the AHP method to get the weights of $R_{1}, R_{2}, R_{3}, a_{1}, a_{2}, a_{3}$ in $NI$ as follows:
+
+Table 3: AHP Weight and Impact
+
+ | R1 | R2 | R3 | a1 | a2 | a3 | Impact |
| R1 | 1 | 1/3 | 4 | 1/4 | 3 | 1/2 | 0.1238 |
| R2 | 3 | 1 | 3 | 1/2 | 5 | 2 | 0.2484 |
| R3 | 1/4 | 1/3 | 1 | 1/5 | 2 | 1/3 | 0.0638 |
| a1 | 4 | 2 | 5 | 1 | 4 | 2 | 0.3472 |
| a2 | 1/3 | 1/5 | 1/2 | 1/4 | 1 | 1/3 | 0.0499 |
| a3 | 2 | 1/2 | 3 | 1/2 | 3 | 1 | 0.1669 |
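The Impact column of Table 3 can be approximately reproduced from the pairwise comparison matrix. The sketch below uses the geometric-mean row method, a standard approximation of the AHP principal-eigenvector weights (the paper does not state which method it used, so small differences are expected):

```python
import math

# Pairwise comparison matrix from Table 3, rows/cols: R1, R2, R3, a1, a2, a3
M = [
    [1,   1/3, 4,   1/4, 3, 1/2],
    [3,   1,   3,   1/2, 5, 2  ],
    [1/4, 1/3, 1,   1/5, 2, 1/3],
    [4,   2,   5,   1,   4, 2  ],
    [1/3, 1/5, 1/2, 1/4, 1, 1/3],
    [2,   1/2, 3,   1/2, 3, 1  ],
]

def ahp_weights(M):
    """Geometric-mean approximation of the AHP priority vector."""
    gm = [math.prod(row) ** (1 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

w = ahp_weights(M)   # close to Table 3: (0.1238, 0.2484, 0.0638, 0.3472, 0.0499, 0.1669)
```

The resulting ranking ($a_1 > R_2 > a_3 > R_1 > R_3 > a_2$) matches the Impact column.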
+
$$
N I _ {\text {individual}} = 0.395 [0.1238 S (Y, w) + 0.2484 Y (V _ {t}, w) + 0.0638 M (r, w)] - C _ {f} - 0.59 \times 0.3472 C _ {r _ {1}} (w)
$$

$$
N I _ {\text {nation}} = 0.595 [0.1238 S (Y, w) + 0.2484 Y (V _ {t}, w) + 0.0638 M (r, w)] - C _ {f} - 0.81 \times 0.0499 C _ {r _ {2}} (w)
$$

$$
N I _ {\text {global}} = 0.785 [0.1238 S (Y, w) + 0.2484 Y (V _ {t}, w) + 0.0638 M (r, w)] - C _ {f} - 0.88 \times 0.1669 C _ {r _ {3}} (w)
$$
+
+$NI$ -net income of the digital currency system
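Putting the level weights, the AHP weights from Table 3, and the maximum-membership risk weights together, the three $NI$ expressions can be evaluated as follows. The unit inputs ($S = Y = M = 1$, $C_f = 0.1$, $C_{r_i} = 1$) are purely illustrative, chosen by us to show the mechanics:

```python
def net_income(level_weight, risk_weight, S, Y, M, Cf, Cr):
    """Net income at one level, combining the AHP return weights from
    Table 3 with a level weight and a composite risk weight.
    All inputs are normalized index values (hypothetical)."""
    returns = 0.1238 * S + 0.2484 * Y + 0.0638 * M
    return level_weight * returns - Cf - risk_weight * Cr

# Hypothetical normalized inputs: S = Y = M = 1, Cf = 0.1, Cr = 1
ni_individual = net_income(0.395, 0.59 * 0.3472, 1, 1, 1, 0.1, 1)
ni_nation     = net_income(0.595, 0.81 * 0.0499, 1, 1, 1, 0.1, 1)
ni_global     = net_income(0.785, 0.88 * 0.1669, 1, 1, 1, 0.1, 1)
```

With these toy inputs the global level shows the highest weighted return, consistent with the weight 0.785 it receives in the return model.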
+
+# 4 Choices of Different Countries
+
+# 4.1 Different Choices because of size
+
+# Assumptions:
+
We divide countries into large and small ones according to the size of their economies.

> large countries: economies that account for a considerable proportion of the world economy and can influence the world market, leading the formation of the world market interest rate $r^*$
> small countries: economies that form only a small part of the open world market, so their impact on world interest rates is negligible; small countries can only be takers of the world interest rate $r^*$

Different countries have different monetary transmission mechanisms, but they all have the same demands of the new digital monetary financial system, such as stability and economic growth. Recalling that:
+
+$$
+Y = C (Y - T) + I (r) + G + N X \tag {26}
+$$
+
+$$
+N X = Y - C \left(Y - \bar {T}\right) - \bar {G} - I (r) \tag {27}
+$$
+
+$$
+Y - C \left(Y - \bar {T}\right) - \bar {G} = S \tag {28}
+$$
+
+# 4.1.1 Large Countries
+
+$$
+N X = S - I (r) \tag {29}
+$$
+
+$$
N X = C F (r) \tag {30}
+$$
+
+$$
S = I (r) + C F (r) \tag {31}
+$$
+
In large countries, the interest rate is determined internally, and the supply of loanable capital is balanced against domestic investment and net capital outflow as functions of the interest rate:

$$
Y = C (Y - \bar {T}) + I (r) + \bar {G} + C F (r) \tag {32}
$$
+
- $S$ -loanable funds
- $CF(r)$ -net capital outflow
+
+
+Figure 6: the analysis for large countries about their choices
+
Both $I$ and $CF$ decrease as $r$ rises. When $r$ rises, $Y$ falls faster; this greater elasticity prevents $r$ from rising excessively, so $Y$ will not fall too much, and $r$ eventually reaches a desirable balanced level.
+
For large countries, under the digital monetary financial system, interest rate stability and economic growth will be guaranteed, and their demand for capital will be met.
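The equilibrium of Figure 6 can be sketched numerically. With hypothetical linear schedules $I(r)$ and $CF(r)$ (the parameters are our own illustrative choices, not from the paper), the domestic interest rate solves $S = I(r) + CF(r)$ as in equation (31), and net exports then equal $CF(r^*)$:

```python
# Hypothetical linear investment and net-capital-outflow schedules.
S  = 30.0                          # loanable funds (fixed saving)
I  = lambda r: 20.0 - 100.0 * r    # domestic investment, dI/dr < 0
CF = lambda r: 15.0 - 150.0 * r    # net capital outflow, dCF/dr < 0

# S = I(r) + CF(r)  =>  30 = 35 - 250 r  =>  r* = 0.02
r_star = (20.0 + 15.0 - S) / (100.0 + 150.0)

nx = CF(r_star)                            # net exports, equation (30): NX = CF(r)
assert abs(nx - (S - I(r_star))) < 1e-9    # consistent with (29): NX = S - I(r)
```

The final assertion checks that the two expressions for net exports, (29) and (30), agree at the internally determined rate $r^*$.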
+
+# 4.1.2 Small Countries
+
+$$
+N X = S - I (r) \tag {33}
+$$
+
+
+Figure 7: the analysis for small countries about their choices
+
When $S > I(r)$, then $S - I(r) = NX > 0$: excess loanable capital will flow abroad, which is a desirable outcome for the people of the country.

When $S \leq I(r)$, the country's loanable funds will all be consumed internally, which is also desirable for the public.
+
+In general, in small countries, the digital currency financial system makes interest rates relatively stable, the economy grows, and demand will be met.
+
+# 4.2 Give Up National Currency or Not
+
+# Assumptions:
+
- Only a small number of countries do not give up their original national currency; country A is one of them. We assume the exchange rate between this country's currency and the global currency is $\varepsilon_{i}$
- In the long term, there is full employment and the amount of capital is fixed
+
We introduce the Impossible Trinity to analyze the choices of different countries under the digital currency system. As is well known, it is impossible for a country to achieve the following three aims at the same time: capital mobility, a fixed exchange rate, and an independent monetary policy. In our digital currency system, the flow of capital is free and monetary policy is not independent.
+
+
+Figure 5: Impossible Trinity Theory
+
+# 4.2.1 Give Up National Currency
+
For countries that choose to abandon their national currency to become part of the global digital currency system, monetary policy is not independent and they keep a fixed exchange rate with the digital currency. When trading with countries that keep a national currency, they use the value of a basket of commodities as the exchange rate.
+
+# 4.2.2 Not Give Up National Currency
+
# Floating Exchange Rate
+
+Figure 8: model of floating exchange rate
+
+- $NX$ -Net exports
+- $CF$ -Net capital outflow
+- $\varepsilon$ -Exchange rate
+
In the short term, $NX$ will increase and $Y$ will also increase; then $\varepsilon$ keeps falling, and finally the price of goods will rise.
+
+# Fixed Exchange Rate
+
In this situation, monetary policy, trade, and global central bank regulation have only limited impact, close to no effect at all.
+
+
+
+
+Figure 9: model of fixed rate
+
Since the central bank of country $A$ uses regulation to achieve the fixed exchange rate, it will absorb the impact of imports and exports, and the $Y$ level will not change. However, there are only two general ways for the central bank to regulate the exchange rate: foreign exchange reserves and a pegged exchange rate. Because foreign exchange reserves are limited under the digital currency system, regulation through reserves will be nearly ineffective. Therefore, a nation like country $A$ can only choose to peg the exchange rate as its sole method of fixing it, so that it can stabilize the price level at a lower cost and avoid the problems caused by currency instability.
+
+# 5 Imagination of the Regulatory Mechanism
+
+The global digital and monetary financial system we design is generalized, decentralized and electronic; these features are its biggest difference from the current fiat currency system. This fundamental difference has also prompted us to rethink regulation.
+
+The current sovereign monetary system is endorsed by national credit, and currency is issued and regulated by central banks. All countries belonging to that system naturally have a variety of regulatory measures to detect and observe the circulation and growth of money. In the global digital financial system, central banks are no longer the main issuers of electronic money: the basic technical characteristics of electronic money give people natural trust, and the country's credit endorsement is no longer necessary.
+
+At this point, the current regulatory system will no longer apply. We have designed a digital currency regulatory framework to ensure that this technological innovation does not become a cradle of crime. The framework is developed at the global, national and individual levels.
+
+# 5.1 The Global Level
+
+The first is the global level. In the future global digital and monetary financial system, the account system under global central bank management makes everyone a node in the digital currency financial system, similar in principle to the node concept in blockchain technology. Unlike existing blockchain applications (such as Bitcoin), our system has a God node, namely the global central bank. The God node has much higher authority than other individual or institutional nodes. It does not belong to any country; it is shared by the world's sovereign states but operated and managed by an international organization similar to the United Nations. Its function is mainly to coordinate regulatory investigations and data analysis of transnational crimes. To ensure this super node is not abused, a public international law must comprehensively govern its authority and operation.
+
+# 5.2 The National Level
+
+The second is the national level. In the global digital and monetary financial system we design, national regulatory authorities also have their own secondary super nodes, whose rights and functions are similar to those of the global super node, but whose scope of authority is limited to domestic market participants and their citizens. To ensure the effectiveness of national governance and the timeliness of crime prevention, the sub-super nodes of the national regulatory authorities have the authority to penetrate every node in the national digital currency network, which lets them effectively track and interfere with illegal trading behaviors, such as tracing funds, countering money-laundering activities and freezing illegal digital currency assets. At this level, the privileged nodes still require legal safeguards to maintain the rational operation of the digital monetary and financial system at the national level.
+
+# 5.3 The Individual Level
+
+Finally, there is the personal level. In the system we design, taxes and other charges related to the efficiency of the system become more affordable and effective, because global and institutional accounts are connected in one network and tax evasion is easier for regulatory authorities to detect. In addition, the bookkeeping characteristics of the digital currency enable individuals to read historical transaction records within their scope of authority, which improves tracking, protects personal property, and lets citizens supervise the capital flows of relevant government departments.
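The three-level design above can be illustrated with a minimal permission model. This is only a sketch of the idea; the class names, permission levels, and audit rule are our own assumptions, not part of the system specification:

```python
# Illustrative sketch of the three-level regulatory node hierarchy
# (global super node > national sub-super node > individual node).
# All names and numeric permission levels are hypothetical.

PERMISSIONS = {"global": 3, "national": 2, "individual": 1}

class Node:
    def __init__(self, node_id, level, country=None):
        self.node_id = node_id
        self.level = level        # "global", "national", or "individual"
        self.country = country    # None for the global node

    def can_audit(self, other):
        """A node may audit another only with strictly higher authority,
        and a national node only within its own country."""
        if PERMISSIONS[self.level] <= PERMISSIONS[other.level]:
            return False
        if self.level == "national":
            return self.country == other.country
        return True  # the global node may audit any lower-level node

god = Node("global-central-bank", "global")
regulator_a = Node("regulator-A", "national", country="A")
citizen_a = Node("citizen-1", "individual", country="A")
citizen_b = Node("citizen-2", "individual", country="B")
```

Under this sketch, the global node can audit anyone, while a national node cannot reach across borders, mirroring the scopes of authority described above.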
+
+# 5.4 Conclusion
+
+In general, in our global digital financial system, as long as the relevant digital currency regulatory framework is implemented, there will be no shortage of supervision at the individual, national and global levels. The technical framework and related laws we design will shape a transparent and efficient world. Flows of assets and data will find it difficult to escape supervision, and the efficiency of the financial system will be even better.
+
+
+Figure 10: imagination of regulatory system
+
+# 6 Dynamic Analysis
+
+# 6.1 Long-term Impact on the Banking Industry
+
+In this section, we consider the long-term impact of such a system on the current banking industry; on local, regional and world economies; and on relations within the international community.
+
+# Assumptions:
+
+- before the promotion of the digital currency market, the profit of the on-balance-sheet ("in-table") business of the banking industry grows at a fixed rate $g_{0}$
+
+$$
+\frac {d \Pr}{d t} = g _ {0} \Pr , \quad \Pr (0) = \Pr _ {0}, \quad \Pr (t) = \Pr _ {0} e ^ {g _ {0} t} \tag {34}
+$$
+
+- $\Pr_{0}$ - the profit of the banking industry at the initial moment
+- after the promotion of the digital currency market, the profit of the on-balance-sheet business of the banking industry grows at a changing rate $g(w)$
+
+With the promotion of the digital currency market, the importance of the bank's on-balance-sheet business becomes smaller and smaller until the bank is transformed into an investment intermediary, so $g(w)$ decreases as $w$ increases:
+
+$$
+\frac {d \Pr}{d t} = g (w) \cdot \Pr , \quad \Pr (0) = \Pr _ {0} \tag {35}
+$$
+
+- in the long term, when $w = w_{m}$, the bank's on-balance-sheet business income will no longer expand
+
+# Analysis
+
+Based on the above assumptions, we define $g(w) = g - mw$ $(g > 0, m > 0)$, where $m$ is a fixed constant. Requiring $g(w_m) = 0$ gives $m = \frac{g}{w_m}$, so that
+
+$$
+g (w) = g - \frac {g}{w _ {m}} \cdot w \tag {36}
+$$
+
+$$
+\frac {d \Pr}{d t} = \Pr \cdot g - \frac {g}{w _ {m}} \Pr \cdot w \tag {37}
+$$
+
+$$
+\frac {d \Pr}{d t} = \Pr \cdot g \left(1 - \frac {w}{w _ {m}}\right) \tag {38}
+$$
+
+Then, we simulate this formulation numerically.
+
+
+Figure 11: long-term impact on banking system
+
+
+
+From the figure and the formulation, in the long run the growth rate of the bank's on-balance-sheet business drops to zero, and the bank transforms itself into an investment intermediary.
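The simulation behind Figure 11 can be sketched with a simple Euler integration of equation (38). The adoption path $w(t) = w_m(1 - e^{-\lambda t})$ and all parameter values below are illustrative assumptions, not fitted values:

```python
import math

def simulate_profit(pr0=1.0, g=0.05, w_m=1.0, lam=0.5, T=100.0, dt=0.01):
    """Euler integration of d(Pr)/dt = Pr * g * (1 - w/w_m)  (eq. 38),
    with an assumed adoption path w(t) = w_m * (1 - exp(-lam * t))."""
    pr, t, path = pr0, 0.0, []
    while t < T:
        w = w_m * (1.0 - math.exp(-lam * t))
        growth = g * (1.0 - w / w_m)   # g(w) from equation (36)
        pr += pr * growth * dt
        t += dt
        path.append((t, pr, growth))
    return path

path = simulate_profit()
final_growth = path[-1][2]   # tends to zero as w approaches w_m
```

As in the figure, the growth rate decays toward zero and profit levels off at a finite value, consistent with the bank's transformation into an investment intermediary.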
+
+# 6.2 Long-term Impact on Different Regions
+
+To simplify the analysis, we take the long-term impact on a locality, $E_{local}$, as the average of the effects on all individuals in the locality. By the same token, we take the long-term impact on a region, $E_{region}$, as the average of the effects on all countries within it.
+
+$$
+E _ {\text {local}} = \frac {\sum_ {i = 1} ^ {n} N I _ {\text {individual}}}{n}, \quad E _ {\text {region}} = \frac {\sum_ {i = 1} ^ {n} N I _ {\text {nation}}}{n} \tag {39}
+$$
+
+In the digital currency system, we can conclude from the previous analysis that $r = r^{*}$, and both are close to the benchmark interest rate level $r_0$. According to the production function:
+
+$$
+Y = F (L, K)
+$$
+
+In the long term, we can conclude that
+
+$$
+L = \bar {L}, K = \bar {K}, Y = \bar {Y}
+$$
+
+It is easy to see that $\nu_{t}$ and $w$ are positively correlated. So when we consider the range of $w$, we can assume $\lim_{t\to \infty}w = w_m$ in the long run.
+
+Taking partial derivatives with respect to $w$ of $NI_{\text{individual}}$, $NI_{\text{nation}}$ and $NI_{\text{global}}$, we get the following coefficient matrix:
+
+$$
+\frac {\partial N I _ {\text {individual}}}{\partial w} = 0.048901 \frac {\partial S}{\partial w} + 0.073661 \frac {\partial Y}{\partial w} + 0.97183 \frac {\partial M}{\partial w} - 0.20484 \frac {\partial C _ {r _ {1}}}{\partial w} \tag {40}
+$$
+
+$$
+\frac {\partial N I _ {\text {nation}}}{\partial w} = 0.073661 \frac {\partial S}{\partial w} + 0.147798 \frac {\partial Y}{\partial w} + 0.037961 \frac {\partial M}{\partial w} - 0.040419 \frac {\partial C _ {r _ {2}}}{\partial w} \tag {41}
+$$
+
+$$
+\frac {\partial N I _ {\text {global}}}{\partial w} = 0.097183 \frac {\partial S}{\partial w} + 0.194994 \frac {\partial Y}{\partial w} + 0.050083 \frac {\partial M}{\partial w} - 0.149512 \frac {\partial C _ {r _ {3}}}{\partial w} \tag {42}
+$$
+
+The coefficient matrix is as follows:
+
+$$
+\left[ \begin{array}{c} - 0.03263 \\ 0.219001 \\ 0.192748 \end{array} \right] \Leftarrow \left[ \begin{array}{cccc} 0.048901 & 0.098118 & 0.25201 & 0.204848 \\ 0.073661 & 0.147798 & 0.037961 & 0.040419 \\ 0.097183 & 0.194994 & 0.050083 & 0.149512 \end{array} \right] \tag {43}
+$$
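Under the assumption of unit gradients ($\partial S/\partial w = \partial Y/\partial w = \partial M/\partial w = \partial C/\partial w = 1$, our own illustrative choice), each row of (43) combines the benefit terms positively and the cost term negatively. A minimal check:

```python
# Marginal effect of w on NI at each level: row . (dS, dY, dM, -dC).
# The unit gradients below are an illustrative assumption, not estimates.
rows = [
    [0.048901, 0.098118, 0.25201, 0.204848],   # individual level
    [0.073661, 0.147798, 0.037961, 0.040419],  # national level
    [0.097183, 0.194994, 0.050083, 0.149512],  # global level
]
grad = [1.0, 1.0, 1.0, -1.0]   # cost term enters negatively

marginals = [sum(c * g for c, g in zip(row, grad)) for row in rows]
```

This reproduces the second and third entries of the left-hand vector in (43) exactly; the first entry does not reproduce under unit gradients, which suggests different gradient values apply at the individual level.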
+
+Our explanations are as follow:
+
+- In the long run, as $w$ increases, the marginal benefit obtained at the national level in the global digital monetary system is the largest, and from the global and national perspectives the profitability brought by the international monetary system accounts for the largest share of marginal revenue. Our model is in line with the actual situation.
+- From the perspective of risk cost, the national level bears less risk than the global and individual levels, because the country's main risk cost is the potential impairment of monetary policy independence. In the long run, countries can reduce this effect through regional cooperation.
+- It is worth noting that the individual level has the smallest marginal benefit in the whole system and shows a slight diminishing effect. Combined with analysis of reality, it is not difficult to find that the main utility of digital money to individuals, such as the availability of funds and the convenience of more flexible payment methods, tends to be short-term, while the risk that the digital currency system brings to individuals is long-term.
+
+Under our hypothesis, connecting this with the impact on localities, we can conclude that the marginal benefits brought by digital currency to a locality in the long run are negligible, and we should focus more on improving security to reduce risk.
+
+# 6.3 Long-term Impact on International Relationship
+
+With the continuous development of the global digital currency system, political relations will also change, especially international relations between countries. Based on existing geopolitics, there are roughly three schools of thought on relations between countries: realism, liberalism and constructivism. Realism views the problem from the perspective of a Hobbesian zero-sum game and is valued by many countries. Its basic view is that the world is always in a state of conflict between states: what is beneficial to one entity is inevitably harmful to another, and there is no intermediate zone. There may be some opportunities for cooperation (for example, alliances such as NATO), but such cooperation is often short-lived and narrowly focused. John Mearsheimer, a professor at the University of Chicago, argues that in an anarchic world of states with no authority above them, countries constantly look for opportunities to gain power over competitors because they must rely solely on themselves for security. This is undoubtedly a pessimistic view, but it is of great inspiration to us.
+
+In the digital currency system we design, countries around the world will be better connected through super-sovereign digital currencies, and all economic individuals in the world will become nodes in the global digital money network. Economic networking will not only promote economic exchanges, but also make political, cultural, diplomatic, military and educational exchanges more frequent. As mentioned above, many countries still hold a deep-rooted zero-sum mindset, and this kind of thinking will be strongly challenged in the digital currency world.
+
+The common goal rests on the technical framework and legal guarantees, and it goes beyond trust in sovereignty. Because of this influence, international relations will keep developing towards integration and cooperation, and confrontation between countries will be reduced. This is also in line with the trend of the current world economy and the mainstream of politics. At the same time, digital currency's advantages in countering terrorist financing, money laundering and corruption are also expected to help solve globally sensitive issues, such as terrorist activities and ethnic separatist movements, which will further reduce regional friction and improve relations between countries.
+
+# 7 Model Testing
+
+We construct a model that represents a global decentralized digital currency system. Our model is based on reasonable assumptions and the characteristics of the digital currency system, combined with objective facts and economic principles such as the impossible trinity. Since we do not rely on specific data in the process of building the models, when the data change our results can be transformed into new results corresponding to reality. Whether we add data to the model or remove data from it, the results do not fluctuate greatly, indicating that our model has good stability and weak sensitivity.
+
+We quantify the factors influencing income and cost in the global digital money system using the fuzzy comprehensive evaluation method (FCE) and the analytic hierarchy process (AHP). Since the weights these tools produce are strongly subjective, we complement the analytic hierarchy with dynamic analysis. According to the results of the computer test, the function expression becomes more accurate as the factor $w$ increases.
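The AHP step can be sketched as follows: weights are the normalized principal eigenvector of a pairwise-comparison matrix, obtained here by power iteration. The 3x3 judgment matrix below is a made-up example, not our actual criteria matrix:

```python
# AHP weight derivation via power iteration (pure Python).
# The pairwise-comparison values are illustrative only.
A = [
    [1.0,     3.0,     5.0],   # criterion 1 vs criteria 1, 2, 3
    [1.0/3.0, 1.0,     2.0],
    [1.0/5.0, 1.0/2.0, 1.0],
]

def ahp_weights(A, iters=100):
    """Normalized principal eigenvector of A by repeated multiplication."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

weights = ahp_weights(A)
```

The resulting weights sum to one and preserve the ordering of the pairwise judgments; a consistency-ratio check would normally follow in a full AHP workflow.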
+
+Based on the above stability and sensitivity analysis, our model is robust.
+
+# 8 Strengths and Weaknesses
+
+- Strengths
+    - We apply scientific methods in approximating our model parameters, such as the Fuzzy Comprehensive Evaluation method (FCE) and the Analytic Hierarchy Process (AHP).
+    - Our model of the global digital currency system takes various factors into account.
+    - We build our model on reliable economic theory and link it with realistic features of digital currency.
+    - According to the characteristics of economic variables, we consider different situations in the long and short term.
+    - We put forward our own ideas on the regulatory mechanism of this system.
+    - We extend our model to the long run and analyze the impact from different levels and perspectives.
+
+- Weaknesses
+    - The value setting in the pairwise-comparison criteria matrix of AHP is somewhat subjective.
+    - In the analysis of different countries, our classification of country types is rough.
+    - We make assumptions and simplify reality while establishing our model, and these simplifications may introduce some errors.
+    - Some models are only theoretical inferences and lack data for testing.
+
\ No newline at end of file
diff --git a/MCM/2019/F/1916704/1916704.md b/MCM/2019/F/1916704/1916704.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1d0eff7c6ff752eba4095e52d58a78cf7f8f450
--- /dev/null
+++ b/MCM/2019/F/1916704/1916704.md
@@ -0,0 +1,687 @@
+# 2019 Interdisciplinary Contest in Modeling (ICM) Summary Sheet
+
+(Attach a copy of this page to each copy of your solution paper.)
+
+# A New Era of World Finance: The Strategy for a Global Decentralized Digital Financial Market
+
+# Summary
+
+In the current situation, people attach great importance to digital currency for the sake of convenient transactions. Therefore, based on the real financial and economic situation, our goal is to propose a global decentralized digital financial system and verify its feasibility in addressing the lack of supervision and the anonymity of digital currency at this stage. We divide the job into three phases.
+
+Firstly, we use an improved DSGE model to describe the system we establish. The model covers four sectors: households, firms, commercial banks and the central bank. We consider three situations: the country completely abandons the original currency, does not completely abandon it, or the central bank does not issue digital currency. For each situation, we obtain the financial and economic characteristics of the economic steady state. Our model is broad enough to accommodate the circumstances of different countries. We then study the macroeconomic effects of digital currency technology shocks.
+
+Secondly, we select 14 indicators to measure the key factors affecting the system and divide them into four categories: access factors, growth factors, stability factors and security factors. Based on these key factors, we propose a global regulatory mechanism. In addition, we address the risk of money laundering in digital currencies by establishing a KNN (k-Nearest-Neighbor) classifier model based on the vector space model, which can help a country judge the money-laundering risk of digital currency.
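A minimal sketch of such a KNN classifier over feature vectors follows; the two-dimensional transaction features and labels are fabricated purely for illustration (the real model uses the indicators described in Section 5.2):

```python
import math
from collections import Counter

def knn_classify(query, samples, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    (Euclidean distance) in `samples`, a list of (vector, label) pairs."""
    by_dist = sorted(samples, key=lambda s: math.dist(query, s[0]))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors (e.g. transaction frequency, average amount),
# purely illustrative:
samples = [
    ([0.90, 0.80], "laundering"),
    ([0.80, 0.90], "laundering"),
    ([0.85, 0.70], "laundering"),
    ([0.10, 0.20], "normal"),
    ([0.20, 0.10], "normal"),
    ([0.15, 0.25], "normal"),
]
label = knn_classify([0.82, 0.75], samples, k=3)
```

A transaction profile close to the suspicious cluster is voted "laundering"; in practice the features would be the normalized indicator vectors, and k would be tuned.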
+
+Finally, to extend our analysis, we modify the SAR model to reflect the long-term effects of the new financial system. We select the economic freedom indices of 163 countries and add spatial factors to study the spatial spillover effect of a central bank issuing digital currency. As a result, the emergence of a new monetary system will gradually improve the soundness of the banking industry, the performance of the global economy and the economic relationships between countries.
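The SAR specification takes the form $y = \rho W y + X\beta + \varepsilon$. For $|\rho| < 1$ and a row-standardized weight matrix $W$ it can be solved by fixed-point iteration; the three-country example below uses made-up numbers:

```python
# Solve y = rho * W y + Xb by fixed-point iteration (converges for
# |rho| < 1 with a row-standardized W). All numbers are illustrative,
# and the error term is omitted for clarity.
rho = 0.4
W = [                    # row-standardized spatial weights, 3 countries
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
]
Xb = [1.0, 2.0, 3.0]     # X @ beta for each country

y = [0.0, 0.0, 0.0]
for _ in range(200):
    y = [rho * sum(W[i][j] * y[j] for j in range(3)) + Xb[i]
         for i in range(3)]
```

The iteration converges to $(I - \rho W)^{-1} X\beta$; each country's outcome exceeds its own $X\beta$, which is precisely the spatial spillover effect.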
+
+In a nutshell, we simulate the economic steady-state constraints for different proportions of digital currency. The results show that when the central bank issues digital currency to completely replace the original currency, the economy can avoid violent fluctuations and reach the steady state as soon as possible.
+
+# Contents
+
+1 Introduction
+
+1.1 Background
+1.2 Our work
+
+2 Assumptions
+
+3 Model 1: Financial System Construction-DSGE Model
+
+3.1 Assumptions
+3.2 Variable Nomenclature
+3.3 Model
+3.4 Implementation and Results
+3.5 Future Discussion
+3.6 Sensitivity Analysis
+
+4 Key Factors: Principal Component Analysis (PCA)
+
+5 Model 2: Model of Construction Regulatory Mechanisms
+
+5.1 Overall Design
+5.2 Model: KNN (K Nearest Neighbor) Classifier Model Based on Vector Space Model
+
+5.2.1 Assumptions
+5.2.2 Parameters
+5.2.3 Variable Nomenclature
+5.2.4 Model
+5.2.5 Implementation and Results
+5.2.6 Sensitivity Analysis
+
+6 Model 3: Future Effects-Spatial Autoregressive Model
+
+6.1 Assumptions
+6.2 Variable Nomenclature
+6.3 Model
+6.4 Implementation and Results
+6.5 Sensitivity Analysis
+
+7 Evaluation of Models
+
+7.1 Strengths
+7.2 Weaknesses
+
+8 Conclusion
+
+Policy Recommendation
+
+References
+
+# 1 Introduction
+
+# 1.1 Background
+
+The development of information technology and the skyrocketing price of digital currencies have effectively promoted their explosive growth. Up to now, there are 16,344 digital currencies in the world with a total market capitalization of US$120.2 billion [1]. As a new currency form, digital currency has gradually reduced transaction costs and improved trading efficiency. Almost every national central bank is actively constructing and developing legal digital currency.
+
+Defects such as the instability of this currency, its ease of escaping the supervision of regulatory authorities, and its ease of use in criminal activities have not only hindered the development of digital currency but also exacerbated the instability of the world financial system. Establishing a coordinated and effective global digital financial market system and its corresponding regulatory system is the primary task for ensuring the standardized and healthy development of digital currency.
+
+# 1.2 Our work
+
+1. We first construct a model that adequately represents a viable global decentralized digital financial market system.
+2. Next, we find key factors that influence the access, growth, security and stability of the financial system at both the individual, national, and global levels.
+3. Besides, we establish a set of regulatory mechanisms for the financial system.
+4. Finally, we analyze the influence of the financial system on banking industry, economics and international relations, and then predict their long-term effects.
+
+
+Figure 1: Our Work
+
+# 2 Assumptions
+
+The following assumptions are all applied for all models in this paper:
+
+1. We won't ask every country to issue a unified digital currency.
+2. We won't ask every country to completely abandon the traditional currency and substitute digital currency for it.
+3. Digital currency is only issued by the central bank.
+
+# 3 Model 1: Financial System Construction-DSGE Model
+
+In this paper, we consider establishing a globally decentralized digital financial market. We do not require every country in the world to issue a unified currency, nor do we require a state to completely abandon physical currency and replace it with digital currency. What we hope to establish is a financial system in which digital currency is issued by the central bank; this is also the possibility that most economies are discussing at the present stage. Moreover, we only consider the most general case, namely that digital currency is issued only by the central bank.
+
+The model in this section is based on the models of Barrdear and Kumhof (2016) [2] and Qian (2018) [3]. Using a Dynamic Stochastic General Equilibrium (DSGE) model, we can analyze whether the introduction of digital currency by the central bank is feasible by exploring the impact of legal digital currency on a country's macroeconomy. Since Qian (2018) [3] assumes that the central bank's digital currency completely replaces physical cash, that is, the country completely abandons the original currency, which deviates from reality, this section improves the DSGE model to cover four sectors: households, firms, commercial banks and the central bank. Considering the different situations in which a country completely abandons the original currency, does not completely abandon it, or does not issue digital currency, and combining them with economic reality, we analyze whether the central bank's issuance of digital currency is feasible for different countries and how it affects their macroeconomies.
+
+# 3.1 Assumptions
+
+1. We do not consider foreign currency deposit reserves, nor import and export factors.
+2. Digital currency and material currency as payment instruments can also be used as interest-bearing assets with the same interest rates.
+3. Nominal prices and nominal wages are sticky.
+4. Consumer spending habits will not change for a while.
+5. From the perspective of banks and customers, all bank deposits are indistinguishable.
+6. Rational person assumptions, that is, for individuals, consumption brings positive effects, and work brings negative effects.
+7. There are two levels of vendors, the final vendor and the intermediate vendor. The intermediate manufacturer is in a state of monopolistic competition, and the final manufacturer is in a state of complete competition.
+8. Intermediate vendors rely entirely on external financing (bank deposits) for capital investment and hiring workers. The adjustment of the investment strategy of the intermediate manufacturer requires a corresponding cost.
+9. The interest rate on bank deposit reserves is equal to the interest rate of the central bank's digital currency. The bank deposit reserve rate is the risk-free rate.
+10. There is an interest rate corridor mechanism. Commercial banks have two financing channels, which can be used to finance the public or borrow from the central bank.
+11. Commercial banks do not retain monetary funds.
+12. The interest rate pricing method of commercial banks is the risk-free rate plus the credit risk pricing of commercial banks. The bank deposit interest rate is lower than the commercial bank standing loan convenience rate.
+13. Monetary policy targets inflation.
+
+Table 1: Variable Nomenclature of Model 1
+
+| Abbreviation | Description | Abbreviation | Description |
| --- | --- | --- | --- |
| Pt | Nominal commodity price index | μ | Demand elasticity coefficient of intermediate products |
| ct | Actual consumption of the family | At | full factors production rate |
| Dt | Balance of bank deposits held by the family | γ | Elastic coefficient of capital output |
| Bt | The size of the central bank's digital currency held by the family | χ | Changes in the level of science and technology |
| Et | The size of the physical currency of the central bank held by the family | θt | Impact of technology |
| Wt | Nominal wage | kt | Firm-owned capital |
| nt | Actual labor supply | it(z) | Firm investment |
| RDt-1 | Bank deposit interest rate | RLt | Commercial bank loan interest rate |
| Rt-1 | Central bank digital currency interest rate and central bank real currency interest rate | B′t | Commercial bank loan to central bank |
| Gt | Firm profit dividend | Mt | Deposit reserve |
| dt | Actual bank deposit balance held by the family | RCt-1 | Deposit reserve ratio |
| bt | The size of the actual central bank digital currency held by the family | RTt-1 | Central bank loan interest rate |
| et | The size of the actual central bank's physical currency held by the family | σ | The cost of commercial banks in the process of conducting loan business |
| gt | Actual manufacturer profit dividend | wt | Commercial Bank Risk Management Capability |
| wt | Material currency | δ | Capital depreciation rate |
| ut | Current utility function | ρw | Commercial Bank Risk Management Capability Smoothing Index |
| φ | Negative utility of measurement work | ρv | Family holding currency smoothing index |
| φd | Measuring the negative utility of using bank deposits | ρr1 | Commercial Bank Credit Risk Premium S-moothing Index |
| αt | Proportion of bank deposits held by the family | ρr2 | Commercial Bank Credit Risk Premium S-moothing Index |
| vt | Proportion of household holding currency | ρ | Inflation adjustment factor |
| ℓvt | Proportion of digital currency held by the family | r1t | Commercial bank credit risk (with institution-al guarantee) |
| qt | The impact of the central bank's digital curren-cy | r2t | Commercial bank credit risk (removal of insti-tutional guarantee) |
| β | Intertemporal discount factor | φP | Manufacturer price adjustment cost factor |
| z | Manufacturer's serial number | φk | Firm capital adjustment cost factor |
| yt(z) | Intermediate product | r10 | Phase 0 commercial bank credit risk (with insti-stutional guarantee) |
| yt | Final product | r20 | Phase 0 commercial bank credit risk (removal system guarantee) |
+
+# 3.2 Variable Nomenclature
+
+# 3.3 Model
+
+**Family** First of all, combined with the actual situation, we assume the family's nominal budget constraint is:
+
+$$
+P _ {t} c _ {t} + D _ {t} + B _ {t} + E _ {t} = W _ {t} n _ {t} + D _ {t - 1} R _ {t - 1} ^ {D} + B _ {t - 1} R _ {t - 1} + E _ {t - 1} R _ {t - 1} + G _ {t} \tag {1}
+$$
+
+Turn it into an actual budget constraint, that is, divide both sides by $P_{t}$ .
+
+$$
+c _ {t} + d _ {t} + b _ {t} + e _ {t} = w _ {t} n _ {t} + d _ {t - 1} R _ {t - 1} ^ {D} + b _ {t - 1} R _ {t - 1} + e _ {t - 1} R _ {t - 1} + g _ {t} \tag {2}
+$$
+
+At the same time, we assume the current utility function of the family.
+
+$$
+u _ {t} = \log c _ {t} - \phi n _ {t} - \frac {\phi_ {d}}{2} \left(d _ {t} - d _ {t - 1}\right) ^ {2} \tag {3}
+$$
+
+Among them, the first term represents the utility of actual consumption to consumers. The parameter $\phi$ measures the negative utility of work; the larger the value, the higher the negative effect. The third term describes the negative effect of changes in actual bank deposits on households: since households face some restrictions in using bank deposits and must pay a certain "cost", the greater the fluctuation of bank deposits, the greater the negative effect on the family.
+
+To reflect the alternative advantages of central bank digital currency versus bank deposits, we introduce a second constraint:
+
+$$
+\alpha_ {t} c _ {t} \leq d _ {t} \tag {4}
+$$
+
+$$
+\alpha_ {t} = 1 - v _ {t} \tag {5}
+$$
+
+$$
+\ell v _ {t} = \rho_ {v} \ell v _ {t - 1} + (1 - \rho_ {v}) (\alpha + (1 - \ell) v _ {t - 1}) + q _ {t} \tag {6}
+$$
+
+Here, $v_{t}$ indicates the proportion of households holding money, $\ell v_{t}$ indicates the proportion of households holding digital currency, and $q_{t}$ indicates the impact of central bank digital currency. It can be seen
+
+that when $q_{t}$ is positive, an increase in $q_{t}$ will increase the amount of digital money held by households and reduce the size of bank deposits.
+
+Maximize the utility of the family based on (2) and (4):
+
+$$
+\max E _ {t} \sum_ {j = 0} ^ {\infty} \beta^ {j} u _ {t + j} \tag {7}
+$$
+
+$\beta$ represents the intertemporal discount factor for utility, and its value is between 0 and 1. Solving the Lagrangian yields the first-order conditions, which we will not explain at length here.
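For reference, a sketch of those first-order conditions (our own derivation, with $\lambda_t$ and $\mu_t$ denoting the Lagrange multipliers on the real budget constraint (2) and the liquidity constraint (4)). Differentiating (7) with respect to $c_t$, $n_t$ and $b_t$ gives

$$
\frac{1}{c_t} = \lambda_t + \mu_t \alpha_t, \qquad \phi = \lambda_t w_t, \qquad \lambda_t = \beta E_t \left[ \lambda_{t+1} R_t \right]
$$

and differentiating with respect to $d_t$, where the deposit-adjustment term in (3) appears at both $t$ and $t+1$, gives

$$
\lambda_t - \mu_t + \phi_d \left( d_t - d_{t-1} \right) = \beta E_t \left[ \lambda_{t+1} R_t^D + \phi_d \left( d_{t+1} - d_t \right) \right]
$$

The condition for $e_t$ mirrors that for $b_t$, since both assets earn $R_t$ in (2).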
+
+**Firm** According to the New Keynesian DSGE framework, it is assumed that there are two kinds of firms, final firms and intermediate firms, and that final firms are in perfect competition. They purchase the intermediate product $y_{t}(z)$ from intermediate firm $z$ to produce the final product $y_{t}$, where $z$ indexes the firms and $z \in [0,1]$. The aggregation function is:
+
+$$
+y _ {t} = \left(\int_ {0} ^ {1} y _ {t} (z) ^ {\frac {\mu - 1}{\mu}} d z\right) ^ {\frac {\mu}{\mu - 1}} \tag {8}
+$$
+
+Solving the first-order condition of the final firms' profit maximization gives:
+
+$$
+y _ {t} (z) = \left(\frac {P _ {t} (z)}{P _ {t}}\right) ^ {- \mu} y _ {t} \tag {9}
+$$
+
+Among them, the total price index is defined as:
+
+$$
+P _ {t} = \left(\int_ {0} ^ {1} P _ {t} (z) ^ {1 - \mu} d z\right) ^ {\frac {1}{1 - \mu}} \tag {10}
+$$
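+As a quick consistency check of (9) and (10): with the CES price index and the isoelastic demands, total spending on intermediates adds back up to $P_{t} y_{t}$. The sketch below verifies this numerically on a discretized $z$ grid, with $\mu = 5$ from the calibration table and illustrative prices.

```python
def price_index(prices, mu=5.0):
    """CES price index P = (sum_z P(z)^(1-mu) dz)^(1/(1-mu)) on a z grid."""
    dz = 1.0 / len(prices)
    return sum(p ** (1 - mu) * dz for p in prices) ** (1.0 / (1 - mu))

def demands(prices, y, mu=5.0):
    """Demand for each variety, y(z) = (P(z)/P)^(-mu) * y, as in (9)."""
    P = price_index(prices, mu)
    return [(p / P) ** (-mu) * y for p in prices]

prices = [0.9, 1.0, 1.1, 1.2]     # illustrative intermediate prices
y = 10.0                          # final output
dz = 1.0 / len(prices)
P = price_index(prices)
expenditure = sum(p * q * dz for p, q in zip(prices, demands(prices, y)))
# expenditure equals P * y up to floating-point error
```

+The identity $\int_{0}^{1} P_{t}(z) y_{t}(z)\,dz = P_{t} y_{t}$ holds for any price profile, which is what makes (10) the appropriate aggregate price index for the demand system (9).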
+
+Assuming that all intermediate firms' production functions satisfy the Cobb-Douglas production function, they are:
+
+$$
+y _ {t} (z) = A _ {t} k _ {t} ^ {\gamma} (z) n _ {t} ^ {1 - \gamma} (z) \tag {11}
+$$
+
+Among them, $A_{t}$ represents total factor productivity, which is determined by the technological level of the overall economy:
+
+$$
+A _ {t} = e ^ {\chi t + \theta_ {t}} \tag {12}
+$$
+
+$$
+\theta_ {t} = \rho_ {\theta} \theta_ {t - 1} + \varepsilon_ {\theta t} \tag {13}
+$$
+
+Here $\chi > 0$ means that the level of technology continues to increase over time. $k_{t}$ represents capital, the corresponding depreciation rate is defined as $\delta$ , and intermediate firms can accumulate capital by investing $i_{t}(z)$ :
+
+$$
+k _ {t + 1} (z) = i _ {t} (z) + (1 - \delta) k _ {t} (z) \tag {14}
+$$
+
+It has been assumed that firms rely entirely on bank loans for capital investment and paying workers' wages, so there are:
+
+$$
+L _ {t} (z) = W _ {t} n _ {t} (z) + P _ {t} i _ {t} (z) \tag {15}
+$$
+
+Accordingly, the firm repays $R_{t}^{L}L_{t}(z)$, where $R_{t}^{L}$ is the bank loan interest rate for the period. In addition, price adjustment and capital adjustment require firms to pay certain management costs, defined as follows:
+
+$$
+C _ {P} \left(P _ {t} (z)\right) = \frac {\phi_ {P}}{2} \left[ \frac {P _ {t} (z) - P _ {t - 1} (z)}{P _ {t - 1} (z)} \right] ^ {2} y _ {t} (z) \tag {16}
+$$
+
+$$
+C _ {k} \left(k _ {t} (z)\right) = \frac {\phi_ {k}}{2} \left[ k _ {t} (z) - k _ {t - 1} (z) \right] ^ {2} \tag {17}
+$$
+
+Therefore, the actual profit of the intermediate firm is:
+
+$$
+g _ {t} (z) = \frac {P _ {t} (z)}{P _ {t}} y _ {t} (z) - \frac {R _ {t} ^ {L} L _ {t} (z)}{P _ {t - 1}} - C _ {P} \left(P _ {t} (z)\right) - C _ {k} \left(k _ {t} (z)\right) \tag {18}
+$$
+
+The goal of the intermediate firm is to maximize the expected discounted value of each period of profit, which can be expressed as:
+
+$$
+\max E _ {t} \sum_ {j = 1} ^ {\infty} \psi_ {t + j, t + j - 1} g _ {t + j} \tag {19}
+$$
+
+Similarly, the first-order conditions of (19) with respect to $n_t(z)$ and $k_t(z)$ follow from the Lagrangian and are likewise not detailed here.
+
+Commercial Bank Commercial banks have two financing channels: they can raise funds through bank deposits $D_{t}$ or borrow $B^{\prime}_{t}$ from the central bank. Once funds are raised, a commercial bank can deposit them with the central bank as reserves $M_{t}$ or lend them to firms as $L_{t}$. Therefore, for the commercial banking sector:
+
+$$
+L _ {t} + M _ {t} = D _ {t} + B ^ {\prime} _ {t} \tag {20}
+$$
+
+Expressed in real variables, this becomes:
+
+$$
+l _ {t} + m _ {t} = d _ {t} + b ^ {\prime} _ {t} \tag {21}
+$$
+
+For commercial banks, the actual profit is:
+
+$$
+h _ {t} = \frac {m _ {t - 1} R _ {t - 1} ^ {C}}{\pi_ {t}} + \frac {l _ {t - 1} R _ {t - 1} ^ {L}}{\pi_ {t}} - \frac {d _ {t - 1} R _ {t - 1} ^ {D}}{\pi_ {t}} - \frac {b _ {t - 1} ^ {\prime} R _ {t - 1} ^ {T}}{\pi_ {t}} - \sigma l _ {t} - \frac {w _ {t} d _ {t}}{m _ {t}} \tag {22}
+$$
+
+Among them, $R_{t-1}^{C}$ is the interest rate paid on deposit reserves and $R_{t-1}^{T}$ the interest rate on borrowing from the central bank; together they constitute the lower and upper limits of the interest rate corridor. $\sigma l_{t}$ represents the cost a commercial bank incurs in conducting its loan business. The ratio $\frac{m_{t}}{d_{t}}$ is the deposit reserve ratio, so its inverse $\frac{d_{t}}{m_{t}}$ measures the commercial bank's credit-creation ability. $w_{t}$ reflects the risk-management ability of commercial banks: the higher its value, the lower that ability. Referring to Qian Y (2018) [3], we assume that $w_{t}$ follows:
+
+$$
+w _ {t} = \left(1 - \rho_ {w}\right) w + \rho_ {w} w _ {t - 1} + j _ {t} \tag {23}
+$$
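+The bank's real profit (22) can be transcribed directly. All rates (gross) and balance-sheet quantities below are illustrative assumptions, not the paper's calibration.

```python
def bank_profit(m_prev, l_prev, d_prev, b_prev,
                R_C, R_L, R_D, R_T, pi,
                sigma, w, l_now, d_now, m_now):
    """Real bank profit h_t from (22): interest earned on reserves and loans,
    minus interest paid on deposits and central-bank borrowing, minus the
    loan operating cost sigma*l_t and the risk-management cost w_t*d_t/m_t."""
    return (m_prev * R_C / pi + l_prev * R_L / pi
            - d_prev * R_D / pi - b_prev * R_T / pi
            - sigma * l_now - w * d_now / m_now)

# illustrative gross rates and balance-sheet quantities
h = bank_profit(m_prev=0.2, l_prev=1.0, d_prev=1.0, b_prev=0.2,
                R_C=1.03, R_L=1.054, R_D=1.034, R_T=1.05, pi=1.02,
                sigma=0.006, w=0.002, l_now=1.0, d_now=1.0, m_now=0.2)
```

+With the corridor rates bracketing the deposit and loan rates, the resulting profit is close to zero, as one would expect near a competitive steady state.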
+
+Because the deposit reserve and the central bank's digital currency are both central bank liabilities, we assume here that the two are equal, namely:
+
+$$
+R _ {t} ^ {C} = R _ {t} \tag {24}
+$$
+
+According to commercial banks' interest rate pricing method, i.e., the risk-free rate plus credit risk premia, we have:
+
+$$
+R _ {t} ^ {D} = R _ {t} + r _ {1 t} \tag {25}
+$$
+
+$$
+R _ {t} ^ {L} = R _ {t} + r _ {1 t} + r _ {2 t} \tag {26}
+$$
+
+$$
+r _ {1 t} = \left(1 - \rho_ {r _ {1}}\right) r _ {1 0} + \rho_ {r _ {1}} r _ {1 t - 1} + \varepsilon_ {r _ {1 t}} \tag {27}
+$$
+
+$$
+r _ {2 t} = \left(1 - \rho_ {r _ {2}}\right) r _ {2 0} + \rho_ {r _ {2}} r _ {2 t - 1} + \varepsilon_ {r _ {2 t}} \tag {28}
+$$
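+The rate block (25)-(28) can be sketched as follows: the deposit and loan rates are the policy rate plus AR(1) risk premia. The values $\rho_{r_1} = \rho_{r_2} = 0.8$, $r_{10} = 0.4\%$, $r_{20} = 2\%$ follow the calibration table; the shock path and the $3\%$ benchmark rate are illustrative.

```python
def ar1_path(r0, rho, shocks):
    """r_t = (1 - rho) * r0 + rho * r_{t-1} + eps_t, started at the
    steady-state value r0, as in (27)-(28)."""
    path = [r0]
    for eps in shocks:
        path.append((1 - rho) * r0 + rho * path[-1] + eps)
    return path

R = 0.03                                          # illustrative policy rate
r1 = ar1_path(0.004, 0.8, [0.001] + [0.0] * 9)    # deposit premium, one shock
r2 = ar1_path(0.02, 0.8, [0.0] * 10)              # loan premium, no shocks
R_D = [R + p1 for p1 in r1]                       # deposit rate, eq. (25)
R_L = [R + p1 + p2 for p1, p2 in zip(r1, r2)]     # loan rate, eq. (26)
```

+Without shocks a premium stays at its steady-state value; a one-off shock decays at rate $\rho$, and the loan rate always sits above the deposit rate by the second premium $r_{2t}$.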
+
+Maximize the expected value of the total discounted profits of commercial banks in each period:
+
+$$
+\max E _ {t} \sum_ {j = 1} ^ {\infty} \psi_ {t + j, t + j - 1} h _ {t + j} \tag {29}
+$$
+
+According to the Lagrangian equation, the optimal first-order condition is obtained.
+
+Central Bank For the central bank, it needs to maintain equilibrium of its balance sheet:
+
+$$
+B _ {t} + E _ {t} + M _ {t} = B ^ {\prime} _ {t} \tag {30}
+$$
+
+The issuance of digital currency creates a price-based monetary policy tool for the central bank. Assuming that monetary policy targets inflation, $R_{t}$ is set according to the following rule:
+
+$$
+R _ {t} = (1 - \rho) \left[ \frac {1}{\beta} + \varphi_ {\pi} \left(\pi_ {t} - 1\right) \right] + \rho R _ {t - 1} + \varepsilon_ {t} ^ {R} \tag {31}
+$$
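+Written in the standard additive interest-rate-smoothing form, rule (31) can be sketched as below; $\beta = 0.971$ follows the text, while $\rho = 0.8$ and $\varphi_{\pi} = 1.5$ are illustrative assumptions.

```python
def policy_rate(R_prev, pi, beta=0.971, rho=0.8, phi_pi=1.5, eps=0.0):
    """Inertial inflation-targeting rule: a weighted average of the
    inflation-responsive target 1/beta + phi_pi*(pi - 1) and last
    period's rate, plus a policy shock eps."""
    target = 1.0 / beta + phi_pi * (pi - 1.0)
    return (1 - rho) * target + rho * R_prev + eps

R_ss = 1.0 / 0.971               # steady-state (gross) rate when pi = 1
R_hot = policy_rate(R_ss, 1.02)  # response to 2% inflation above target
```

+At the steady state $\pi_t = 1$ the rule fixes $R_t$ at the natural rate $1/\beta$; inflation above target pushes the rate up, gradually because of the smoothing weight $\rho$.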
+
+General Equilibrium Under the optimality conditions of the above four sectors, the general equilibrium condition is that total output equals total demand:
+
+$$
+y _ {t} = c _ {t} + i _ {t} + C _ {P} \left(P _ {t} (z)\right) + C _ {k} \left(k _ {t} (z)\right) + \sigma l _ {t} + \frac {w _ {t} d _ {t}}{m _ {t}} \tag {32}
+$$
+
+# 3.4 Implementation and Results
+
+This model can be applied to various economies. Each country can determine the parameters according to its economic development status and comprehensive national development level, obtain the constraints under steady-state conditions, and compare them with its current situation to judge whether issuing digital currency is feasible. For simplicity, we consider the situation of all countries as far as possible and determine the parameters accordingly. The current LIBOR (US) December interest rate is approximately $3\%$; taking it as the benchmark interest rate, the intertemporal discount factor $\beta$ is calibrated to 0.971. The share of total wages in GDP varies widely between developed and developing countries: European and American countries can reach more than $50\%$, African countries are generally below $20\%$, and in some countries, such as China, the value is markedly lower. Considering the global situation, we set $\gamma$ to 0.7. We set $r_{10}$ and $r_{20}$ to $0.4\%$ and $2\%$, based on the differences between the one-year bank deposit rate, the one-year bank loan rate, and the deposit reserve rate. The calibration of the other parameters is shown in the table below.
+
+Table 2: Parameter Calibration Table of Model 1
+
| Parameter | Value | Parameter | Value |
| β | 0.7 | ρv | 0.8 |
| φ | 1 | ρw | 0.6 |
| α | 0.5 | ρ | 0.8 |
| γ | 0.7 | ρr1 | 0.8 |
| χ | 0.02% | ρr2 | 0.8 |
| δ | 2% | φP | 4 |
| w | 0.2 | φk | 4 |
| σ | 0.60% | r10 | 0.40% |
| μ | 5 | r20 | 2% |
+
+According to the parameter values set above, we calculate the corresponding deposit reserve ratio and the share of household consumption in GDP under steady-state conditions. When $\ell = 1$, that is, when digital currency completely replaces the existing physical currency, the steady-state values of these indicators are $17.8\%$ and $75\%$, respectively. See Table 3 for details.
+
+Table 3: The Results
+
+| Value | Deposit reserve ratio(m/d) | Household consumption as a share of GDP(c/y) |
| ℓ = 1 | 17.8% | 75% |
| ℓ = 0.5 | 26% | 68.2% |
| ℓ = 0 | 32.9% | 63% |
+
+In recent years, some major economies have lowered or cancelled the statutory deposit reserve ratio, while others, such as China, maintain a high one. At present, most countries maintain a deposit reserve ratio of around $15\%$. According to data released by the World Bank, final consumption accounted for approximately $80\%$ of GDP worldwide in 2017. This paper does not consider foreign currency deposits or imports and exports; if these factors were considered, the steady-state indicators should be adjusted downward appropriately. Overall, the calibrated model matches the characteristics of current world economic and financial conditions well. Table 3 also shows that the model is most consistent with the current situation when $\ell = 1$, which means that, at this stage, the public holding all of its currency as digital currency is more conducive to economic stability.
+
+Therefore, based on this model, we believe that the establishment of a central bank to issue a digital currency financial system is feasible for the current international economic situation.
+
+# 3.5 Future Discussion
+
+In the above model, we assume that the technical impact of the central bank issuing digital currency is $q_{t}$. In the analysis so far we set it to a positive number; now we treat it as a stochastic shock. Following the economic setting of Qian Y (2018) [3], we set the standard deviation of the central bank digital currency shock $q_{t}$ to 0.0006 and set $\ell$ to 1. The figure below shows the impact of $q_{t}$ on the overall macroeconomy.
+
+
+Figure 2: The impact of the technical shock of central bank digital currency on consumption and output
+
+
+
+Figure 2 shows that a positive digital currency shock contributes to increases in household consumption and economic output in the long run, further supporting the feasibility of the model.
+
+# 3.6 Sensitivity Analysis
+
+We let the calibrated value of each parameter fluctuate, one at a time, within $10\%$ of its original value. This does not affect the conclusions we have drawn, so we consider the model robust.
+
+# 4 Key Factors: Principal Component Analysis (PCA)
+
+Model 1 only discusses the feasibility of the central bank issuing digital currency, which is of great significance for a country's economic growth and stability. In this section, we therefore analyze the factors that influence the financial system at the individual, national and world levels.
+
+We selected 14 indicators to measure key factors in the access, growth, stability, and safety of the new financial system, and used principal component analysis (PCA) to screen out key variables. Finally, 11 of them are grouped into four categories, which represent the key factors of access, growth, stability and security of the financial system.
+
+PCA condenses overlapping variables into a few key factors: it finds a small number of linear combinations that carry the main information, with no overlap of information between the combinations. This gives the principal components below:
+
+$$
+\left\{ \begin{array}{c} Y _ {1} = a _ {1} X = a _ {1 1} X _ {1} + a _ {1 2} X _ {2} + \dots + a _ {1 p} X _ {p} \\ Y _ {2} = a _ {2} X = a _ {2 1} X _ {1} + a _ {2 2} X _ {2} + \dots + a _ {2 p} X _ {p} \\ \vdots \\ Y _ {m} = a _ {m} X = a _ {m 1} X _ {1} + a _ {m 2} X _ {2} + \dots + a _ {m p} X _ {p} \end{array} \right. \tag {33}
+$$
+
+- $a_{i}a_{i}^{\prime} = 1$ $(i = 1, 2, \ldots, m)$
+- $Y_{i}, Y_{j}$ are uncorrelated. ( $i \neq j, \quad i, j = 1, 2, \ldots, m$ )
+
+Based on this issue, following international practice we select countries with a population of more than 20 million or a total GDP of more than 100 billion US dollars. These countries are divided into eight grades according to per capita GDP in 2017. From each grade we randomly draw the same number of countries and collect 14 economic indicators for each, obtaining a 14-dimensional random vector $X = (X_{1},X_{2},\dots,X_{14})$. After the PCA calculation, following the principle of selecting principal components with eigenvalues $>1$ and a cumulative contribution rate of $80\%$, we obtain a smaller set of linear combinations that are uncorrelated with one another: $Y_{1},Y_{2},Y_{3},Y_{4}$. They are the first, second, third, and fourth principal components of the original variables $X_{1},X_{2},\dots,X_{14}$.
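+The selection step can be sketched with eigenvalues of the correlation matrix: keep components with eigenvalue greater than 1, and enough of them to pass the $80\%$ cumulative contribution threshold. The data below are random placeholders standing in for the country-by-indicator matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 14))               # 40 "countries" x 14 indicators
corr = np.corrcoef(X, rowvar=False)         # 14 x 14 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]           # sort eigenpairs descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
cum = np.cumsum(explained)
# keep components with eigenvalue > 1, and at least enough of them
# to reach 80% cumulative contribution
n_keep = max(int((eigvals > 1).sum()), int(np.searchsorted(cum, 0.8)) + 1)

Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardized indicators
scores = Z @ eigvecs[:, :n_keep]            # principal component scores Y
```

+Because the score vectors are projections onto orthogonal eigenvectors, their sample covariance is diagonal, which is exactly the "no overlap of information" property required of $Y_{1},\dots,Y_{m}$.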
+
+- Access factors: inflation rate, government corruption index and government credit rating
+
+It can be seen from the principal component coefficients that the first principal component consists of the inflation rate, the government corruption index and the government credit rating. One reason for the emergence of digital currency is that governments' excessive issuance of money has led to high inflation; Bitcoin, for example, was created partly to address this problem. Therefore, the inflation rate affects the access factor. The government's efficiency
+
+Table 4: Standard deviation, variance contribution rate, and cumulative contribution rate corresponding to the first four principal components of the normalized variable
+
+| Index | Comp.1 | Comp.2 | Comp.3 | Comp.4 |
| Standard deviation | 2.365112 | 1.4535807 | 1.2823521 | 1.08245266 |
| Variance contribution rate | 0.3995537 | 0.1509212 | 0.1174591 | 0.13369315 |
| Cumulative contribution rate | 0.3995537 | 0.5504749 | 0.6679339 | 0.80162705 |
+
+Table 5: The eigenvector corresponding to the first 4 principal components of the normalized variable
+
+| Influencing factor | Y1 | Y2 | Y3 | Y4 |
| Gross domestic product growth rate | 0.206143 | 0.201552 | -0.38098 | 0.079807 |
| Interest rate | 0.182811 | 0.11747 | -0.06887 | -0.04665 |
| Ratio of broad money to total reserves | -0.21747 | 0.023287 | 0.026979 | 0.388422 |
| Broad money(% of GDP) | -0.14679 | 0.412974 | 0.038546 | -0.05428 |
| Inflation rate | 0.276984 | 0.097393 | 0.223729 | -0.01165 |
| Government corruption index | -0.37868 | -0.0355 | 0.128694 | -0.16746 |
| Government credit rating | 0.38939 | -0.08538 | -0.12208 | -0.12838 |
| GDP | -0.21549 | 0.459926 | -0.23584 | -0.24366 |
| Current account | -0.30186 | -0.24827 | 0.013424 | 0.285172 |
| Budget | -0.03598 | -0.50062 | -0.32238 | -0.29023 |
| Population | -0.02716 | 0.346364 | 0.49569 | 0.047599 |
| Debt | -0.14336 | 0.28264 | 0.308497 | 0.199055 |
| Exchange rate | 0.057408 | -0.15336 | -0.23341 | -0.31339 |
| Unemployment rate | 0.215507 | 0.096393 | 0.459653 | 0.020225 |
+
+also influences the digital currency access factor, as reflected in the government corruption index and the government credit rating. The more corrupt a government is, the more it resists digital currency, either because digital currency would escape government regulation or because bureaucracy would make regulation costly. Moreover, a government with high credit has a relatively strong credit base that can support the issuance of credit currency.
+
+- Growth factors: broad money (% of GDP), gross domestic product, budget
+
+It can be seen from the principal component coefficients of Table 5 that the second principal component consists of broad money (as a percentage of GDP), gross domestic product, and budget. The broad money, M2 (quasi-currency), reflects the direct purchasing power and potential purchasing power of society, which can be used to measure the growth trend of future digital currencies. Gross domestic product is the premise and basis for future growth. Combined with the budget level, it can measure the future growth momentum.
+
+- Stabilizing factors: population, unemployment rate
+
+It can be seen from the coefficient of the principal component that the third principal component consists of population and unemployment rate. The stability of population growth or reduction is the basic foundation for the stability of digital currency issuance and frequency of its use. In general, the unemployment rate is an important factor in measuring the stability of a country's economic system. Therefore, it is a key factor in maintaining the stability of this new financial system.
+
+- Security factors: ratio of broad money to total reserves, exchange rate
+
+It can be seen from the coefficient of the principal component that the fourth principal component consists of the ratio of the broad money to the total reserve and the exchange rate. The ratio of broad money to total reserves reflects the level of a country's savings rate. Savings are the security of people's lives in a country when problems occur in the operation of the economic system, thus affecting the security of the financial system. Fluctuations in exchange rates can lead to insecurity in the financial system and have an impact on the security of the financial system.
+
+Therefore, in this section, we explore the key factors that influence the implementation of the financial system. We divide it into four categories, namely access factors, growth factors, stability factors and security factors. These aspects should be considered when countries consider adopting such a financial system.
+
+# 5 Model 2: Model of Construction Regulatory Mechanisms
+
+# 5.1 Overall Design
+
+Since digital currency as a means of payment and assets are active in people's trading and investment activities, the existence of risks will inevitably damage the interests of traders or investors to a certain extent. It mainly includes the risk of speculation caused by price fluctuations, the risk of illegal use of criminal activities such as money laundering, the illegal business risks of digital currency trading platforms, and the risk of wasting resources[7].
+
+For the regulation of digital currency, this paper sees three approaches. The first is to decentralize regulatory resources and supervise each issuer; to a certain extent, this can solve problems such as illegal transactions in digital currencies. But how can limited regulatory resources supervise all issuers? This requires countries to rely on existing high-end information technology to supervise issuers. The second is to back the digital currency system with physical assets. Such assets would have to be widely accepted by all countries in the digital monetary system, which requires unifying the views of global sovereign states and establishing uniform rules for the digital monetary system, such as how to pay, how to issue, and how to clear. The third is the issuance of legal digital currency with the central bank as issuer. Digital currency issued with the backing of sovereign state credit is not decentralized in issuance, while transactions still occur peer to peer.
+
+This paper adopts the third regulatory mechanism, that is, the issuance of legal digital currency by the central bank. This is also the way that the world's sovereign countries are actively exploring. For example, China, the United Kingdom and other countries are studying the distribution path of designing legal digital currency. The issuance of digital currency by the central bank can avoid the impact of the ups and downs of monetary value on the national economy and financial system, and can quickly transmit monetary policy, reduce speculation at home and abroad and also combat illegal and criminal activities to a certain extent. In order to realize this concept, a more complete regulatory system must be established.
+
+Firstly, we need to establish the legal status of digital currency. With the central bank as issuer and the central bank's credit as guarantee, the digital currency has a degree of universality, but the top-level legal design still needs consideration. Each country needs to set corresponding laws, regulations and system specifications and determine the essential attributes of digital currency according to its economic situation, the state of its financial system, and the development of digital currency, letting the public understand and learn to use digital currency so that it gradually replaces the existing physical currency. Besides, international organizations such as the World Bank and the IMF should establish common international standards to serve as a reference for each country.
+
+In addition, we need to establish a regulatory body. Since the issuer of digital currency is the central bank, the most appropriate regulator at this stage is also the central bank. First, central banks in most countries have established relevant research departments and have a relatively clear understanding of digital currencies, so supervision by them is feasible. Second, because the central bank controls the issuance of digital currency, it can set the corresponding issuance strategy. The issuance of digital currency by the central bank cannot be done overnight; it must proceed step by step, otherwise the influx of a large amount of legal digital currency would cause the collapse of the entire financial system. The most ideal way is to withdraw physical currency while issuing legal digital currency, maintaining a stable money supply within the financial system while digital currency replaces physical currency, so as to ensure the orderly and healthy development of the country. On the whole, only the central bank can shoulder this responsibility.
+
+Moreover, we need to establish an account real-name system. The central bank can issue currency directly to individuals and businesses, or distribute it to them through commercial banks. We believe each country can determine the method of distribution according to its current banking system and the way it issues physical currency.
+
+Finally, we hope to build a unified regulatory model on a global scale. Changes in a country's regulatory policies can cause fluctuations in that country's digital currency transactions, thereby affecting its value. Because digital currencies circulate internationally, individuals and institutions holding digital currency in other countries would be at risk. Therefore, governments should adopt the concept of global development, enhance awareness of international cooperation, and explore international cooperation in digital currency regulation: give full play to the important role of international organizations in a global unified regulatory system, build a global unified regulatory framework, and urge countries to share transaction data to achieve more standardized development of digital currency.
+
+Although such a regulatory system can more effectively prevent the problems of digital currency at this stage, it cannot fully explain that the regulatory mechanism can eliminate various risks, such as money laundering risks. Therefore, we have established the following models to assist the state in judging the risk of money laundering after the issuance of legal digital currency, thereby improving the regulatory mechanism.
+
+# 5.2 Model: KNN(K Nearest Neighbor) classifier model based on Vector Space Model
+
+This section uses the money laundering risk level assessment system built by HM Treasury to obtain a certain amount of source data, uses these data as a training set, and applies a KNN (k-nearest-neighbor) classifier based on the vector space model to learn the risk-level classification decision criterion automatically and classify the risk level of each money laundering method to be assessed. Finally, we build a mechanism to monitor global digital currencies: once other countries have accumulated sufficient data of their own through the use of the UK's money laundering risk assessment system, they can apply our model to their own indicators to determine the money laundering risk level in their respective areas, without continuing to rely on the UK's system. A country can thereby not only assess and monitor the money laundering risk of its digital currency but also save the cost of long-term use of the UK's risk assessment system.
+
+# 5.2.1 Assumptions
+
+1. Contiguity hypothesis: Money laundering methods with the same level of risk constitute a class. The same type of money laundering methods will constitute an adjacent area, and different types of adjacent areas do not overlap each other.
+2. In the national financial system, each method of money laundering is independent and has no effect on the others.
+3. In the national financial system, each money laundering method occurs with the same probability.
+
+# 5.2.2 Parameters
+
+We divide the national financial system into twelve thematic areas: Banks, Accountancy service providers, Legal service providers, Money service businesses, Trust or company service providers, Estate agents, High value dealers, Retail betting(unregulated gambling), Casinos (regulated gambling), Cash, New payment methods (e-money), Digital currencies. One money laundering method exists in each area, which means there are twelve money laundering methods in total. We choose Total vulnerabilities score, Total likelihood score, Structural risk, Risk with mitigation grading as four main factors to measure the risk level of money laundering in each area. Each factor can be determined by several detectable indicators.
+
+A. Total vulnerabilities score
+
+It's defined as a score that measures the degree of damage done by money laundering behavior in each area. The higher the score is, the weaker the ability of each area to resist the destruction of money laundering. In the quantitative assessment and research process, the following three factors will have a significant impact on the results.
+
+a. The capacity to move money internationally given the nature of the funds (i.e. cash, e-money)
+b. The speed or volume of money movement through firms in the sector given the nature of the funds.
+c. The level of compliance within the sector.
+
+B. Total likelihood score
+
+It's defined as a score that measures the likelihood of the area reporting to law enforcement agencies when a money laundering event occurs there. The higher the score, the higher the professionalism of practitioners in the area. In the quantitative assessment and research process, the following three factors will have a significant impact on the results.
+
+a. The size of the sector or area.
+b. The likelihood that the sector will report suspicious activity to law enforcement, as indicated by the level of SAR submission by the sector.
+c. Law enforcement agencies' existing knowledge of money laundering through the sector.
+
+# C. Structural risk
+
+After rating Total vulnerabilities score and Total likelihood score, the system automatically generates a score that measures the structural risk in each area. The higher the score is, the higher the likelihood of money laundering in the field is or the greater the number of money laundering incidents are.
+
+# D. Risk with mitigation grading
+
+It's defined as a score that measures the law enforcement's ability of handling the money laundering event when a message of money laundering event is obtained. The higher the score is, the stronger the ability of law enforcement to successfully handle incidents is.
+
+# 5.2.3 Variable Nomenclature
+
+Table 6: Variable Nomenclature of Model 2
+
+| Abbreviation | Definition | Abbreviation | Definition |
| i | Money laundering method in the area i | ki | Law enforcement agencies' existing knowledge level of money laundering through the area i |
| Tvs i | Score that measures the degree of damage done by money laundering behavior in area i | Sr i | Structural risk of area i |
| a | Capacity to move money internationally | Rwmg | Risk with mitigation grading |
| M | The volume of money movement | vi | Vector of money laundering method in the area i |
| V | The speed of money movement | k | kNN algorithm parameters |
| Q | Number of items to be sold | j | Risk level class (j=High,Medium,Low) |
| P | Unit commodity price | li | The level of compliance in the area i |
| Tls i | Score that measures the likelihood of area i reporting to law enforcement agencies when the money laundering event occurs in area i | size i | The size of area i |
| ri | The likelihood that area i will report suspicious activity to law enforcement | | |
+
+# 5.2.4 Model
+
+$$
+T v s _ {i} = f \left(a, M, l _ {i}\right) \tag {34}
+$$
+
+$$
+T l s _ {i} = F \left(\text {size} _ {i}, r _ {i}, k _ {i}\right) \tag {35}
+$$
+
+$$
+S r _ {i} = G \left(T v s _ {i}, T l s _ {i}\right) \tag {36}
+$$
+
+In formula (34), $M$ can also be replaced with $\frac{PQ}{V}$.
+
+In view of the different financial systems of different countries, different functional relationships (f, F, G) will occur.
+
+# 5.2.5 Implementation and Results
+
+The vector space model represents each money laundering method as a vector of real-valued components $\vec{v}_i = (Tvs_i, Tls_i, Sr_i, Rwmg)$, each component corresponding to one evaluation index. We obtained source data for the twelve thematic areas in 2015 from the UK's money laundering risk assessment system; the digital currency money laundering method belongs to the Low class. We selected the eleven areas other than digital currency as the training set, reducing the eleven four-dimensional vectors
+
+
+Figure 3: Data visualization
+
+into two-dimensional vectors. Then we projected them onto a 2D plane, and observed data distribution (Figure 3).
+
+Calculate the distance between each point in the training set and the current test point, sort the training points in increasing order of distance, select the $k$ points closest to the current test point, and store them in the data structure $S_k$. Determine the frequency $p_j$ of each category among these $k$ points, and return the most frequent category as the predicted classification of the current point. We use the Euclidean distance to measure the distance between points in the space.
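+The vote just described can be sketched from scratch as follows; the feature vectors are the four scores $(Tvs, Tls, Sr, Rwmg)$, and the training values below are illustrative, not the 2015 UK assessment data.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k training points nearest to `query`
    in Euclidean distance. `train` is a list of (vector, label) pairs."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    ranked = sorted(train, key=lambda item: dist(item[0], query))
    votes = Counter(label for _, label in ranked[:k])   # the set S_k
    return votes.most_common(1)[0][0]

# illustrative (Tvs, Tls, Sr, Rwmg) vectors and risk classes
train = [((8, 7, 8, 3), "High"), ((7, 8, 7, 4), "High"),
         ((5, 5, 5, 5), "Medium"), ((4, 5, 4, 6), "Medium"),
         ((2, 3, 2, 7), "Low"), ((1, 2, 2, 8), "Low"), ((2, 2, 1, 7), "Low")]
label = knn_classify(train, (2, 2, 2, 7), k=3)
```

+A query vector close to the Low cluster gets all three of its nearest neighbors from that class, so the vote is unanimous, mirroring the $k = 3$ outcome discussed below for the digital currency method.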
+
+
+Figure 4: Classification result graph
+
+Figure 4 gives an example with $k = 3$. The red line indicates the classification boundary, and the three categories are represented by $+$, $\bigcirc$, and $^*$. The three points closest to the digital currency money laundering method, represented by $\hat{\alpha}$, all belong to the Low class, so the probabilities of the digital currency money laundering method belonging to each category are P(High class | digital currency money laundering method) = 0, P(Medium class | digital currency money laundering method) = 0, and P(Low class | digital currency money laundering method) = 1. The classifier thus assigns the digital currency money laundering method to the Low class, which is consistent with the real result, indicating that our classifier is accurate.
+
+We can also find that the UK's 2015 digital currency money laundering method has a low level of risk, at the same risk level as Casinos, High value dealers, and Retail betting, and their risk structures are similar. Therefore, the regulation of digital currency can draw on the supervision of Casinos, High
+
+value dealers, and Retail betting. Suppose we have obtained the four indicators of the digital money laundering method in a certain year. We can still use the classifier to judge the risk level of the digital money laundering method in the year, and compare it with the previous period to get the trend of the risk of money laundering in digital currency for better supervision.
+
+# 5.2.6 Sensitivity analysis
+
+# 1. Sensitivity to k value
+
+The value of $k$ in kNN often depends on experience or on knowledge of the classification problem itself. $k$ is generally odd, to reduce the chance of a tie among the leading classes. $k = 3$ and $k = 5$ are common choices, but $k$ may also take larger values between 50 and 100, depending on the sample size of the training set.
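This judgment call can also be made empirically, for example by leave-one-out cross-validation over a few odd candidate values of $k$. A sketch on illustrative data (not the paper's):

```python
import numpy as np
from collections import Counter

def loo_error(X, y, k):
    """Leave-one-out error rate of a k-NN majority vote (Euclidean)."""
    errors = 0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                      # exclude the held-out point
        nearest = np.argsort(dists)[:k]
        pred = Counter(y[j] for j in nearest).most_common(1)[0][0]
        errors += pred != y[i]
    return errors / len(X)

# Toy two-cluster data set
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y = np.array(["Low"] * 3 + ["High"] * 3)

# Pick the odd k with the lowest leave-one-out error
best_k = min([1, 3, 5], key=lambda k: loo_error(X, y, k))
print(best_k)  # 1
```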
+
+# 2. Sensitivity to the sample size of the training set
+
+The sample size of the training set is continually updated: once we judge the risk level of a money laundering method under evaluation, we can add it to the training set. Expanding the training set makes the next evaluation more accurate.
+
+# 6 Model 3: Future Effects-Spatial Autoregressive Model
+
+In this section, we predict the effect of the introduction of digital currency on the future development of the world's long-term overall economic system, including the effect on the banking industry, the effect of global economic trends, and the effect of trends in inter-state relations. Since the economic effect is globally linked and related to its historical development level, the key to our prediction model is to solve the effect of the "contagious" economic linkage between countries and the effect of the previous economic level on the future. We applied a spatial autoregressive model (SAR) to predict future developments.
+
+# 6.1 Assumptions
+
+1. The behavior of a country in the current period is often affected by its own previous behavior (direct effect).
+2. The current performance of a country is often affected by the actions of neighboring countries, and will in turn potentially affect all other countries (indirect effect).
+3. The influence of unobservable factors on the dependent variables is analyzed by simplifying it into spatial factors.
+4. When calculating the distance between two countries, the location of each country's capital is used as the reference point of the spatial unit.
+5. We fully trust the various economic freedom indices in the annual reports issued by The Wall Street Journal and the American Heritage Foundation. In addition, countries or regions with more economic freedom will have higher long-term economic growth and prosperity than countries with less economic freedom.
+6. The relationship between countries only considers the economic level, so it is simplified into a trade relationship.
+7. The error terms of all models are independent and identically normally distributed.
+
+# 6.2 Variable Nomenclature
+
+Based on the annual report of the Economic Freedom Indices and the distance between the capitals of the world, we selected 163 countries with comprehensive data for spatial autoregressive prediction analysis.
+
+Table 7: Variable Nomenclature of Model 3
+
+| Abbreviation | Description |
+| B | Financial freedom (affecting the efficiency of the banking system) |
+| E | Overall freedom (affecting economic development) |
+| T | Trading freedom (affecting international trade) |
+| ρ | Spatial autocorrelation coefficient |
+| β | Coefficient of the lagged explanatory variable |
+| W | 163×163 spatial weight matrix |
+| ε | Random disturbance |
+
+When digital currency is newly added to a national monetary system, it will inevitably affect the efficiency of the banking system at the national, regional, and global levels, as well as the independence of government intervention. A change in any country's monetary system encourages its neighboring countries to change theirs as well. From a modeling perspective, the future development of each country's banking industry receives dynamic feedback from other countries, which in turn affects the development of the global banking industry. In the same way, the trends of global economic development and of international relations dynamically influence surrounding areas as digital currency is introduced, or continuously improved, within a country's internal economic system.
+
+# 6.3 Model
+
+The process of building the model is as follows:
+
+First, we establish a distance-based spatial weight matrix for 163 countries:
+
+$$
+W = \left[ \begin{array}{ccc} W_{1,1} & \dots & W_{1,163} \\ \vdots & \ddots & \vdots \\ W_{163,1} & \dots & W_{163,163} \end{array} \right] \tag{37}
+$$
+
+This gives the initial distance-based spatial weight matrix $W$. After calculating each entry $w_{ij}$, the matrix is normalized by row; the row-normalized spatial weight matrix is denoted $\tilde{W}$.
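The paper does not state its exact weight formula. A common distance-based choice is the inverse distance $w_{ij} = 1/d_{ij}$ with a zero diagonal, followed by row normalization; a sketch under that assumption, on a handful of made-up capital coordinates:

```python
import numpy as np

# Hypothetical capital coordinates (x, y) for four countries; the paper
# uses 163 capitals and does not specify its weight formula, so the
# inverse-distance choice below is an assumption.
coords = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0], [1.0, 1.0]])

# Pairwise Euclidean distances between capitals
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Distance-based weighting: w_ij = 1/d_ij for i != j, w_ii = 0
W = np.zeros_like(d)
off = ~np.eye(len(coords), dtype=bool)
W[off] = 1.0 / d[off]

# Row normalization: each row of W-tilde sums to 1
W_tilde = W / W.sum(axis=1, keepdims=True)

print(np.allclose(W_tilde.sum(axis=1), 1.0))  # True
```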
+
+Considering our previous assumptions, we need to introduce the lag explanatory variables $B_{t-1}$ , $E_{t-1}$ , $T_{t-1}$ , and establish the following improved spatial autoregressive model:
+
+Model A: Forecasting future banking industry:
+
+$$
+B = \rho_1 W B + B_{t-1} \beta_1 + \varepsilon_1 \tag{38}
+$$
+
+Model B: Forecasting future economic development in countries:
+
+$$
+E = \rho_2 W E + E_{t-1} \beta_2 + \varepsilon_2 \tag{39}
+$$
+
+Model C: Forecasting future trade relations between countries:
+
+$$
+T = \rho_3 W T + T_{t-1} \beta_3 + \varepsilon_3 \tag{40}
+$$
+
+The parameter estimates based on the model for interpreting the variable matrices $B_{t-1}, E_{t-1}, T_{t-1}$ take the following form:
+
+$$
+\hat{y}_1^{(1)} = \left(I_k - \hat{\rho}_1 W_1\right)^{-1} B_{t-1} \hat{\beta}_1 \tag{41}
+$$
+
+$$
+\hat{y}_2^{(1)} = \left(I_k - \hat{\rho}_2 W_2\right)^{-1} E_{t-1} \hat{\beta}_2 \tag{42}
+$$
+
+$$
+\hat{y}_3^{(1)} = \left(I_k - \hat{\rho}_3 W_3\right)^{-1} T_{t-1} \hat{\beta}_3 \tag{43}
+$$
+
+To examine the impact of a country adopting digital currency as an official currency on the global economic system, Tunisia is selected as the observation country, since it has already established a digital currency as official currency. (In 2015, Tunisia officially established a digital currency as official currency; in the following years its economic freedom indices increased significantly, and we assume this increase derives, at least in part, from the issuance of legal digital currency.) We increase Tunisia's Financial freedom, Overall freedom, and Trading freedom by $10\%$ to obtain $B_{t-1}^{\prime}, E_{t-1}^{\prime}, T_{t-1}^{\prime}$, then estimate and analyze the global influence:
+
+$$
+\hat{y}_1^{(2)} = \left(I_k - \hat{\rho}_1 W_1\right)^{-1} B_{t-1}^{\prime} \hat{\beta}_1 \tag{44}
+$$
+
+$$
+\hat{y}_2^{(2)} = \left(I_k - \hat{\rho}_2 W_2\right)^{-1} E_{t-1}^{\prime} \hat{\beta}_2 \tag{45}
+$$
+
+$$
+\hat{y}_3^{(2)} = \left(I_k - \hat{\rho}_3 W_3\right)^{-1} T_{t-1}^{\prime} \hat{\beta}_3 \tag{46}
+$$
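Equations (41)-(46) share one reduced form, $\hat{y} = (I - \hat{\rho}W)^{-1} X_{t-1} \hat{\beta}$, so the baseline and shocked predictions differ only in the lagged input. A minimal sketch with a toy 3-country weight matrix and made-up coefficients (not the paper's 163-country estimates):

```python
import numpy as np

def sar_predict(rho, W, X_lag, beta):
    """Reduced-form SAR prediction: y_hat = (I - rho*W)^(-1) X_lag * beta."""
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - rho * W, X_lag * beta)

# Toy row-normalized weight matrix for 3 countries and made-up coefficients;
# these values are illustrative only.
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
rho, beta = 0.09, 0.91
B_lag = np.array([60.0, 55.0, 70.0])         # lagged freedom indices

y_base = sar_predict(rho, W, B_lag, beta)    # baseline, as in eq. (41)

# Counterfactual shock: raise country 0's index by 10%, as in eqs. (44)-(46)
B_shock = B_lag.copy()
B_shock[0] *= 1.10
y_shock = sar_predict(rho, W, B_shock, beta)

# Indirect spillover: the other countries' predictions rise as well
print(y_shock[1:] - y_base[1:])
```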
+
+# 6.4 Implementation and Results
+
+Using the 2017 and 2018 economic freedom index data, we estimated the model in MATLAB; the resulting coefficients are as follows:
+
+Table 8: Model estimation coefficient
+
+| Coefficient | $\hat{y}_1^{(1)}$ | $\hat{y}_1^{(2)}$ | $\hat{y}_2^{(1)}$ | $\hat{y}_2^{(2)}$ | $\hat{y}_3^{(1)}$ | $\hat{y}_3^{(2)}$ |
+| $\beta_i$ | 0.911459 | 0.917817 | 0.98891 | 0.989744 | 1.010008 | 1.010584 |
+| $\rho_i$ | 0.088811 | 0.081985 | 0.022328 | 0.020279 | -0.00734 | -0.00848 |
+
+The estimates of $\hat{y}_1^{(2)} - \hat{y}_1^{(1)}, \hat{y}_2^{(2)} - \hat{y}_2^{(1)}, \hat{y}_3^{(2)} - \hat{y}_3^{(1)}$ exclude Tunisia's own data (since the direct effect is significantly greater than the indirect effect, we consider only the indirect effect here) and are sorted into Table 8.
+
+As can be seen from Figure 5, after Tunisia adopted the digital currency as official currency, most indicators in most countries increased (the differences are mostly greater than 0). Adopting the digital currency as official currency therefore has an indirect spillover effect on countries worldwide: it promotes the efficiency of national banking systems, economic growth, and trade freedom between countries.
+
+
+Figure 5: Eliminate the impact of direct effect data on the indicators of countries around the world
+
+
+
+
+
+# 6.5 Sensitivity analysis
+
+Considering that the emerging level of digital currency may not be as high as $10\%$ for a country, it is chosen to test again with a growth of $5\%$ .
+
+Figure 6 shows the result of the test with $5\%$ growth. Although a few countries deviate and the growth of the three indicators is less pronounced than before, the overall growth trend is roughly the same and the sum of the effects remains greater than zero, contributing to the overall prosperity of the world economy.
+
+Based on this model, we conclude that the emergence of a new monetary system (including the adoption of emerging digital currencies) will gradually improve the banking industry. In addition, it can increase economic freedom, raising GDP and national economic vitality, and increase the freedom and openness of imports and exports of goods and services between countries.
+
+
+
+
+Figure 6: Sensitivity test of the SAR model (after removing the data of the direct effect)
+
+
+
+# 7 Evaluation of Models
+
+# 7.1 Strengths
+
+- In Model 1, we introduced the digital currency holding ratio, which considers whether the central bank gives up the original currency when issuing digital currency. The model design is flexible and can be adjusted at any time according to demand.
+- In Model 1, we consider the four departments of households, manufacturers, commercial banks and central banks, which can systematically summarize the development of the real world and have certain practical significance.
+- Model 2 is mature in theory and simple in concept. It can be used for both classification and regression, including nonlinear classification. Its training time complexity is lower than that of a support vector machine classifier.
+- In Model 3, we creatively propose the use of spatially measured knowledge to analyze the spatial spillover effects of the issuance of digital currency by a central bank of a country to other countries. It is of practical significance to study the impact of central bank issuing digital currency on a global scale based on geographical factors.
+- In Model 3, based on data availability, we considered 163 countries and obtained a large amount of data from multiple sources, and the final result was highly credible.
+
+# 7.2 Weaknesses
+
+- In Model 1, although we sought relevant literature and data to ensure reasonable parameter determination, the values of some parameters are subjective, which may affect the final result.
+- In Model 1, we did not consider foreign currency deposits, nor did we consider the impact of import and export factors on the overall economic stability, which may result in inaccurate results.
+- When the sample size of Model 2 is large, the amount of computation becomes large and the computation time correspondingly long.
+- In Model 3, we assume that the economic freedom indices in the annual reports published by The Wall Street Journal and the American Heritage Foundation are accurate and reasonably reflect the actual situation of the countries concerned. If this assumption does not hold, it may affect the accuracy of the entire model to some extent.
+
+# 8 Conclusion
+
+From the analysis of our proposed DSGE model, it is feasible to establish a global decentralized digital financial market. We proposed a financial system in which digital money is issued by the central bank, and found that the existing economic and financial characteristics basically satisfy the economic steady state after the central bank issues the digital currency, so we consider the system feasible. Moreover, the system is most feasible when the public holds all of its currency as digital currency, that is, when the original currency is completely abandoned. In the follow-up study, we found that when the original currency is completely abandoned, a positive digital currency shock can achieve long-term growth of the national economy, further supporting the feasibility of the system. From the spatial autoregressive model we finally established, when the central bank issues digital currency, it produces a significant positive spatial spillover effect. In the long run, the issuance of digital currencies by central banks has positive implications for future banking development, global economic growth, and international relations.
+
+# Policy Recommendation
+
+Dear national leaders,
+
+Upon the request of International Currency Marketing, our team has devised empirically and quantitatively supported models that allow us to make various policy recommendations centered on a new digital financial market. Our most critical suggestion is to establish a global decentralized digital financial system in which the central bank issues digital currency; below we describe the corresponding mechanism we establish. Our recommendations are based on precise modeling and computer simulation using real-world data; therefore we are confident in our proposals.
+
+First, we recommend that all countries establish a financial system in which the central bank issues digital currency, based on our improved DSGE model; our rigorous analysis shows the model to be feasible and universal. In our analysis, when the proportion of digital currency held by households is 1, that is, when the public holds all of its money as digital currency, economic stability is best served. We therefore strongly recommend that countries issue digital currencies and make them widely available in every country as soon as possible.
+
+In addition, we suggest that leaders improve the specific factors, identified through principal component analysis, that influence the access, growth, stability, and security of the new emerging financial systems: inflation level, degree of government corruption and government trustworthiness; broad money, gross domestic product, budget level; population, unemployment rate; ratio of broad money to total reserves, exchange rate, etc.
+
+What's more, we recommend that countries judge the level of money laundering risk in the digital currency area of the national financial system using the kNN classifier model we have established. National leaders could assess the money laundering risk level from the classifier's results, and could also improve, according to that level, the specific factors affecting the money laundering risk of digital currencies, including the size of the sector, the level of compliance within the sector, the professionalism and technical level of employees, and the force of law enforcement agencies.
+
+Finally, we advise countries that first issue digital currencies to gradually drive and encourage their neighboring countries to develop digital currencies. Using our spatial autoregressive model to analyze the impact of digital-currency-issuing countries on the economic indices of countries around the world, the results show that the issuance of digital currency can affect neighboring countries and even all countries in the world; the closer the spatial distance, the greater the positive impact. We therefore hope that the improved financial systems of the world's countries can spread "from point to surface," gradually influencing the global economy and promoting the common prosperity of the world economy.
+
+To better facilitate the development of the world economy, please consider our policy recommendations. We sincerely hope this advice helps national leaders establish a better financial system.
+
+Yours sincerely,
+
+Team 1916704
+
+# References
+
+[1] CoinMarketCap [EB/OL]. [2019-1-25]. https://coinmarketcap.com/charts/
+[2] Barrdear J., Kumhof M. The Macroeconomics of Central Bank Issued Digital Currencies [R]. Bank of England, Staff Working Paper No. 605.
+[3] Qian Y. Analysis of the economic effects of legal digital currency: theory and evidence. International Finance Research, 2019, (01): 16-27.
+[4] Qian Y. Experimental research on the central bank digital currency prototype system. Journal of Software, 2018, 29(9): 2716-2732.
+[5] He D., Habermeier K., Leckow R., et al. Virtual currencies and beyond: Initial considerations [J]. IMF Working Paper, 2016 (1).
+[6] Chen J., Zhao X. The status quo of digital currency development and its international experience and enlightenment. China's Prices, 2018, (11): 44-47.
+[7] Yang W. Research on the risks and supervision strategies of digital currency trading. China Management Informationization, 2018, 21(22): 100-101.
+[8] HM Treasury. UK national risk assessment of money laundering and terrorist financing [EB/OL]. [2019-1-25]. https://www.gov.uk/government/publications/uk-national-risk-assessment-of-money-laundering-and-terrorist-financing
\ No newline at end of file
diff --git a/MCM/2019/F/2019_ICM_Expert_Com/2019_ICM_Expert_Com.md b/MCM/2019/F/2019_ICM_Expert_Com/2019_ICM_Expert_Com.md
new file mode 100644
index 0000000000000000000000000000000000000000..0913f9d04609d02682078836368d63e0e051b12b
--- /dev/null
+++ b/MCM/2019/F/2019_ICM_Expert_Com/2019_ICM_Expert_Com.md
@@ -0,0 +1,51 @@
+# Expert's Commentary: The Digital Currency Problem
+
+Carolina Mattsson
+Network Science Institute
+Northeastern University
+177 Huntington Ave. 10021
+Boston, MA 02115
+
+# Introduction
+
+We don't often consider how we pay one another, even as we use various payment systems built around different currencies to participate in the economy every day. The payment infrastructure—like any infrastructure—is something that we rely on deeply, but tend to notice only when something goes awry.
+
+Payment systems are unevenly adopted, which we might suddenly notice when the restaurant where we've been eating mentions that it doesn't accept credit cards. Even modern payment systems involve considerable friction, which we might notice when a transfer takes several days to reach our bank account. Currencies also fluctuate in value, as we might discover when a landlord decides to raise the rent. And we tend to notice instances of fraud when a payment card issuer blocks use of our card.
+
+While currency fluctuations and security concerns are an occasional nuisance to individuals, they are taken quite seriously at the national level. Banks and other payment processors report billions in fraudulent transactions every year, and identifying money laundering is a top concern for law enforcement agencies. Central banks tend to consider maintaining monetary stability to be their main role, although how they go about that varies from country to country.
+
+But even at the national and global levels, currency issues rarely make the news unless there is a major problem. For instance, both the Zimbabwean dollar and the Venezuelan bolívar have experienced hyperinflation in the 21st century. Fears of deflation in the Eurozone have affected both monetary policy by the European Central Bank and international relations among European countries for much of the last decade.
+
+The UMAP Journal 40 (2-3) (2019) 243-245. ©Copyright 2019 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.
+
+In recent years, on the other hand, payment infrastructure has come into the spotlight for another reason. In 2009, an entity calling themselves Satoshi Nakamoto issued a novel decentralized digital currency: Bitcoin. The source code underlying the set of decentralized cryptographic accounting protocols was also published, allowing others to set up their own system as well. The price of Bitcoin rose unsteadily in the years that followed, and alternative cryptocurrencies proliferated.
+
+The idea of a universal decentralized digital currency captured the public imagination sometime in 2017, leading to large increases in the prices of many of the more established cryptocurrencies. While the bubble did burst, the idea itself may be here to stay. Bitcoin brought clear novelty to the staid world of payment infrastructure and revealed how much we do not yet understand about our monetary systems.
+
+# Formulation and Intent of the Problem
+
+This year's ICM™ problem invited students to consider our global monetary system and model a particular change: introducing a universal, decentralized, digital currency. Teams were asked to identify the viability and effects of introducing such a global digital payment system. They were asked to submit a solution of no more than 20 pages, a summary, and a short policy recommendation for national leaders.
+
+The details of the introduced currency, and how to model the system as a whole, were left up to the teams. Instead of any particular tasks, students were introduced to a range of aspects of the problem that they could consider: growth, access, security, and stability of this new digital currency. Teams were encouraged to take neither the current system of national currencies, nor a techno-optimistic future, for granted.
+
+Solutions were evaluated based on the team's understanding of the problem, the soundness of their modeling approach, and to what extent their policy statements reflected their mathematical model.
+
+# Solving the Problem
+
+What makes this problem difficult (and studying money so fascinating) is that there is no established model of monetary systems that considers individual, national, and global dynamics together. The adoption of new payment methods, the impact of digital currencies, international monetary policy, financial inclusion, and countering illicit financial activity are largely studied separately. Modeling the introduction of a new digital currency involves pulling several of these strands together.
+
+As a triage judge and final judge, I was impressed by the ambition and variety in modeling approaches taken by teams. Some teams built sophisticated models of the adoption of the currency by individuals, and other teams focused on the official recognition of the currency by nations. The most successful teams were able to define and model how the new policies would affect the economy. I was excited to see teams extend the Mundell-Fleming trilemma [Majaski 2019], which describes inherent limitations on monetary policy by central banks, to apply also to a global decentralized currency and inform their policy recommendations. I was even more excited to see sound approaches that I had not considered, such as defining currency choice as a matching process and calibrating its equilibrium using real-world data.
+
+Participating teams took on a highly complex, interdisciplinary, and impactful modeling problem, with no right answer; and all who submitted a solution are to be commended. I hope this year's ICM participants learned as much from preparing their solutions to the problem as I did from reading and judging them.
+
+# Reference
+
+Majaski, Christina. 2019. Trilemma definition. https://www.investopedia.com/terms/t/trilemma.asp.
+
+# About the Author
+
+Carolina Mattsson is a Ph.D. candidate in Network Science at Northeastern University. She is an NSF Graduate Research Fellow using her dissertation to develop network analysis tools and modeling frameworks for payment systems. She works extensively with collaborators in industry to apply her methods towards improving mobile money systems. Carolina holds a B.S. in Physics and a B.A. in International Relations from Lehigh University.
+
+
\ No newline at end of file
diff --git a/MCM/2019/F/2019_ICM_Judges_Com3/2019_ICM_Judges_Com3.md b/MCM/2019/F/2019_ICM_Judges_Com3/2019_ICM_Judges_Com3.md
new file mode 100644
index 0000000000000000000000000000000000000000..43a3e839fe6b9a8a67eb52044528772be9517fdd
--- /dev/null
+++ b/MCM/2019/F/2019_ICM_Judges_Com3/2019_ICM_Judges_Com3.md
@@ -0,0 +1,169 @@
+# Judges' Commentary: Digital Currency
+
+Chris Arney
+
+COMAP
+
+Bedford, MA
+
+arneyicm@gmail.com
+
+Kathryn Coronges
+
+Northeastern University
+
+Boston, MA
+
+# Introduction
+
+For this year's ICM™ policy problem, teams investigated a question that is frequently in the news: Can the benefits of an international digital currency be sustained? Some experts believe that a universal decentralized digital currency with internal security such as blockchain can make markets more efficient by eliminating barriers and overhead costs in the flow and storage of money. With digital currencies becoming more widely available, citizens can now use them like traditional currencies to buy and sell goods. Further, digital currencies are not affected by national borders, making them useful for international business transactions. It is clear that digital money has many transactional benefits for the global marketplace.
+
+Some governments, however, view the lack of regulation and anonymity around these currencies as too risky. Reputations of some existing cryptocurrencies have already been adversely affected by their use in illicit transactions, such as tax sheltering, money laundering, and purchasing illegal merchandise.
+
+However, a universally-accepted currency could enable efficient global financial markets and may protect assets against regional inflation fluctuations and artificial manipulation of currencies by national governments. If a universal digital system became available, what would happen to the current banking systems and traditional national currencies?
+
+The UMAP Journal 40 (2-3) (2019) 247-257. ©Copyright 2019 by COMAP, Inc. All rights reserved. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.
+
+The 2019 social policy problem (Problem F) asked teams to identify key factors that would limit or facilitate digital currency growth, access, security, and stability at both the individual and national levels. The teams' analysis had to consider the long-term effects of such a system on the world economy and on relations between countries. Teams were required to use the results of their models to write a one-page policy recommendation for national leaders on the possible benefits and concerns of adopting an internationally recognized currency system.
+
+Benefits of such a digital currency to individuals and nations may include:
+
+- There can be increased privacy and security through blockchain encryption.
+- There would be reduction in overhead cost associated with centralized financial management and crossing national borders.
+- Transactions are instantaneous, avoiding delays from banking policies or exchange rules.
+- Transactions do not require formally held bank accounts and can be made with just an email address or a thumbprint.
+- Global currency could potentially lead to a more equitable monetary system, whereby access and earnings are better distributed across the world.
+
+Concerns of universal, decentralized, digital currency could include:
+
+- Blockchain security and the overall complexity and computational cost of the security for cryptocurrencies may be unsustainable.
+- International agreement on the rules and regulation of a global monetary system would be extremely cumbersome and difficult to achieve.
+- National banks could be threatened and pitted against the global banking system.
+
+There were many issues and questions that teams considered during their modeling of the problem. These included:
+
+- What are the main factors that determine whether individuals or a nation will adopt digital currencies?
+- What are the factors that determine the adoption of a digital currency system for individuals or nations?
+- Could individuals or nations across the world ever trust one universal international currency?
+- Is blockchain an adequate and effective system to build trust in a global digital currency?
+
+- What would be the financial challenges of a global decentralized digital financial market?
+- What are the factors that could cause volatility in this market? How can they be reduced?
+- How will a digital currency affect traditional markets and economies?
+- Can fiat currencies and digital currencies coexist?
+- How can current national currencies be exchanged for new digital currency?
+- What are the sales and tax implications of assets sold using digital currency?
+- How should debt and loans be handled and who should hold the debt?
+- Should currency value ever be added to or removed from the system in order to stabilize the market?
+- How should future monetary events be decided and controlled, if they are needed?
+- What roles would the United Nations, International Monetary Fund, World Bank, World Trade Organization, and national governments play in regulating digital currency?
+- What are the considerations of having governments involved in the regulation of the digital market?
+- How would the global digital market affect developed vs. underdeveloped nations?
+- How can deleterious effects be minimized?
+- Are there ethical considerations for adopting a system that may benefit citizens of some countries over others?
+- What would be the long-term effects if traditional banking were to become obsolete?
+- Are global markets more or less stable than centralized, regional monetary systems?
+- Could there be an event that damaged the entire digital market destroying the world's digital currency?
+
+The teams built mathematical models to account for these various factors and used their models to justify policy recommendations. Policies often referred to system growth, personal access, security, privacy, and stability of the system. Some of the factors in teams' models are shown in Figure 1 from the report by Central South University, China (Team 1910285).
+
+
+Figure 1. Factors considered in the model by Team 1910285 from Central South University, China.
+
+The strongest papers discussed how their model supported particular policies, as well as the assumptions and limitations of their models.
+
+Teams usually began by researching the many nuances of current and proposed digital- and crypto-currency systems. Many teams used standard economic models and principles even if they were changing some of the assumptions to model a digital currency. Teams built models that were either entirely digital or else mixed, with digital currencies alongside traditional currencies. Some teams spent most of their modeling on capturing and ameliorating the negative aspects of a broadly-adopted digital currency. Nearly every team concluded that a fully universal digital currency was not possible. Very few teams felt that a decentralized (non-regulated) digital currency could entirely replace traditional banking, given the current state of countries' monetary systems.
+
+Many teams clustered or classified countries, suggesting different kinds of digital systems for different kinds of countries. Often, the teams focused on a specific class of countries to develop a coherent model.
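A country-clustering step like the one these teams describe can be sketched with a minimal k-means grouping. The two indicators, their values, and the country labels below are invented for illustration; the actual teams chose their own features and methods.

```python
# Hypothetical sketch: grouping countries by two indicators before
# recommending a digital-currency policy per group. All numbers invented.

def kmeans(points, k, iters=20):
    """Minimal k-means on 2-D points; returns (centroids, labels)."""
    centroids = points[:k]                      # naive initialization
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                    (p[1] - centroids[c][1]) ** 2)
                  for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # recompute centroid as mean of its members
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# (economic stability index, share of world trade) -- made-up values
countries = {"A": (0.9, 0.12), "B": (0.85, 0.10), "C": (0.3, 0.01),
             "D": (0.25, 0.02), "E": (0.6, 0.05)}
cents, labels = kmeans(list(countries.values()), k=2)
groups = dict(zip(countries, labels))           # country -> cluster id
```

With these invented indicators, the stable, trade-heavy countries land in one cluster and the rest in another, which is the kind of partition a team could then attach policy recommendations to.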
+
+The contest judges felt that some teams gave too much emphasis to Bitcoin or another specific digital currency, or to creating and building their own form of currency. Another issue that concerned the judges was that some teams assumed that, like traditional currencies, the supply of digital currency was controllable and, if necessary, infinite. The result was that teams incorporated multilevel regulatory and control systems that ultimately turned the universal currency back into one that is managed at the national level. Teams often returned to the idea that some kind of centralized system is needed to regulate the economy, with digital currencies being feasible only in specific instances in international business and trade transactions. Figure 2 shows the functions of a central bank in the model produced by Central South University, China (Team 1910285).
+
+
+Figure 2. Functions of the central bank accounts in the model by Team 1910285 from Central South University, China.
+
+# Discussion of Outstanding Papers
+
+The four strongest papers, rated as Outstanding, used an array of modeling techniques and analytic methods to deal with both issues of scalability (relevant for both small and large countries and full or partial conversions to digital currency) and dynamics (short-term and start-up effects along with the long-term issue of stability). The teams were able to explain why they selected those particular models and often incorporated concepts and theories from economics to explain their results. Importantly, these papers effectively used their models to provide meaningful policy recommendations. Summaries of the four Outstanding team reports follow.
+
+# Sun Yat-sen University, Guangzhou, Guangdong China (Team 1904381):
+
+# "Digital Currency System is Coming"
+
+This team assumed that national banks would gradually adopt digital currency as the country's primary system by selling digital currency to its citizens. The team built a virtual country, central bank, and digital currency to test their system. They then modeled the impact of the new digital currency and its interaction with other existing currencies. The team used their model to explore how the adoption of digital currency would affect gross domestic product, the foreign exchange market, and money market functions (both at the individual and national levels). They tested two different exchange rate systems (floating and fixed) and found that the systems would achieve equilibrium when proper policies were followed. They used their model to track the effects on international capital flow and found that decentralized digital currency would be more efficient when barriers to currency flow are removed. They concluded that implementing such a currency globally would also enhance the world's prosperity.
+
+They proposed a United Nations-affiliated organization that they named the World Digital Money Bank to regulate the global digital currency. Their work indicated significant potential for more adoption of a universal digital currency, but their model did not lead to significant decentralization, since several layers of regulatory controls would be needed for the new currency system to be effective. Their currency system showed strengths in handling a variety of economic shocks and maintaining various currency exchange systems. For instance, the team's model took into account that digital currency systems enable users to send currency instantly via email addresses or fingerprints. These peer-to-peer payment systems provided by the companies would enable money transfers in seconds without the need for bank or currency verification.
+
+As the team indicated, digital transactions will always exceed cash and check transactions because they are not affected by bank policies, national boundaries, citizenship, debt, or other socioeconomic factors. Therefore, due to the influence of exogenous factors such as political confrontations and policies, countries with a digital system could face significant international capital flows. For example, when a country is at war, it would be advantageous for individuals who hold traditional currency within that country to exchange it for global digital money to ensure its safety. The team saw their model's weakness as its need for a variety of policy tools to resolve imbalances and achieve stability. Their model was also limited in scope because they considered only the interactions between two countries.
+
+# Southwestern University of Finance and Economics (School of Economics), Chengdu, Sichuan, China (Team 1905127): "General Digital Currency Circulation Model"
+
+This team considered digital currency systems from three aspects: individual transactions, national regulations, and world trade. In considering the low likelihood that individuals would adopt digital currencies on their own, they suggested that the government could encourage its acceptance through positive intervention strategies.
+
+They used data from 130 countries to model the likelihood that a country would adopt digital currencies. The results showed that if the country has a stable domestic economy and a dominant position in international trade, it is more likely to adopt a hybrid monetary policy with both traditional and digital currencies. Their models indicated that even without policy intervention, developed economies would likely accept the coexistence of two currencies (digital and traditional), and eventually those countries are likely to abandon their traditional currency.
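The kind of adoption-likelihood scoring this team describes can be illustrated with a simple logistic function of country indicators. The weights and indicator values below are assumptions for illustration, not the team's fitted coefficients.

```python
import math

# Hypothetical adoption-likelihood score: a logistic function of two
# country indicators. Weights w0..w2 are invented, not fitted values.

def adoption_probability(stability, trade_dominance,
                         w0=-2.0, w1=2.5, w2=4.0):
    """Higher domestic stability and trade dominance push the score up,
    mirroring the team's finding for hybrid-currency adoption."""
    z = w0 + w1 * stability + w2 * trade_dominance
    return 1.0 / (1.0 + math.exp(-z))

stable_exporter = adoption_probability(stability=0.9, trade_dominance=0.8)
fragile_economy = adoption_probability(stability=0.2, trade_dominance=0.1)
```

Under these assumed weights, a stable, trade-dominant economy scores well above one half while a fragile economy scores well below it, matching the qualitative result reported above.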
+
+Alternatively, for countries with chaotic domestic economies and unstable currencies, it makes sense to abandon the original currency and completely adopt digital forms of currency. However, countries with firm attitudes towards domestic monetary control and more conservative concepts about marketization tended to keep their traditional currency only.
+
+This team developed a "Long-term Government Behavior Model" that proposed domestic taxation at different levels in the traditional currency and in digital currencies. The team concluded that the proposed taxation policies would motivate governments to incentivize the use of digital currency for their citizens. Yet, the governments will still need to regulate digital currency to some extent, to eliminate the risk of illegal transactions. The team considered the effects of the currency model on individual, state, and global systems (see Figure 3).
+
+
+Figure 3. Relationships of money systems to users of a digital currency system, from Team 1905127 from Southwestern University of Finance and Economics (School of Economics), China.
+
+Their model indicated that by reducing barriers to transactions across nations, digital currency will become the main medium for international trade. However, due to differences in developmental rates and regulations, traditional currencies will still have to be retained and monetary policies will still need to be formulated by each country. There was still a need for central banks to have some control over the balance of trade and capital flow for their country. This helped countries formulate sound and positive economic development programs, promote the flow of capital, and tap the areas of greatest potential growth for their economic investment.
+
+To address the need for regulation, the team proposed a "Supranational Monetary System Model" in which digital currency is controlled by an international group. If the entire world adopts digital currency with the supranational regulators in place, their model predicted that global digital currency will stabilize and become relatively robust.
+
+With the proposed digital currency system, the biggest challenge for governments would be the regulation and control of illegal fund-raising and black-market transactions. For example, central banks could use quantitative easing to stimulate exports, or the supranational monetary system could force the country to limit its own liabilities and balance imports and exports. The team's model verified a theory of optimal currency where digital monetary capital would flow rapidly into projects with higher real profit rates, promoting growth in the entire world economy.
+
+# Southwestern University of Finance and Economics (Institute of Economic Mathematics), Chengdu, Sichuan, China (Team 1916375):
+
+"The Future is Coming: The Revolution of Currency"
+
+This team used cost and income functions to build a digital currency model. They used an Analytic Hierarchy Process to consider individual, national, and global factors in whether a country would adopt a new global digital currency. They estimated likely choices for different countries, while classifying them as having either small or large economies. Their model implemented a fixed exchange rate for introducing digital currency into a country. Their digital currency financial system was designed to connect individuals and institutions around the world so that currency exchange and trade between countries would be easier and more efficient. In analyzing the system over time, they used a logistic model to simulate potential changes, showing that the banking industry may lose much of its service business as the hybrid digital-traditional currency system matured.
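A logistic simulation of the kind the team describes can be sketched as follows; the carrying capacity, growth rate, and initial adoption share are assumed values, not the team's parameters.

```python
import math

# Sketch of a logistic adoption curve: the share of transactions handled
# digitally grows from a small seed toward a ceiling K. K, r, and x0 are
# invented for illustration.

def logistic_share(t, K=1.0, r=0.6, x0=0.05):
    """Digital-transaction share at time t (years), logistic growth."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

# sample the curve every five years over two decades
shares = [logistic_share(t) for t in range(0, 21, 5)]
```

The curve starts at the assumed 5% share and saturates near the ceiling, the shape under which a bank's traditional service business would shrink as the hybrid system matures.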
+
+Like many teams, their model included mechanisms for oversight and regulation of the digital currency system and a gradual introduction of the system to citizens. In their model design, most countries maintained a traditional sovereign currency while adding a digital currency for optional use, citing the reluctance that many countries would have in losing control over their monetary policies. In their policy recommendation, they suggested that countries should use a fixed exchange rate to ensure that the inflation risk is manageable for both the traditional and new digital currencies. They also recommended a global regulatory system that detects crimes, monitors taxes, and enforces banking laws to stabilize the monetary systems.
+
+# Central University of Finance and Economics, Beijing, China (Team 1916704):
+
+# "A New Era of World Finance: The Strategy for a Global Decentralized Digital Financial Market"
+
+This team developed a global decentralized digital currency system that addressed the lack of regulation and openness in digital currencies. They used a macroeconomic model to account for factors that would affect digital currency adoption for individuals, businesses, commercial banks, and national central banks. They selected 14 measures from four categories: access factors, growth factors, stability factors, and security factors.
+
+The team developed a model broad enough to accommodate various economic scenarios. They considered three cases:
+
+- a country adopts a digital currency and completely abandons its original currency;
+- a country adopts a digital currency but only partially abandons the traditional currency; and
+- the country does not issue any digital currencies, but allows digital currencies from non-government sources.
+
+They used data from 163 countries to test each situation, where they estimated the economic steady state and measured the financial and economic characteristics of the resulting system. Their model indicated that adopting digital currency in all three scenarios would gradually improve the performance of the global economy and the economic relationship between countries. Their model also suggested that the first scenario was ideal for many nations—the country's central bank completely replaces the original currency with a digital currency. Their model showed that this switch also avoided violent fluctuations of the economy and eventually reached a fair and efficient monetary system.
+
+# Conclusion
+
+The 2019 social policy problem challenged teams to understand the economic concepts related to national currencies and the unique considerations that emerge from a new decentralized, international digital currency. Teams had to build a viable model from their understanding of these economics that was flexible enough to consider various policy scenarios.
+
+In addition, developing recommendations for policies based on the results of their models was extremely difficult. Many teams had innovative and useful ideas for parts of the problem, especially the issues of digital currency regulation and security, but were often unable to connect their model to policy recommendations within the time constraints. (See the "Expert Commentary" [Mattson 2019] in this issue for an explanation of why this was such a significant challenge.) The four Outstanding teams seemed to fulfill those tasks the best while explaining the steps and assumptions they needed to build and use their model.
+
+This kind of complex policy problem solving is a demanding task that is performed by public and private analysts and modelers throughout the world. The judges believe that there is a great benefit for young modelers to develop the skills that are needed to complete this year's policy challenge. They congratulate all the teams that selected this problem and wish ICM modelers well in analyzing complex issues and making valuable contributions to help decision-makers develop good social policies.
+
+# References
+
+Mattson, Carolina. 2019. Expert's commentary: The Digital Currency Problem. The UMAP Journal of Undergraduate Mathematics and Its Applications 40: 243-245.
+
+# About the Authors
+
+
+
+Chris Arney graduated from the U.S. Military Academy and served in the U.S. Army for 30 years. His Ph.D. in mathematics is from Rensselaer Polytechnic Institute. For 29 years, he taught mathematics and network science at the U.S. Military Academy. He is the founding director of the ICM and served as its director for 21 years. Before that, he was the associate director of the Mathematical Competition in Modeling for 9 years.
+
+Dr. Kate Coronges is the Executive Director of the Network Science Institute at Northeastern University. She provides research and administrative leadership to the Institute. Her research focuses on social structures, dynamics of teams and communities, and the impact of these dynamics on communication patterns, behaviors, and performance. Previously, Dr. Coronges ran the U.S. Army's research portfolios in Social and Cognitive Networks and in Social Informatics, and served as an Assistant Professor in the Dept. of Behavioral Sciences and Leadership at the U.S. Military Academy. Dr. Coronges received a Ph.D. in Health Behavior Research and a Master's in Public Health from the University of Southern California, and graduated from UC Santa Cruz with a Bachelor's in Molecular, Cellular and Development Biology. She has been the head judge for the ICM Policy Problem for 4 years.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_ICM_Problem_D/2019_ICM_Problem_D.md b/MCM/2019/Problems/2019_ICM_Problem_D/2019_ICM_Problem_D.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8bd6436ad038214145dc3a67759dc618bf43567
--- /dev/null
+++ b/MCM/2019/Problems/2019_ICM_Problem_D/2019_ICM_Problem_D.md
@@ -0,0 +1,42 @@
+# 2019 ICM
+
+# Problem D: Time to leave the Louvre
+
+The increasing number of terror attacks in France[1] requires a review of the emergency evacuation plans at many popular destinations. Your ICM team is helping to design evacuation plans at the Louvre in Paris, France. In general, the goal of evacuation is to have all occupants leave the building as quickly and safely as possible. Upon notification of a required evacuation, individuals egress to and through an optimal exit in order to empty the building as quickly as possible.
+
+The Louvre is one of the world's largest and most visited art museums, receiving more than 8.1 million visitors in 2017[2]. The number of guests in the museum varies throughout the day and year, which provides challenges in planning for regular movement within the museum. The diversity of visitors -- speaking a variety of languages, groups traveling together, and disabled visitors -- makes evacuation in an emergency even more challenging.
+
+The Louvre has five floors, two of which are underground.
+
+
+Figure 1: Floor plan of Louvre[3]
+
+The 380,000 exhibits located on these five floors cover approximately 72,735 square meters, with building wings as long as 480 meters or 5 city blocks[3]. The pyramid entrance is the main and most used public entrance to the museum. However, there are also three other entrances usually reserved for groups and individuals with museum memberships: the Passage Richelieu entrance, the Carrousel du Louvre entrance, and the Portes Des Lions entrance. The Louvre has an online application, "Affluences" (https://www.affluences.com/louvre.php), that provides real-time updates on the estimated waiting time at each of these entrances to help facilitate entry to the museum. Your team might consider how technology, to include apps such as Affluences or others, could be used to facilitate your evacuation plan.
+
+Only emergency personnel and museum officials know the actual number of total available exit points (service doors, employee entrances, VIP entrances, emergency exits, and old secret entrances built by the monarchy, etc.). While public awareness of these exit points could provide additional strength to an evacuation plan, their use would simultaneously cause security concerns due to the lower or limited security postures at these exits compared with the level of security at the four main entrances. Thus, when creating your model, your team should consider carefully when and how any additional exits might be utilized.
+
+Your supervisor wants your ICM team to develop an emergency evacuation model that allows the museum leaders to explore a range of options to evacuate visitors from the museum, while also allowing emergency personnel to enter the building as quickly as possible. It is important to identify potential bottlenecks that may limit movement towards the exits. The museum emergency planners are especially interested in an adaptable model that can be designed to address a broad set of considerations and various types of potential threats. Each threat has the potential to alter or remove segments of possible routes to safety that may be essential in a single optimized route. Once developed, validate your model(s) and discuss how the Louvre would implement it.
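One common way to start such a model is to treat rooms and exits as nodes in a graph, locate each room's nearest exit, and bound total evacuation time by exit throughput. The toy floor graph, exit capacities, and occupant counts below are invented for illustration, not Louvre data.

```python
from collections import deque

# Toy room/exit network. Edges are mutual connectivity; exit capacities
# are in people per second. All numbers are hypothetical.
graph = {
    "galleryA": ["hall"], "galleryB": ["hall"],
    "hall": ["galleryA", "galleryB", "pyramid", "richelieu"],
    "pyramid": ["hall"], "richelieu": ["hall"],
}
exits = {"pyramid": 3.0, "richelieu": 1.5}       # people/s through each exit
occupants = {"galleryA": 400, "galleryB": 250, "hall": 150}

def hops_to_nearest_exit(graph, exits):
    """Multi-source BFS from all exits; returns hop count per node."""
    dist = {e: 0 for e in exits}
    q = deque(exits)
    while q:
        node = q.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                q.append(nb)
    return dist

dist = hops_to_nearest_exit(graph, exits)
# Throughput bound: total people / total exit capacity, ignoring walking
# time -- a quick check for exit-capacity bottlenecks.
lower_bound_s = sum(occupants.values()) / sum(exits.values())
```

Comparing the per-room distances against the throughput bound is one simple way to see whether routes or exit capacity is the binding bottleneck; a fuller model would add travel times, corridor widths, and threat-dependent edge removal.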
+
+Based on the results of your work, propose policy and procedural recommendations for emergency management of the Louvre. Include any applicable crowd management and control procedures that your team believes are necessary for the safety of the visitors. Additionally, discuss how you could adapt and implement your model(s) for other large, crowded structures.
+
+Your submission should consist of:
+
+- One-page Summary Sheet,
+- Your solution of no more than 20 pages, for a maximum of 21 pages with your summary.
+- Judges expect a complete list of references with in-text citations, but may not consider appendices in the judging process.
+- Note: Reference list and any appendices do not count toward the 21-page limit and should appear after your completed solution.
+
+# References:
+
+[1] Reporters, Telegraph. “Terror Attacks in France: From Toulouse to the Louvre.” The Telegraph, Telegraph Media Group, 24 June 2018, www.telegraph.co.uk/news/0/terror-attacks-france-toulouse-louvre/.
+[2] “8.1 Million Visitors to the Louvre in 2017.” Louvre Press Release, 25 Jan. 2018, presse.louvre.fr/8-1-million-visitors-to-the-louvre-in-2017/.
+[3] “Interactive Floor Plans.” Louvre - Interactive Floor Plans / Louvre Museum / Paris, 30 June 2016, www.louvre.fr/en/plan.
+
+[4] “Pyramid” Project Launch – The Musée du Louvre is improving visitor reception (2014-2016).” Louvre Press Kit, 18 Sept. 2014, www.louvre.fr/sites/default/files/dp_PYramidide%2028102014_en.pdf.
+[5] “The ‘Pyramid’ Project - Improving Visitor Reception (2014-2016).” Louvre Press Release, 6 July 2016, presse.louvre.fr/the-pyramid-project/.
+
+# Glossary:
+
+Bottlenecks – places where movement is dramatically slowed or even stopped.
+
+Emergency personnel – people who help in an emergency, such as guards, fire fighters, medics, ambulance crews, doctors, and police.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_ICM_Problem_E/2019_ICM_Problem_E.md b/MCM/2019/Problems/2019_ICM_Problem_E/2019_ICM_Problem_E.md
new file mode 100644
index 0000000000000000000000000000000000000000..a903cc19d1837967540c79f7bb3e903a971a7ad2
--- /dev/null
+++ b/MCM/2019/Problems/2019_ICM_Problem_E/2019_ICM_Problem_E.md
@@ -0,0 +1,42 @@
+# 2019 ICM
+
+# Problem E: What is the Cost of Environmental Degradation?
+
+Economic theory often disregards the impact of its decisions on the biosphere or assumes unlimited resources or capacity for its needs. There is a flaw in this viewpoint, and the environment is now facing the consequences. The biosphere provides many natural processes to maintain a healthy and sustainable environment for human life, which are known as ecosystem services. Examples include turning waste into food, water filtration, growing food, pollinating plants, and converting carbon dioxide into oxygen. However, whenever humans alter the ecosystem, we potentially limit or remove ecosystem services. The impact of local small-scale changes in land use, such as building a few roads, sewers, bridges, houses, or factories may seem negligible. Add to these small projects, large-scale projects such as building or relocating a large corporate headquarters, building a pipeline across the country, or expanding or altering waterways for extended commercial use. Now think about the impact of many of these projects across a region, country, and the world. While individually these activities may seem inconsequential to the total ability of the biosphere's functioning potential, cumulatively they are directly impacting the biodiversity and causing environmental degradation.
+
+Traditionally, most land use projects do not consider the impact of, or account for changes to, ecosystem services. The economic costs to mitigate negative results of land use changes: polluted rivers, poor air quality, hazardous waste sites, poorly treated waste water, climate changes, etc., are often not included in the plan. Is it possible to put a value on the environmental cost of land use development projects? How would environmental degradation be accounted for in these project costs? Once ecosystem services are accounted for in the cost-benefit ratio of a project, then the true and comprehensive valuation of the project can be determined and assessed.
+
+Your ICM team has been hired to create an ecological services valuation model to understand the true economic costs of land use projects when ecosystem services are considered. Use your model to perform a cost benefit analysis of land use development projects of varying sizes, from small community-based projects to large national projects. Evaluate the effectiveness of your model based on your analyses and model design. What are the implications of your modeling on land use project planners and managers? How might your model need to change over time?
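A minimal sketch of the requested cost-benefit comparison is a project's net present value computed with and without a priced ecosystem-service loss. All monetary values and the discount rate below are assumptions for illustration.

```python
# Hypothetical comparison: NPV of a land-use project when an annual
# ecosystem-service loss (e.g. lost water filtration) is priced in.
# Every figure here is invented.

def npv(cashflows, rate):
    """Net present value of yearly cashflows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 10
build_cost = -5_000_000                 # year-0 outlay
annual_revenue = 900_000
annual_ecoservice_loss = 250_000        # assumed priced service loss

naive = [build_cost] + [annual_revenue] * years
full = [build_cost] + [annual_revenue - annual_ecoservice_loss] * years

npv_naive = npv(naive, rate=0.05)       # ignores ecosystem services
npv_full = npv(full, rate=0.05)         # prices them in
```

Under these invented numbers, pricing the service loss shaves most of the project's apparent surplus away, which is exactly the gap between the "traditional" and "true and comprehensive" valuations the problem describes.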
+
+Your submission should consist of:
+
+- One-page Summary Sheet,
+- Your solution of no more than 20 pages, for a maximum of 21 pages with your summary
+- Judges expect a complete list of references with in-text citations, but may not consider appendices in the judging process.
+- Note: Reference list and any appendices do not count toward the 21-page limit and should appear after your completed solution.
+
+# References:
+
+Chee, Y., 2004. An ecological perspective on the valuation of ecosystem services. Biological Conservation 120, 549-565.
+Costanza, R., d'Arge, R., de Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O'Neill, R.V., Paruelo, J., Raskin, R.G., Sutton, P., van den Belt, M., 1997. The value of the world's ecosystem services and natural capital. Nature 387, 253-260.
+Gómez-Baggethun, E., de Groot, R., Lomas, P., Montes, C., 1 April 2010. The history of ecosystem services in economic theory and practice: From early notions to markets and payment schemes. Ecological Economics 69 (6), 1209-1218.
+Norgaard, R., 1 April 2010. Ecosystem services: From eye-opening metaphor to complexity blinder. Ecological Economics 69 (6), 1219-1227.
+Richmond, A., Kaufmann, R., Myneni, R., 2007. Valuing ecosystem services: A shadow price for net primary production. Ecological Economics 64, 454-462.
+Yang, Q., Liu, G., Casazza, M., Campbell, E., Giannetti, B., Brown, M., December 2018. Development of a new framework for non-monetary accounting on ecosystem services valuation. Ecosystem Services 34A, 37-54.
+
+# Data sources:
+
+US based data: https://www.data.gov/ecosystems/
+Satellite data: https://www.ncdc.noaa.gov/data-access/satellite-data/satellite-data-access-datasets
+
+# Glossary:
+
+Biodiversity - refers to the variety of life in an ecosystem; all of the living organisms within a given area.
+Biosphere - the part of the Earth that is occupied by living organisms and generally includes the interaction between these organisms and their physical environment.
+Ecosystem - a subset of the biosphere that primarily focuses on the interaction between living things and their physical environment.
+Ecosystem Services – the many benefits and assets that humans receive freely from our natural environment and a fully functioning ecosystem.
+Environmental Degradation – the deterioration or compromise of the natural environment through consumption of assets either by natural processes or human activities.
+
+Mitigate – to make less severe, painful, or impactful.
+
+Valuation - refers to the estimating or determining the current worth of something.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_ICM_Problem_F/2019_ICM_Problem_F.md b/MCM/2019/Problems/2019_ICM_Problem_F/2019_ICM_Problem_F.md
new file mode 100644
index 0000000000000000000000000000000000000000..74e08229393bfb5d9e161bc2ba6538baf886ce37
--- /dev/null
+++ b/MCM/2019/Problems/2019_ICM_Problem_F/2019_ICM_Problem_F.md
@@ -0,0 +1,45 @@
+# 2019 ICM
+
+# Problem F: Universal, Decentralized, Digital Currency: Is it possible?
+
+Digital currency can be used like traditional currencies to buy and sell goods, except that it is digital and has no physical representation. Digital currency enables its users to make transactions instantaneously and without any concern for national borders. Cryptocurrency is a subset of digital currency with unique features of privacy, decentralization, security and encryption. Cryptocurrencies have exploded in popularity in various parts of the world; moving from an underground cult interest to a globally accepted phenomenon. Bitcoin and Ethereum, both cryptocurrencies, have grown in value, while investors are projecting rapid growth for other cryptocurrencies such as Dogecoin or Ripple. In addition to digital and cryptocurrencies, there are also new digital methods for financial transactions that enable users to instantaneously exchange money with nothing more than an email address or a thumbprint. Peer-to-peer payment systems offered by companies like PayPal, Stripe, Venmo, Zelle, Apple Pay, Square Cash, and Google Pay offer virtual movement of money across the globe in seconds without ever having to verify the transaction through a bank or currency exchange. Digital transactions outpace cash and check transactions because they are not delayed by banking policies, national borders, citizenship, debts, or other social-economic factors. These new currency systems decentralize financial transactions, leaving many to consider a world where traditional banking may become obsolete.
+
+Concerns about security of cryptocurrencies worry both citizens and economic analysts. These concerns have constrained its growth in some communities. On the other hand, much of the popularity of cryptocurrency is due to its departure from traditional overly-restrictive security and debt measures that rely on oversight by large banks and governments. These oversight institutions are often expensive, deeply bureaucratic, and sometimes corrupt. Some experts believe that a universal, decentralized, digital currency with internal security like blockchain can make markets more efficient by eliminating barriers to the flow of money. This is particularly important in countries where the majority of citizens do not have bank accounts and are unable to invest in regional or global financial markets. Some governments, however, view the lack of regulation around these currencies and their anonymity as too risky because of how easily they can be used in illicit transactions, such as tax sheltering or purchasing illegal merchandise. Others feel that a secure digital currency offers a more convenient and safer form of financial exchange. For instance, a universally accepted currency would enable truly global financial markets and would protect individual assets against regional inflation fluctuations and artificial manipulation of currency by regional governments. If alternative digital systems become more established, there will be many questions about how digital currency will affect current banking systems and nation-based currencies.
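The "internal security like blockchain" mentioned above rests on each block committing to the hash of its predecessor, so altering an old transaction invalidates every later block. A toy hash chain (not a real ledger, and with no consensus protocol) illustrates the idea:

```python
import hashlib
import json

# Minimal hash chain: each block stores the SHA-256 of the previous block,
# so any edit to history breaks the chain. Toy code for illustration only.

def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transactions})
    return chain

def verify(chain):
    """True iff every block's stored hash matches its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
ok_before = verify(chain)
chain[0]["tx"][0]["amount"] = 500   # tamper with an old transaction
ok_after = verify(chain)
```

The tampered chain fails verification, which is the tamper-evidence property that lets a decentralized currency do without a central record keeper.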
+
+Your policy modeling team has been employed by the International Currency Marketing (ICM) Alliance to help them identify the viability and effects of a global decentralized digital financial market. ICM Alliance has asked you to construct a model that adequately represents this type of financial system, being sure to identify key factors that would limit or facilitate its growth, access, security, and stability at the individual, national, and global levels. This requires you to consider the different needs of countries and their willingness to work with this new financial marketplace and modify their current banking and monetary models. It may or may not require them to abandon their own currency, so that adds a level of complexity to the market model. You are not to choose an existing digital currency, but rather to discuss the strategies for adoption, and the problems in implementation, of a general digital currency. You should also include the mechanisms for oversight of such a global digital currency. The ICM Alliance has asked you to extend your analysis to consider the long-term effects of such a system on the current banking industry; the local, regional, and world economy; and international relations between countries.
+
+ICM requests a report of your modeling and analysis, and a separate one-page policy recommendation for national leaders, who hold mixed opinions about this effort. The policy recommendation should offer rationale for the parameters and dynamics included in your model and reflect the insights you gained from your modeling. Your policies might address, for example, growth, reach, access, security, and stability of the system.
+
+Your team's submission should consist of:
+
+- One-page Summary Sheet,
+- One-page policy recommendation for national leaders,
+- Your solution of no more than 20 pages, for a maximum of 22 pages with your summary and policy recommendation.
+- Judges expect a complete list of references with in-text citations, but may not consider appendices in the judging process.
+- Note: Reference list and any appendices do not count toward the 22-page limit and should appear after your completed solution.
+
+# References:
+
+Paul Krugman, “O Canada: A neglected nation gets its Nobel”. Slate, Oct 19, 1999. https://slate.com/business/1999/10/o-canada.html
+
+Stephanie Lo and J. Christina Wang, "Bitcoin as Money?" Current Policy Perspectives, Federal Reserve Bank of Boston, 2014. https://www.bostonfed.org/publications/current-policy-perspectives/2014/bitcoin-as-money.aspx or https://www.bostonfed.org/-/media/Documents/Workingpapers/PDF/cpp1404.pdf
+
+# Glossary:
+
+Anonymity – the state of being unnamed or unidentified; the state of being anonymous.
+
+Blockchain – the record keeping technology that can document transactions between two parties in a verifiable and permanent way; a digital database containing information that can be shared and simultaneously used across a large publicly accessible and decentralized network.
+
+Cryptocurrency – a digital or virtual currency that uses cryptography (protecting information through the use of codes) for security.
+
+Digital Currency – [digital money, electronic money, electronic currency] is a type of currency in digital (electronic) versus physical (coins, paper) form.
+
+Illicit – illegal or dishonest.
+
+Fluctuations - variations or oscillations; rises and falls.
+
+Monetary – relating to money or finances, or to the mechanisms by which money is supplied to and circulates in the economy.
+
+Nation-based currencies – [national currencies] a system of money issued by a central bank and in common use within a particular nation or group of nations; examples are United States dollar (USD), Chinese renminbi (RMB or CNY), European Euro (EUR), British pound sterling (GBP), and Japanese yen (JPY).
+
+Underground cult – hidden or mysterious group of people sharing an excessive devotion toward a particular person, belief, or thing.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_MCM_Problem_A/2019_MCM_Problem_A.md b/MCM/2019/Problems/2019_MCM_Problem_A/2019_MCM_Problem_A.md
new file mode 100644
index 0000000000000000000000000000000000000000..78a0914d41e4431a86bc195a80b46ea49a9a9875
--- /dev/null
+++ b/MCM/2019/Problems/2019_MCM_Problem_A/2019_MCM_Problem_A.md
@@ -0,0 +1,28 @@
+# 2019 MCM
+
+# Problem A: Game of Ecology
+
+In the fictional television series Game of Thrones, based on the series of epic fantasy novels A Song of Ice and Fire[1], three dragons are raised by Daenerys Targaryen, the "Mother of Dragons." When hatched, the dragons are small, roughly $10\mathrm{kg}$ , and after a year grow to roughly $30 - 40\mathrm{kg}$ . They continue to grow throughout their life depending on the conditions and amount of food available to them.
+
+For the purposes of this problem, consider that these three fictional dragons are living today. Assume that the basic biology of dragons described above is accurate. You will need to make some additional assumptions about dragons that might include, for example, that dragons are able to fly great distances, breathe fire, and resist tremendous trauma. As you address the problem requirements, it should be clear how your assumptions are related to the physical constraints of the functions, size, diet, changes, or other characteristics associated with the animals.
+
+Your team is assigned to analyze dragon characteristics, behavior, habits, diet, and interaction with their environment. To do so, you will have to consider many questions. At a minimum, address the following: What is the ecological impact and requirements of the dragons? What are the energy expenditures of the dragons, and what are their caloric intake requirements? How much area is required to support the three dragons? How large a community is necessary to support a dragon for varying levels of assistance that can be provided to the dragons? Be clear about what factors you are considering when addressing these questions.
+
+As with other animals that migrate, dragons might travel to different regions of the world with very different climates. How important are the climate conditions to your analysis? For example, would moving a dragon between an arid region, a warm temperate region, and an arctic region make a big difference in the resources required to maintain and grow a dragon?
+
+Once your dragon analysis is complete, draft a two-page letter to the author of A Song of Ice and Fire, George R.R. Martin, to provide guidance about how to maintain the realistic ecological underpinning of the story, especially with respect to the movement of dragons from arid regions to temperate regions and to arctic regions.
+
+While your dragon analysis does not directly apply to a real physical situation, the mathematical modeling itself makes use of many realistic features used in modeling a situation. Aside from the modeling activities themselves, describe and discuss a situation outside the realm of fictional dragons that your modeling efforts might help inform and provide insight into.
+
+Your submission should consist of:
+
+- One-page Summary Sheet,
+- Two-page letter,
+- Your solution of no more than 20 pages, for a maximum of 23 pages with your summary and letter.
+- Note: Reference list and any appendices do not count toward the 23-page limit and should appear after your completed solution.
+
+NOTE: You should not make use of unauthorized images and materials whose use is restricted by copyright laws. Please be careful in how you use and cite the sources for your ideas and the materials used in your report.
+
+# Reference
+
+1. Penguin Random House (2018). A Song of Ice and Fire Series. Retrieved from https://www.penguinrandomhouse.com/series/SOO/a-song-of-ice-and-fire/.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_MCM_Problem_B/2019_MCM_Problem_B.md b/MCM/2019/Problems/2019_MCM_Problem_B/2019_MCM_Problem_B.md
new file mode 100644
index 0000000000000000000000000000000000000000..98c52b9bede0a32d6a7755ae6c0d1175156af593
--- /dev/null
+++ b/MCM/2019/Problems/2019_MCM_Problem_B/2019_MCM_Problem_B.md
@@ -0,0 +1,89 @@
+# 2019 MCM Problem B: Send in the Drones: Developing an Aerial Disaster Relief Response System
+
+Background: In 2017, the worst hurricane to ever hit the United States territory of Puerto Rico (see Attachment 1) left the island with severe damage and caused over 2900 fatalities. The combined destructive power of the hurricane's storm surge and wave action produced extensive damage to buildings, homes, and roads, particularly along the east and southeast coast of Puerto Rico. The storm, with its fierce winds and heavy rain, knocked down 80 percent of Puerto Rico's utility poles and all transmission lines, resulting in loss of power to essentially all of the island's 3.4 million residents. In addition, the storm damaged or destroyed the majority of the island's cellular communication networks. The electrical power and cell service outages lasted for months across much of the island, and longer in some locations. Widespread flooding blocked and damaged many highways and roads across the island, making it nearly impossible for emergency services ground vehicles to plan and navigate their routes. The full extent of the damage in Puerto Rico remained unclear for some time; dozens of areas were isolated and without communication. Demands for medical supplies, lifesaving equipment, and treatment strained health-care clinics, hospital emergency rooms, and non-governmental organizations' (NGOs) relief operations. Demand for medical care continued to surge for some time as the chronically ill turned to hospitals and temporary shelters for care.
+
+Problem: Non-governmental organizations (NGOs) are often challenged to provide adequate and timely response during or after natural disasters, such as the hurricane that struck the United States territory of Puerto Rico in 2017. One NGO in particular, HELP, Inc., is attempting to improve its response capabilities by designing a transportable disaster response system called "DroneGo." DroneGo will use rotor wing drones to deliver pre-packaged medical supplies and provide high-resolution aerial video reconnaissance. Selected drones should be able to perform these two missions – medical supply delivery and video reconnaissance – simultaneously or separately, depending on relief conditions and scheduling. HELP, Inc. has identified various candidate rotor wing drones that it would like your team to consider for possible use in designing its *DroneGo fleet* (see Attachments 2, 3).
+
+DroneGo's pre-packaged medical supplies, called medical packages, are meant to augment, not replace, the supplies provided by local medical assistance organizations on-site within the country affected by the disaster. HELP, Inc. is planning on three different medical packages referred to as MED1, MED2, and MED3. Drones will carry these medical packages within drone cargo bays for delivery to selected locations (see Attachments 4, 5). Depending on the specific drone being used to transport medical supplies, it may be possible that multiple medical packages can be transported in a single drone cargo bay. Note that drones must land on the ground to offload medical supplies from the drone cargo bays. The video capability of the drones will provide high-resolution video of damaged and serviceable transportation road networks to HELP, Inc.'s command and control center for ground-based route planning.
+
+HELP, Inc. will use International Standards Organization (ISO) standard dry cargo containers to quickly transport a complete DroneGo disaster response system to a particular disaster area. The individual shipping containers for all drones in the DroneGo fleet, along with all required medical packages, must fit within a maximum of three of the ISO cargo containers to be delivered to a single location, or up to three different locations if three cargo containers are used in the disaster area. Each shipping container's contents should be packed in order to minimize any need for buffer materials for unused space. Table 1 shows the dimensions of an ISO standard dry cargo container.
+
+Table 1. Standard ISO Container Dimensions
+
+| Container | Exterior Length | Exterior Width | Exterior Height | Interior Length | Interior Width | Interior Height | Door Opening Width | Door Opening Height |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 20' Standard Dry Container | 20' | 8' | 8'6" | 19'3" | 7'8" | 7'10" | 7'8" | 7'5" |
+
+HELP, Inc. is asking your team to use the 2017 situation in Puerto Rico to design a DroneGo disaster response system that will fit within the containers noted while meeting the anticipated medical supply demands during a potential similar future disaster scenario. It is possible that the demand requirements of this scenario may exceed the capabilities of the drone fleet your team identifies. If this occurs, HELP, Inc. wants to clearly understand any tradeoffs that it must make for implementing solutions to address these shortcomings.
+
+# Part 1. Develop a DroneGo disaster response system to support the Puerto Rico hurricane disaster scenario.
+
+Consider the background information, the requirements identified in the problem statement, and the information provided in the problem attachments to address the following.
+
+A. Recommend a drone fleet and set of medical packages for the HELP, Inc. DroneGo disaster response system that will meet the requirements of the Puerto Rico hurricane scenario. Design the associated packing configuration for each of up to three ISO cargo containers to transport the system to Puerto Rico.
+B. Identify the best location or locations on Puerto Rico to position one, two, or three cargo containers of the DroneGo disaster response system to be able to conduct both medical supply delivery and video reconnaissance of road networks.
+C. For each type of drone included in the DroneGo fleet:
+
+a. Provide the drone payload packing configurations (i.e. the medical packages packed into the drone cargo bay), delivery routes and schedule to meet the identified emergency medical package requirements of the Puerto Rico hurricane scenario.
+b. Provide a drone flight plan that will enable the DroneGo fleet to use onboard video cameras to assess the major highways and roads in support of the HELP, Inc. mission.
+
+# Part 2. Memo
+
+Write a 1–2 page memo to the Chief Operating Officer (COO) of HELP, Inc. summarizing your modeling results, conclusions, and recommendations so that she can share them with her Board of Directors.
+
+Your MCM team submission should consist of:
+
+- One-page Summary Sheet,
+- One- to Two-page memo to the HELP, Inc. COO,
+- Your solution of no more than 20 pages, for a maximum of 23 pages with your summary and memo.
+- Note: Reference list and any appendices do not count toward the 23-page limit and should appear after your completed solution.
+
+# Attachments:
+
+1. Map of Puerto Rico
+2. Potential Candidate Drones for DroneGo Fleet Consideration (with Drone payload capability)
+3. Drone Cargo Bay Packing Configuration/Dimensions by Type
+4. Anticipated Medical Package Demand
+5. Emergency Medical Package Configuration/Dimensions
+
+
+Attachment 1: Map of Puerto Rico
+
+Attachment 2: Potential Candidate Drones for DroneGo Fleet Consideration (with Drone Payload Capability)
+
+| Drone | Shipping Container Length (in.) | Width (in.) | Height (in.) | Max Payload Capability (lbs.) | Speed (km/h) | Flight Time, No Cargo (min) | Video Capable | Medical Package Capable | Drone Cargo Bay Type* |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| A | 45 | 45 | 25 | 3.5 | 40 | 35 | Y | Y | 1 |
+| B | 30 | 30 | 22 | 8 | 79 | 40 | Y | Y | 1 |
+| C | 60 | 50 | 30 | 14 | 64 | 35 | Y | Y | 2 |
+| D | 25 | 20 | 25 | 11 | 60 | 18 | Y | Y | 1 |
+| E | 25 | 20 | 27 | 15 | 60 | 15 | Y | Y | 2 |
+| F | 40 | 40 | 25 | 22 | 79 | 24 | N | Y | 2 |
+| G | 32 | 32 | 17 | 20 | 64 | 16 | Y | Y | 2 |
+| H (Tethered) | 65 | 75 | 41 | N/A | N/A | Indefinite | N | N | N/A |
+
+*Note that cargo bays are affixed to the drone and that drone must be on the ground to offload cargo. See Attachment 3 for Drone Cargo Bay Type Configuration/Dimensions.
+
+Attachment 3: Drone Cargo Bay Packing Configuration/Dimensions by Type
+
+| Drone Cargo Bay Type | Length (in.) | Width (in.) | Height (in.) | Loading |
+| --- | --- | --- | --- | --- |
+| 1 | 8 | 10 | 14 | Top Loaded |
+| 2 | 24 | 20 | 20 | Top Loaded |
+
+Attachment 4: Anticipated Medical Package Demand
+
+| Delivery Location | Latitude | Longitude | Emergency Medical Package** | Quantity | Frequency |
+| --- | --- | --- | --- | --- | --- |
+| Caribbean Medical Center, Fajardo | 18.33 | -65.65 | MED 1 | 1 | Daily |
+| | | | MED 3 | 1 | Daily |
+| Hospital HIMA, San Pablo | 18.22 | -66.03 | MED 1 | 2 | Daily |
+| | | | MED 3 | 1 | Daily |
+| Hospital Pavia Santurce, San Juan | 18.44 | -66.07 | MED 1 | 1 | Daily |
+| | | | MED 2 | 1 | Daily |
+| Puerto Rico Children's Hospital, Bayamon | 18.40 | -66.16 | MED 1 | 2 | Daily |
+| | | | MED 2 | 1 | Daily |
+| | | | MED 3 | 2 | Daily |
+| Hospital Pavia Arecibo, Arecibo | 18.47 | -66.73 | MED 1 | 1 | Daily |
+
+**See Attachment 5 for Emergency Medical Packages 1, 2, and 3 Configurations/Dimensions.
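Since delivery locations are specified by latitude and longitude, any route planning for the drone fleet needs great-circle distances between them. The following is a standard haversine sketch, not part of the official problem materials; the mean Earth radius of 6371 km is an assumption:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Distance between two delivery locations from Attachment 4
# (Caribbean Medical Center and Hospital Pavia Arecibo).
d = haversine_km(18.33, -65.65, 18.47, -66.73)
print(round(d, 1))  # roughly 115 km
```

Comparing such distances against each drone's speed and flight time (Attachment 2) gives a first feasibility filter for candidate container locations.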
+
+Attachment 5: Emergency Medical Package Configuration/Dimensions
+
+| Package ID | Weight (lbs.) | Package Dimensions, L × W × H (in.) |
+| --- | --- | --- |
+| MED 1 | 2 | 14 × 7 × 5 |
+| MED 2 | 2 | 5 × 8 × 5 |
+| MED 3 | 3 | 12 × 7 × 4 |
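As a quick sanity check on the dimension data in Attachments 3 and 5 (not part of the official requirements), one can verify by brute force which medical packages fit, axis-aligned, into each drone cargo bay type; the assumption that packages may only be rotated in 90° increments is ours:

```python
from itertools import permutations

# Cargo bay interior dimensions (in.), from Attachment 3.
BAYS = {1: (8, 10, 14), 2: (24, 20, 20)}

# Medical package dimensions L x W x H (in.), from Attachment 5.
PACKAGES = {"MED 1": (14, 7, 5), "MED 2": (5, 8, 5), "MED 3": (12, 7, 4)}

def fits(package, bay):
    """True if the package fits the bay in some axis-aligned orientation."""
    return any(all(p <= b for p, b in zip(orient, bay))
               for orient in permutations(package))

for name, dims in PACKAGES.items():
    for bay_type, bay_dims in BAYS.items():
        print(f"{name} fits bay type {bay_type}: {fits(dims, bay_dims)}")
```

For these particular dimensions every package fits both bay types in at least one orientation, so the binding constraints are payload weight and how many packages pack together, not single-package fit.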
+
+# Glossary:
+
+Cargo Container (Shipping Container): a large rectangular container with doors on the ends for loading and packing, and made of material suitable for shipping, storing, and handling in many weather and climate conditions.
+
+Drone (Unmanned Aerial Vehicle, UAV): a flying robot that can be remotely controlled or fly autonomously through software-controlled flight plans in their embedded systems that work in conjunction with onboard sensors and GPS.
+
+Drone Cargo Bay: For rotor wing drones, this is an externally carried "box" used to transport materials. For this problem, the drones under consideration have one of two types (sizes) of cargo bays. Note that each drone must land for the medical packages to be unloaded from the bay at its destination.
+
+**Drone Fleet:** a set of drones for a particular mission or purpose. For this problem, the total set of drones by type (A to H) and Payload Capability (Visual and Medical) needed to meet the requirements of HELP, Inc.
+
+Drone Payload Packing Configuration: how the drone payload bays are packed. For this problem, how the medical packages being transported by a drone are packed inside the drone cargo bay.
+
+Medical Package: a predetermined set of medical supplies packed in a single container. For this problem, there are three Medical Package Configurations (MED1, MED2, MED3) available for transport by a drone from a deployed cargo container location to the demand location.
+
+Non-governmental Organization (NGO): Usually non-profit and sometimes international organization independent of government and governmental organizations that is active in humanitarian, educational, healthcare, social, public policy, human rights, environmental and other areas in attempts to affect change.
+
+Payload Capability: the carrying capacity of an aircraft or launch vehicle, usually measured in terms of weight. For this problem, the capability/capacity of the drone to carry medical packages.
\ No newline at end of file
diff --git a/MCM/2019/Problems/2019_MCM_Problem_C/2019_MCM_Problem_C.md b/MCM/2019/Problems/2019_MCM_Problem_C/2019_MCM_Problem_C.md
new file mode 100644
index 0000000000000000000000000000000000000000..929100b5a638b19311316bb353817e7bfedb192e
--- /dev/null
+++ b/MCM/2019/Problems/2019_MCM_Problem_C/2019_MCM_Problem_C.md
@@ -0,0 +1,72 @@
+# 2019 MCM
+
+# Problem C: The Opioid Crisis
+
+**Background:** The United States is experiencing a national crisis regarding the use of synthetic and non-synthetic opioids, either for the treatment and management of pain (legal, prescription use) or for recreational purposes (illegal, non-prescription use). Federal organizations such as the Centers for Disease Control (CDC) are struggling to "save lives and prevent negative health effects of this epidemic, such as opioid use disorder, hepatitis, and HIV infections, and neonatal abstinence syndrome."1 Simply enforcing existing laws is a complex challenge for the Federal Bureau of Investigation (FBI), and the U.S. Drug Enforcement Administration (DEA), among others.
+
+There are implications for important sectors of the U.S. economy as well. For example, if the opioid crisis spreads to all cross-sections of the U.S. population (including the college-educated and those with advanced degrees), businesses requiring precision labor skills, high technology component assembly, and sensitive trust or security relationships with clients and customers might have difficulty filling these positions. Further, if the percentage of people with opioid addiction increases within the elderly, health care costs and assisted living facility staffing will also be affected.
+
+The DEA/National Forensic Laboratory Information System (NFLIS), as part of the Drug Enforcement Administration's (DEA) Office of Diversion Control, publishes a data-heavy annual report addressing "drug identification results and associated information from drug cases analyzed by federal, state, and local forensic laboratories." The database within NFLIS includes data from crime laboratories that handle over $88\%$ of the nation's estimated 1.2 million annual state and local drug cases. For this problem, we focus on the individual counties located in five (5) U.S. states: Ohio, Kentucky, West Virginia, Virginia, and Tennessee. In the U.S., a county is the next lower level of government below each state that has taxation authority.
+
+Supplied with this problem description are several data sets for your use. The first file (MCM_NFLIS_Data.xlsx) contains drug identification counts in years 2010-2017 for narcotic analgesics (synthetic opioids) and heroin in each of the counties from these five states as reported to the DEA by crime laboratories throughout each state. A drug identification occurs when evidence is submitted to crime laboratories by law enforcement agencies as part of a criminal investigation and the laboratory's forensic scientists test the evidence. Typically, when law enforcement organizations submit these samples, they provide location data (county) with their incident reports. When evidence is submitted to a crime laboratory and this location data is not provided, the crime laboratory uses the location of the city/county/state investigating law enforcement organization that submitted the case. For the purposes of this problem, you may assume that the county location data are correct as provided.
+
+The additional seven (7) files are zipped folders containing extracts from the U.S. Census Bureau that represent a common set of socio-economic factors collected for the counties of these five states during each of the years 2010-2016 (ACS_xx_5YR_DP02.zip). (Note: The same data were not available for 2017.)
+
+A code sheet is present with each data set that defines each of the variables noted. While you may use other resources for research and background information, THE DATA SETS PROVIDED CONTAIN THE ONLY DATA YOU SHOULD USE FOR THIS PROBLEM.
+
+# Problem:
+
+Part 1. Using the NFLIS data provided, build a mathematical model to describe the spread and characteristics of the reported synthetic opioid and heroin incidents (cases) in and between the five states and their counties over time. Using your model, identify any possible locations where specific opioid use might have started in each of the five states.
+
+If the patterns and characteristics your team identified continue, are there any specific concerns the U.S. government should have? At what drug identification threshold levels do these occur? Where and when does your model predict they will occur in the future?
+
+Part 2. Using the U.S. Census socio-economic data provided, address the following questions:
+
+A good number of competing hypotheses have been offered to explain how opioid use reached its current level, who is using/abusing opioids, what contributes to the growth in opioid use and addiction, and why opioid use persists despite its known dangers. Is use or trends-in-use somehow associated with any of the U.S. Census socio-economic data provided? If so, modify your model from Part 1 to include any important factors from this data set.
+
+Part 3. Finally, using a combination of your Part 1 and Part 2 results, identify a possible strategy for countering the opioid crisis. Use your model(s) to test the effectiveness of this strategy, identifying any significant parameter bounds on which success (or failure) depends.
+
+In addition to your main report, include a 1-2 page memo to the Chief Administrator, DEA/NFLIS Database summarizing any significant insights or results you identified during this modeling effort.
+
+Your submission should consist of:
+
+- One-page Summary Sheet,
+- One- to Two-page memo,
+- Your solution of no more than 20 pages, for a maximum of 23 pages with your summary and memo.
+- Note: Reference list and any appendices do not count toward the 23-page limit and should appear after your completed solution.
+
+# Attachments:
+
+2019_MCMProblemC_DATA.zip - Includes seven zip folders and the NFLIS_Data file.
+
+ACS_10_5YR_DP02.zip
+
+ACS_11_5YR_DP02.zip
+
+ACS_12_5YR_DP02.zip
+
+ACS_13_5YR_DP02.zip
+
+ACS_14_5YR_DP02.zip
+
+ACS_15_5YR_DP02.zip
+
+ACS_16_5YR_DP02.zip
+
+MCM_NFLIS_Data.xlsx
+
+# Glossary:
+
+analgesic - pain relieving medication
+
+county – (in the U.S.) an administrative or political subdivision of a state; a region having specific boundaries and some level of governmental authority.
+
+heroin – an illegal, euphoria producing, highly addictive analgesic drug processed from morphine (a naturally occurring substance extracted from the seed pods of certain varieties of poppy plants).
+
+non-synthetic opioids – a class of drugs made from extracting chemicals in opium leaves, e.g. morphine, codeine, heroin.
+
+opioids - pain relieving drugs that are often highly addictive
+
+socio-economic factors – factors within a society that describe the relationship between social and economic status and class such as education, income, occupation, and employment.
+
+synthetic opioid - man-made opioids
\ No newline at end of file
diff --git a/MCM/2020/B/2007698/2007698.md b/MCM/2020/B/2007698/2007698.md
new file mode 100644
index 0000000000000000000000000000000000000000..fd5dff5767f6f49b225762b7e394c0c6d22a3086
--- /dev/null
+++ b/MCM/2020/B/2007698/2007698.md
@@ -0,0 +1,967 @@
+# A Simulation Based Assessment of Sandcastle Foundation
+
+Summary
+
+Sandcastle building is a common recreation for beachgoers. Sand lovers rack their brains to build stronger castles and take pride in them. Still, sandcastles are inevitably eroded by waves and tides. Therefore, how to establish a stable foundation is of great significance to the duration of a sandcastle.
+
+In order to explore the most stable three-dimensional geometric shape, we establish a periodic sand-water cellular automaton model to test the most plausible geometric shapes. We discretize the sand base into a three-dimensional geometry consisting of a stack of rigid sand cells and water cells. Based on engineering mechanics and practical feasibility, we select five characteristic frustum types for simulation experiments: triangular, square, hexagonal, conical, and elliptical frustums. The optimal geometric shape we obtain is the triangular frustum.
+
+In the model, we formulate the state transition rules through multivariate analysis based on multi-criteria judgments, and quantitatively model the sediment-carrying action of the waves and the capillary phenomena between sand and water. We employ combinations of trigonometric functions to simulate and reproduce tidal waves in three dimensions. Through regression analysis of the data obtained from multiple experiments on each frustum, we obtain a reliable, optimal geometric shape that can be quantified and visualized.
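The trigonometric tidal-wave forcing described in the summary can be illustrated with a minimal sketch; all amplitudes, wavenumbers, and periods below are assumed for illustration and are not the paper's calibrated values:

```python
import numpy as np

# Minimal sketch of a water-surface height field of the kind the paper
# describes: a short-period wind wave superposed on a slow tidal rise/fall.
# All parameters are illustrative assumptions, not the paper's values.
def wave_height(x, y, t, A=0.3, k=2.0, omega=1.5,
                tide_amp=0.5, tide_period=100.0):
    """Water surface height at position (x, y) and time t."""
    wave = A * np.sin(k * x - omega * t) * np.cos(0.5 * k * y)
    tide = tide_amp * np.sin(2 * np.pi * t / tide_period)
    return wave + tide

# Sample the surface on a small grid at t = 0; cells of the automaton whose
# top face lies below this height would be treated as submerged.
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
h = wave_height(xs, ys, 0.0)
print(h.shape)  # -> (50, 50)
```

In a cellular-automaton setting, this height field simply decides which sand cells are in contact with water at each time step, after which the transition rules apply.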
+
+In the practice of building sandcastles, different sand-to-water mixture proportions also play a crucial role in the stability of the sand foundation. Using the sand-water cellular automaton model of Problem 1, we adjust the water-sand ratio by a concentration gradient method and obtain a series of data points relating the sand-to-water proportion to sand-base stability. We then fit a curve to these data by least-squares polynomial approximation, yielding an estimated function of sand-base stability versus sand-to-water ratio. From this function we find that the optimal sand-to-water mixture proportion is 0.55.
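The curve-fitting step can be sketched as follows. The data points here are purely illustrative, generated from a hypothetical quadratic stability profile peaking at 0.55; the paper's actual data come from the cellular-automaton runs:

```python
import numpy as np

# Hypothetical (ratio, stability) points standing in for the CA experiment
# output; an assumed quadratic profile peaking at 0.55 for illustration.
ratios = np.linspace(0.3, 0.8, 11)
stability = -(ratios - 0.55) ** 2 + 1.0

# Least-squares polynomial fit, as in the paper's approach.
a, b, c = np.polyfit(ratios, stability, 2)

# Vertex of the fitted parabola = estimated optimal sand-to-water ratio.
optimal_ratio = -b / (2 * a)
print(round(optimal_ratio, 2))  # -> 0.55
```

With real, noisy experiment data the same vertex formula still applies; only the fitted coefficients change.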
+
+In order to study the effect of rain, we introduce a rainfall module into the original model, which acts on the sandy base together with the wave-tide module. Similarly, we obtain a series of data for regression analysis. We find that the original best geometry is not the only one that performs well under rainfall: the elliptical frustum is another good geometry when it is rainy.
+
+Sensitivity analysis shows the strong robustness of our model. Meanwhile, we also propose some other strategies for increasing the stability of the sandy base. Finally, we summarize our models and conclusions in plain language for publication in Fun in the Sun.
+
+In addition, our model is easy to implement and extend. By changing a few parameters in our code, we can simulate more complex conditions on the beach.
+
+# Contents
+
+1 Introduction
+  1.1 Problem Background
+  1.2 Literature Review
+  1.3 Our Work
+2 Preparation of the Models
+  2.1 Analysis of Problems
+  2.2 Assumptions
+  2.3 Notations
+3 The Optimal 3D Geometric Shape
+  3.1 Model Preparation
+    3.1.1 Model Principle
+    3.1.2 Model Assumption
+    3.1.3 Model Construction
+    3.1.4 The Rules
+    3.1.5 The Steps of the Algorithm
+    3.1.6 Estimation of $P$ and $M$
+  3.2 Result
+4 The Optimal Sand-to-Water Mixture Proportion
+  4.1 Model Preparation
+    4.1.1 The Principle of the Model
+    4.1.2 The Steps of the Algorithm
+  4.2 Result
+5 The Optimal Shape in Rainy Day
+  5.1 Modified CA Model
+    5.1.1 Model Assumption
+    5.1.2 Similarities and Differences from the Basic Model
+    5.1.3 The Steps of the Algorithm
+  5.2 Result
+6 Sensitivity Analysis
+7 Strengths and Weaknesses
+  7.1 Strengths
+  7.2 Weaknesses
+  7.3 Prompt
+8 Strategies to Make Sandcastles More Lasting
+9 Conclusion
+Article
+References
+
+# 1 Introduction
+
+# 1.1 Problem Background
+
+Play is part of human nature, but it is not easy to draw inspiration from play. There are castles of various shapes on the beach, some simple, some delicate. Even under the same conditions, some castles last a long time, while others cannot withstand a single wave and disappear without a trace. How to make our castles more durable is a question most people are curious about. Many factors influence the firmness of sandcastles, such as the sand-to-water mixture proportion, the type of sand, and the weather.
+
+In this paper, we attempt to explore the three-dimensional geometry of a sandcastle foundation with the best stability. First, we build a mathematical model to identify the optimal three-dimensional geometric shape. Second, based on this model, we determine the optimal sand-to-water mixture proportion that achieves the best adhesion between the sand grains. Furthermore, taking the impact of the weather into consideration, we need to investigate the optimal 3D geometric shape once again.
+
+# 1.2 Literature Review
+
+Since the last century, the interaction between water and sediment has been the focus of scholars in related fields. They have carried out a large number of experiments and researches to explore water-sand interactions and their effects on stability.
+
+Sandpile problem. Mason, Levine, Ertas, and Halsey (1999) [11] studied the critical angle of wet sandpiles. Dumont and Igbida (2009) [4], based on an implicit Euler discretization in time, improved the formula in the Prigozhin model. Bouchaud, Cates, Prakash, and Edwards (1995) [1] proposed a new continuum description of the dynamics of sandpile surfaces and found a "spinodal" angle at which the surface of a sandpile becomes unstable. Dumont and Igbida (2011) [5] analyzed this problem using the collapsing model introduced by Evans.
+
+Sediment mathematical model. Emiroğlu, Yalama, and Erdogdu (2015) [6] explored the ratio of water to clay/sand to study the material's stability and found the optimal ratio to lie between 0.43 and 0.66. Gröger, Tüzün, and Heyes (2003) [8] used CDEM to measure cohesion in wet granular materials and confirmed the general agreement of Rumpf's equation.
+
+Slope stability. "Slope stability is one of the basic problems in geotechnical mechanics and engineering." Research on this topic is significant for river and traffic safety. After reviewing the literature, we find that mainstream analysis still centers on the three traditional methods: limit equilibrium, limit analysis, and numerical analysis. These methods are essentially generalizations from two-dimensional to three-dimensional space and thus have various limitations (Gao, Wang & Zhang, 2009 [15]). Furthermore, more and more studies have begun to consider changes in slope stability under different weather conditions (Yeh, Lee & Cha Liu & Li, 2020 [2]). In this paper, we instead employ cellular automata.
+
+Sandcastle problem. Halsey and Levine (1997) [9] argued that capillary forces significantly affect the stability of sandpiles and that the critical angle is constant in the limit of large systems; they then analyzed why sandcastles fall. In the same year, Hornbaker, Albert, Albert, Barabási, and Schiffer (1997) [10] explained why sandcastles can stand, concluding that a wetting liquid changes the properties of granular media and greatly increases the critical angle. Fraysse, Thomé, and Petit (1999) [7] also explored the influence of humidity on the stability of castles. More recently, Pakpour, Habibi, Møller, and Bonn (2012) [12] demonstrated how to build the perfect castle from the perspective of sandcastle height.
+
+The sandcastle foundation rests on the same principles as slope stability and relates to the problems above. There are various methods for dealing with similar problems. Inspired by cellular automata, we hope to provide a new approach to the study of slope stability through the sandcastle foundation model.
+
+# 1.3 Our work
+
+Under the assumption that castles are built at roughly the same distance from the water on the same beach, with the same type and amount of sand, we establish a model based on cellular automata to formulate the problem.
+
+Task 1 We use a periodic cellular automaton to simulate the environment of the sandcastle and find the optimal 3D model. We take several of the most likely geometric shapes as candidates, then formulate state transition rules through multivariate analysis based on multi-criteria judgments. By running the cellular automaton several times, we explore the most stable shape for the sandcastle foundation.
+
+Task 2 We address the problem of the optimal sand-to-water mixture proportion by fitting a function of lasting time against the sand-to-water proportion. Based on the 3D geometric shape selected in Task 1, we adjust the ratio using the concentration gradient method and record the duration of the model for each sand-to-water ratio. The proportion of the longest-lasting sandy foundation is the target value we seek.
+
+Task 3 Considering the effect of rainfall, we adjust our cellular automaton and repeat the procedure of Task 1. We then find the optimal 3D geometric shape for this case.
+
+# 2 Preparation of the Models
+
+# 2.1 Analysis of Problems
+
+Unlike the classical sandpile problem, a sand castle on the beach is a mixture of water and sand. On one hand, as the degree of sand adhesion increases, the stability of the sand castle increases. On the other hand, the sand castle is also affected by external forces: continuous erosion by waves and tides accelerates its destruction. Therefore, we need a model that comprehensively considers both effects on the sand castle.
+
+# 2.2 Assumptions
+
+We make the following assumptions about our Cellular Automaton Simulation Process:
+
+- The sandcastle foundation is only a mixture of sand and water, and all air has been exhausted. In reality, it is impossible to turn the inside of a sand pile into a vacuum with bare hands. For the accuracy of the experiment, the sandcastle foundation is carefully designed so that all the air in the sand-water mixture can be considered exhausted.
+- The side of the sandcastle foundation is sloped. The stability of the triangle shows that the sloped side has higher stability.
+- Only the damaging effect of the waves on the surface of the sandcastle foundation is considered. In fact, both waves and tides influence the surface and the structure. However, we do not consider structural damage caused by the waves, because people usually build sand castles at a certain distance from the sea, and the sloped sides greatly reduce the impact of the waves on the sandcastle.
+- The sandy base is stable. Sandcastle foundation will not collapse by the non-wave factor.
+- The waves will not change the water-sand mixture ratio of the sandcastle foundation, but will only erode the foundation from the surface. The sand-water mixture exhibits capillarity, and the surface of the sand base blocks most of the water from entering the interior.
+- Sea waves remove sand from the surface of the sandy base at their maximum sediment transport capacity. The relationship between sediment content and sediment transport capacity is expressed by Dou Guoren's equation[3]:
+
+$$
+\frac {\partial (h S)}{\partial t} + \frac {\partial (h v S)}{\partial x} + \alpha \omega (S - S _ {*}) = 0
+$$
+
+In the ideal state, we have
+
+$$
+\frac {\partial (h S)}{\partial t} + \frac {\partial (h v S)}{\partial x} = 0
+$$
+
+Subtracting the two equations gives $\alpha \omega (S - S_{*}) = 0$, i.e. $S = S_{*}$: the sediment content equals the sediment transport capacity.
+
+- The beach sand is composed of natural sand, white bakelite sand, and brown bakelite sand. The waves' sediment carrying capacity by volume $S_{v}$ on the beach is estimated at $55\%$ [13].
+
+Additional assumptions are made to simplify the analysis of individual sections. These assumptions will be discussed at the appropriate locations.
+
+# 2.3 Notations
+
+The primary notations used in this paper are listed in Table 1.
+
+Table 1: Notations
+
+| Symbol | Definition |
| Sv | Waves' sediment carrying capacity by volume |
| F | Number of water cells adjacent to a sand cell |
| Uj | Number of water cells around each surrounding sand cell |
| M, m | Sand-to-water proportion |
| P | Boundary condition for the "fall" of sand cells |
| K | Instability factor |
| L | Cell space size |
| H | Maximum height of the sandcastle foundation |
| d | Width of the sandcastle foundation |
| Gi | Number of cells on the top of the foundation |
| Gmin | =G0/2, collapse boundary condition |
| tj | Lasting time of the sand foundation |
| σ | Stability coefficient |
+
+# 3 The Optimal 3D Geometric Shape
+
+In this section, we use cellular automata to simulate the interaction between sand and water. We experiment with several simple geometries to find the most stable shape for the sandy base.
+
+# 3.1 Model Preparation
+
+# 3.1.1 Model Principle
+
+Theoretically, a sandcastle base with inclined sides is the most stable. For a geometric shape, the arrises are the most prominent features of the sides. Therefore, we choose the most representative shapes for the experiments, such as the triangular, square, six-arris, conical, and ellipse frustums (shown in Figure 1). We carry out several experiments to study the influence of arrises on the stability of sandy bases.
+
+Many complex problems can be modeled by cellular automata. A cellular automaton is essentially a dynamic system defined on a cell space composed of cells with discrete, finite states. According to certain local rules, these cells evolve over discrete time steps. Such systems have been widely applied in social, economic, military, and scientific research.
+
+This model is a periodic cellular automaton model.
+
+# 3.1.2 Model Assumption
+
+- Both sand and water can be regarded as incompressible particles.
+- The sand and water can be mixed together in a certain ratio, and a relatively stable sand foundation can be built at the same time.
+
+- We do not take water evaporation into consideration.
+
+
+Figure 1: Geometric Shape to be Tested
+
+- The contact between the waves and the sandy base is mild and will not cause water and sand splashes.
+
+# 3.1.3 Model Construction
+
+We physically characterize the system in the following aspects.
+
+- Cell is the most basic unit of cellular automata.
+- Cells can store and remember their states.
+- Each cell of the cellular automaton has three possible states: empty cell, water cell, and sand cell.
+- The state of any cell at the next moment is determined by its own state and the states of its 26 neighbors, according to certain rules. This is shown in Figure 2.
+
+
+Figure 2: The Schematic Diagram
+
+# 3.1.4 The Rules
+
+1. No cell can move upward, and at each step a cell can only move to a grid cell adjacent to itself.
+
+2. If there are $F$ ($F \geq P$) water cells and $n$ sand cells adjacent to a sand cell, let this cell's "instability factor" be $K$. Then,
+
+$$
+K = F - \sigma \sum_ {j = 1} ^ {n} U _ {j}
+$$
+
+Where,
+
+where $K$ measures the instability of the sand cell, taking into account the viscosity between sand and water, and $U_{j}$ is the number of water cells around the $j$-th surrounding sand cell.
+
+3. If $K \geq P$ , the sand cell begins to "move downward" following the principle:
+
+- Sand cells can only move downward or horizontally, preferentially downward and toward the direction with the most water cells. The target position changes into a sand cell, and the original position becomes a water cell.
+
+When the sand cell moves down, the water cell in the middle changes first. If the middle cell is not a water cell, the two neighboring cells change with equal probability.
+
+If there is no water cell below the sand cell, the sand cell moves to the water or empty cell on its right; otherwise, it moves to the adjacent cell.
+
+4. If there are fewer than $P$ water cells adjacent to the sand cell, the state of the cell holds still.
+
+
+Figure 3: Transition Rule of Sand and Water Cell
+
+5. If a water cell is adjacent to 15 or more sand cells, the cell keeps its state.
+6. If there are empty cells below a water cell, the water cell preferentially moves to one of them with equal probability; if not, the cell moves with equal probability to another empty cell. After the target cell becomes a water cell, the original cell becomes empty.
+7. "Sea waves" (composed of water cells) appear on the left side of the model with a certain pattern. Their water cells move toward the empty cells on their right (when the middle cell is not empty, the neighboring cells change with equal probability), and each original water cell is converted into an empty cell.
+
+If there is no empty cell on the right side of the "sea wave", the water cell moves with equal probability in the other directions. This process repeats until all water cells meet empty cells and cause a change of state.
+
+8. The sand cells and empty cells at the bottom no longer convert state.
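As an illustration, Rule 2's instability factor can be sketched in Python. This is a minimal sketch under our own assumptions: the cell space is a 3-D numpy array with hypothetical state codes EMPTY, WATER, SAND, and $U_j$ is read, per Table 1, as the number of water cells around the $j$-th neighboring sand cell.

```python
import numpy as np

# Hypothetical state codes for the three cell states (our assumption).
EMPTY, WATER, SAND = 0, 1, 2

def neighbors(grid, x, y, z):
    """Yield the coordinates of the (up to 26) Moore neighbors of (x, y, z)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                i, j, k = x + dx, y + dy, z + dz
                if (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]
                        and 0 <= k < grid.shape[2]):
                    yield i, j, k

def count_state(grid, x, y, z, state):
    """Count neighbors of (x, y, z) that are in the given state."""
    return sum(1 for i, j, k in neighbors(grid, x, y, z) if grid[i, j, k] == state)

def instability(grid, x, y, z, sigma):
    """Rule 2: K = F - sigma * sum_j U_j, where F is the number of water
    neighbors of the sand cell and U_j counts the water cells around the
    j-th neighboring sand cell."""
    F = count_state(grid, x, y, z, WATER)
    U_sum = sum(count_state(grid, i, j, k, WATER)
                for i, j, k in neighbors(grid, x, y, z) if grid[i, j, k] == SAND)
    return F - sigma * U_sum
```

A sand cell surrounded entirely by water has $F = 26$ and no sand neighbors, so $K = 26$, the most unstable case.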
+
+# 3.1.5 The Steps of the Algorithm
+
+Step 1 Initialize the cellular automata to make all cells empty.
+Step 2 Add a "sandcastle foundation" to the cellular automaton at an appropriate distance from the edge of the cell space.
+Step 3 Randomly assign the cells. Let the sand-to-water proportion of the sandcastle foundation be $M$, and set $i = 0$.
+Step 4 At the left end of the cellular automaton, a "sea wave" (composed of water cells) is simulated with width $L$ and height $h$. The formula for $h$ is
+
+$$
+h = \left\{ \begin{array}{l l} 2 H \sin \left(\frac {2 \pi}{d} i\right) & , 2 k d < i < (2 k + 1) d \\ 0 & , (2 k + 1) d < i < (2 k + 2) d \end{array} \right. (k = 0, 1, 2, \ldots)
+$$
+
+Where
+
+$L$ is the cell space size, $H$ is the maximum height of the sand base, and $d$ is the width of the sand base surface.
+
+Step 5 Run the cellular automaton once and count the cells on the top of the tested geometry $G_{i}$ .
+
+Step 6 If $(2k + 1)d < i < (2k + 2)d$, $(k = 0,1,2,\ldots)$, water cells at the bottom infiltrate into the ground and disappear with a certain probability $v = \frac{H}{2d}$.
+
+Step 7 If $G_{i} > \frac{G_{0}}{2}$ , set $i = i + 1$ , return to Step 4.
+
+Step 8 The output value of $i$ is the time that the sandy base of this geometry persists.
+
+This algorithm's flowchart can be viewed in Figure 4.
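The wave profile of Step 4 can be transcribed directly into code; the following is a minimal Python sketch (the function name and the use of an integer step index are our own choices):

```python
import math

def wave_height(i, H, d):
    """Height h of the simulated sea wave at step i (Step 4):
    active on (2kd, (2k+1)d), flat on ((2k+1)d, (2k+2)d)."""
    k = i // (2 * d)                      # index of the current 2d-long period
    if 2 * k * d < i < (2 * k + 1) * d:
        return 2 * H * math.sin(2 * math.pi * i / d)
    return 0.0
```

With $H = 5$ and $d = 8$, the wave reaches its peak $2H = 10$ a quarter of the way into each active half-period and is zero throughout the flat half-period.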
+
+# 3.1.6 Estimation of $P$ and $M$
+
+Suppose the waves' sediment carrying capacity is $S_{v}$. When the proportion of water cells satisfies $\frac{P}{27} \geq 1 - S_{v}$, the cell begins to move downward.
+
+Here we take $S_v = 55\%$, as assumed above. Plugging $S_v$ into the inequality, we obtain $P = 12$.
+
+From the information above, we know $M$ represents the ratio of sand cells to water cells. The optimal value of $M$ will be discussed in Chapter 4. Here we temporarily take $M = 3.5$.
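The arithmetic behind $P$ fits in one line of Python. Reproducing the text's rounding (27 cells counting the center, nearest-integer rounding) is our own reading of the computation:

```python
def move_threshold(S_v, cells=27):
    """P such that P / cells is about 1 - S_v, rounded as in the text."""
    # 27 * (1 - 0.55) = 12.15, which the paper rounds to P = 12
    return round(cells * (1 - S_v))
```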
+
+
+Figure 4: The Schematic of Algorithm 1
+
+# 3.2 Result
+
+The results of the simulation are shown in Figures 5-9.
+
+# Analysis of Result
+
+From the results of the Matlab simulation, it is not difficult to see that the triangular frustum lasts longest under the beating of the waves. This seems to contradict common sense, because in daily life the most common shape is a round sandy base. However, nature offers an inspiration: to reduce stress, migrating geese usually fly in a herringbone formation. In terms of fluid dynamics, this shape reduces the force on the trailing geese. Likewise, this structure helps stabilize the rear half of our geometry when the waves wash over the sand base. Therefore, the sandy foundation can be built as a triangular shape that protrudes toward the sea, with the "castle" built on the rear half of the base, so that the foundation lasts longer and the castle is even more firm.
+
+In real life, there are many applications of this conclusion. For example, the bow of a ship is always designed in the shape of an inverted triangle. These designs are all inspired by the principles of fluid dynamics, and in the process of playing on the beach, we can also apply these principles.
+
+Figure 5: Triangular Frustum. (a) Begin, (b) In the Process, (c) End
+
+Figure 6: Square Frustum. (a) Begin, (b) In the Process, (c) End
+
+Figure 7: Six-arris Frustum. (a) Begin, (b) In the Process, (c) End
+
+Figure 8: Conical Frustum. (a) Begin, (b) In the Process, (c) End
+
+Figure 9: Ellipse Frustum. (a) Begin, (b) In the Process, (c) End
+
+# 4 The Optimal Sand-to-Water Mixture Proportion
+
+# 4.1 Model Preparation
+
+# 4.1.1 The Principle of Model
+
+We experiment with different water-sand mixture proportions $m_j$. After each experiment, we record the time $t_j$ it takes for the sandy base to disappear completely and write down the data point $(m_j, t_j)$. By fitting a function curve to these points, we obtain the best fitting curve, from which we can calculate the optimal sand-to-water proportion.
+
+# 4.1.2 The Steps of Algorithm
+
+Using the same model determined in Section 3.1.3, we test the stability of the sandcastle foundation with different sand-to-water ratios.
+
+Step 1 Set $j = 1$ .
+Step 2 Initialize the cellular automaton: let $m_j = \frac{j}{2}$ and generate a sufficiently large cell space. Assign all cells to be empty. At an appropriate distance from the edge of the cell space, generate the sandcastle base with the best geometry according to the result of Section 3.1.3.
+Step 3 Randomly assign the cells so that the sand-to-water ratio satisfies sand : water $= m_j$, and set $i = 0$.
+Step 4 At the left end of the cellular automaton, a "sea wave" is simulated with width $L$ and height $h$. The formula for $h$ is
+
+$$
+h = \left\{ \begin{array}{l l} 2 H \sin \left(\frac {2 \pi}{d} i\right) & , 2 k d < i < (2 k + 1) d \\ 0 & , (2 k + 1) d < i < (2 k + 2) d \end{array} \right. (k = 0, 1, 2, \ldots)
+$$
+
+Where
+
+$L$ is the cell space size, $H$ is the maximum height of the sand base, and $d$ is the width of the sand base surface.
+
+Step 5 Run the cellular automaton once and count the cells on the top of the tested geometry $G_{i}$ .
+
+Step 6 If $(2k + 1)d < i < (2k + 2)d$, $(k = 0,1,2,\ldots)$, the water cells at the bottom infiltrate into the ground and disappear with a certain probability $v = \frac{H}{2d}$.
+
+Step 7 If $G_{i} > \frac{G_{0}}{2}$ , set $i = i + 1$ , return to Step 4.
+
+Step 8 Let $t_j = i$ and record the data point $(m_j, t_j)$. If $m_j < 40$, set $j = j + 1$ and return to Step 2.
+
+Step 9 Obtain the fitted curve by fitting all the data points.
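The sweep of Steps 1-8 can be sketched as a small driver loop. Here `run_ca` is a hypothetical callable standing in for one full run of the cellular automaton, returning the lasting time $t_j$:

```python
def sweep_mixtures(run_ca):
    """Concentration-gradient sweep: test m_j = j/2 while m_j < 40 and
    collect the data points (m_j, t_j) for curve fitting (Step 9)."""
    points = []
    j = 1
    while j / 2 < 40:
        m = j / 2
        points.append((m, run_ca(m)))   # Step 8: t_j is the final i
        j += 1
    return points
```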
+
+
+Figure 10: The Schematic of Algorithm 2
+
+# 4.2 Result
+
+Using the periodic cellular automaton, we collect the duration as a function of the sand-to-water ratio. The table below shows the data points generated from our experiment.
+
+Table 2: Duration for Different Water-to-sand Proportions
+
+| water:sand | 0.0200 | 0.0500 | 0.0800 | 0.1100 | 0.1400 | 0.1700 | 0.2000 | 0.2300 | 0.2600 |
| time | 163 | 209 | 163 | 147 | 150 | 90 | 88 | 34 | 33 |
+
+Then we fit the above results with a ten-degree polynomial, and we obtain the following picture (Figure 11).
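The fit can be reproduced in outline with numpy's least-squares `polyfit`, using Table 2's data points. As an assumption of this sketch, we use a fourth-degree polynomial, since nine points cannot stably support a tenth-degree fit:

```python
import numpy as np

# Data points (m_j, t_j) from Table 2: water-to-sand ratio vs. lasting time
ratio = np.array([0.02, 0.05, 0.08, 0.11, 0.14, 0.17, 0.20, 0.23, 0.26])
time  = np.array([163, 209, 163, 147, 150, 90, 88, 34, 33], dtype=float)

coeffs = np.polyfit(ratio, time, deg=4)            # least-squares fit
grid = np.linspace(ratio.min(), ratio.max(), 500)  # dense evaluation grid
best = grid[np.argmax(np.polyval(coeffs, grid))]   # ratio maximizing duration
print(f"optimal water-to-sand ratio ~ {best:.3f}")
```

The fitted curve peaks in the low-ratio region, consistent with the optimum of about 0.05 reported below.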
+
+
+Figure 11: Fitting Curve
+
+# Analysis of Result
+
+Through the concentration gradient method applied to the water-sand ratio and stability, we obtained a series of data. By fitting a polynomial function based on least squares, the fitting curve is obtained. The curve shows that the optimal water-to-sand ratio is around 0.05.
+
+# 5 The Optimal Shape in Rainy Day
+
+# 5.1 Modified CA Model
+
+# 5.1.1 Model Assumption
+
+- The capillary phenomenon between water and sand on the surface of the sand base is strong enough. Hence rainwater can only slowly affect the surface of the foundation without changing its internal structure.
+- Although the sand-to-water ratio changes gradually, it does not affect the stability of the geometric shape, so its effect can be ignored.
+- Rainfall and waves affect sandy foundation together, but do not affect the penetration rate of water on the bottom surface.
+- The rainfall's intensity will not cause the sand foundation to collapse suddenly.
+
+# 5.1.2 Similarities and Difference from Basic Model
+
+# Difference
+
+A rainfall module is added to the original model. The impact of the waves on the sandy base still exists alongside the rain erosion.
+
+# Similarities
+
+We still choose the most representative triangular, square, six-arris, conical, and ellipse frustums for the experiments, and study the influence of the different upper surfaces on the stability of the sand foundation.
+
+# 5.1.3 The Steps of Algorithm
+
+Step 1 Initialize the cellular automaton: generate a sufficiently large cell space and initialize all cells to empty.
+Step 2 Add the "sandcastle foundation" to the cellular automaton at an appropriate distance from the edge of the cell space.
+Step 3 Randomly assign the cells. Set the sand-to-water proportion of the sandcastle foundation to $M$, and set $i = 0$.
+Step 4 At the left end of the cellular automaton, a "sea wave" (water cells) is simulated with width $L$ and height $h$. The formula for $h$ is
+
+$$
+h = \left\{ \begin{array}{l l} 2 H \sin \left(\frac {2 \pi}{d} i\right) & , 2 k d < i < (2 k + 1) d \\ 0 & , (2 k + 1) d < i < (2 k + 2) d \end{array} \right. (k = 0, 1, 2, \ldots)
+$$
+
+
+
+
+Figure 12: The Schematic of Algorithm 3
+
+Where
+
+$L$ is the cell space size, $H$ is the maximum height of the sand base, and $d$ is the width of the sand base surface.
+
+Step 5 At the top of the cell space, each cell transforms into a water cell with a certain probability $u = \frac{H}{20d}$.
+
+Step 6 Run the cellular automaton once and count the cells on the top of the tested geometry $G_{i}$ .
+
+Step 7 If $(2k + 1)d < i < (2k + 2)d$, $(k = 0,1,2,\ldots)$, water cells at the bottom infiltrate into the ground and disappear with a certain probability $v = \frac{H}{2d}$.
+
+Step 8 If $G_{i} > \frac{G_{0}}{2}$ , set $i = i + 1$ , return to Step 4.
+
+Step 9 The output value of $i$ is the time that the sandy base of this geometry persists.
+
+This algorithm's flowchart can be viewed in Figure 12.
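Step 5's rain module is a Bernoulli trial per top-layer cell. A sketch under our own assumptions (numpy array cell space, hypothetical state codes EMPTY = 0, WATER = 1):

```python
import numpy as np

EMPTY, WATER = 0, 1
rng = np.random.default_rng(0)

def rain_step(grid, H, d):
    """Step 5: each empty cell in the top layer of the cell space becomes
    a water cell with probability u = H / (20 * d)."""
    u = H / (20 * d)
    top = grid[:, :, -1]                       # top layer (highest z index)
    drops = (top == EMPTY) & (rng.random(top.shape) < u)
    top[drops] = WATER                         # raindrops enter the cell space
    return int(drops.sum())                    # number of new raindrop cells
```

Because `grid[:, :, -1]` is a numpy view, writing into `top` updates the cell space in place.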
+
+# 5.2 Result
+
+The results of the simulation are shown in Figures 13-17.
+
+# Analysis of Result
+
+From the results, we can see that when there is rainfall, the stability ranking of the geometries changes. The advantage of the triangular frustum in reducing the impact force is no longer obvious, and the performance of the ellipse frustum is slightly better. Presumably this is because, under rainfall and with the same amount of sand, the upper surface area of the triangular frustum is larger than that of the ellipse frustum. Therefore, when interference comes from above, the ellipse frustum can reduce the force to a certain extent in the horizontal direction.
+
+Figure 13: Triangular Frustum. (a) Begin, (b) Outcome
+
+Figure 14: Square Frustum. (a) Begin, (b) Outcome
+
+Figure 15: Six-arris Frustum. (a) Begin, (b) Outcome
+
+Figure 16: Conical Frustum. (a) Begin, (b) Outcome
+
+Figure 17: Ellipse Frustum. (a) Begin, (b) Outcome
+
+Nevertheless, the triangular frustum has a strong ability to resist the waves, which to some extent offsets the influence of its edges and corners. Therefore, on rainy days, the triangular and ellipse frustums have comparable stability.
+
+# 6 Sensitivity Analysis
+
+Through the above analysis, we obtained the optimal shape of the sand base and the optimal sand-to-water ratio. At the same time, we assumed many parameters in the model. To ensure the robustness of the model, we test it from the following aspects. (Due to time constraints, the analysis is only performed on the triangular frustum.)
+
+First of all, the sediment carrying capacity of the waves. Waves of different sizes cause varying degrees of impact on the surface of the sandy foundation, which also affects the persistence of the foundation. In the previous model, we obtained from Dou's sediment transport equation:
+
+$$
+\frac {\partial (h S)}{\partial t} + \frac {\partial (h v S)}{\partial x} = 0
+$$
+
+Under ideal experimental conditions, the sediment transport capacity is equal to the sediment content, and we set $S_{v} = 55\%$.
+
+Second, the width of the sand base. According to common sense, items with a large base area stand more stably than those with a small base area. For sandy foundations, a larger base helps disperse the impact of the waves and may last longer. The choice of this parameter may influence the selection of the optimal sandy model.
+
+Furthermore, the stability coefficient of the sand. In the simulation, we assumed the stability coefficient to be $\sigma$, from which the instability factor $K$ of the sand is calculated; whether the sand moves is then judged according to $K$. Different stability coefficients give different instability factors, so the steady state of the sandy base may also change.
+
+Finally, the critical condition for the upper surface being destroyed. Since the castle is to be built on the mound, a stable surface is important. In the above experiments, when $G_{i} < \frac{G_{0}}{2}$, the surface of the sandy base is considered damaged and no longer qualified for building a castle, so $G_{\min} = \frac{G_{0}}{2}$. When we change the value of $G_{\min}$, the best sandy model may change.
+
+Our model design allows us to change these parameters. Next we develop a detailed analysis of the impact of the following elements on the model.
+
+- Sediment Capacity
+- Geometric Shape's Length Attribute
+- Unstable Factors
+- Sand Foundation Damage Judgment
+
+We record the results in each case with parameter changes of $-10\%$, $-5\%$, $5\%$, and $10\%$. Table 3 shows the results.
+
+Table 3: Results of Sensitivity Analysis
+
+ | -10% | -5% | 0 | 5% | 10% |
| Sv | time out | 1684i | 875i | 796i | 615i |
| H | 636i | 726i | 875i | 1354i | time out |
| d | 702i | 835i | 875i | 926i | 953i |
| σ | 693i | 805i | 875i | 948i | 1062i |
| Gmin | 1242i | 966i | 875i | 801i | 762i |
+
+According to the above data, we can see the extent of these parameters' impact on stability.
+
+The sediment capacity analysis indicates that the effect of the waves' sediment carrying capacity on the sand foundation cannot be underestimated. To build a stable sand foundation, choosing the appropriate sand is essential: we should look for sand with strong adhesion, so as to relatively reduce the waves' ability to carry sediment away.
+
+The analysis of the geometric properties of the sand foundation shows that establishing a wider sand foundation is also a good way to improve stability. The wider the sand base, the more readily it disperses the impact of the waves, making it possible for the sandy base to persist longer.
+
+The analysis results of instability factors show that the adhesion between sand and water also has a huge impact on the stability of sand foundation.
+
+The sensitivity analysis of the sand-damage judgment shows that the choice of judgment criterion does not affect the final result of the model.
+
+# 7 Strengths and Weaknesses
+
+# 7.1 Strengths
+
+- Our model is formulated on a certain theoretical basis. After consulting a lot of literature, we carefully selected the parameters of the model. In this way, we can make our model as close to reality as possible.
+- We carry out reasonable simplification, and establish a cellular automaton model to solve the problem. The results are consistent with actual practice, and have high credibility.
+
+- We take realistic conditions into account and test several basic geometric shapes, obtaining a more meaningful model. Moreover, the resulting shapes are all easy-to-implement geometries with high operability.
+- Though we did not experiment on every shape, our model has enough flexibility to test more conditions; we only need to alter a few parameters for deeper research. For example, the model can test the stability of a sand foundation of any geometric shape, or what would differ if we built the castle a little farther from the sea.
+- The model can also be tested according to the type of sand used to find its optimal sand-to-water mixture ratio. As long as some parameters of this type of sand are known, we can meet the requirements of different sandcastle enthusiasts.
+
+# 7.2 Weaknesses
+
+- Owing to equipment limitations, the cellular automaton model is not very precise, and its quality may not meet our higher expectations.
+- We ignore the impact of enormous waves, so applying the model to beaches with violent waves requires further improvement.
+- Indeed, the assumptions may not hold in some cases, so there may still be some controversy about our model.
+
+# 7.3 Prospect
+
+We tried to simulate the situation on the beach with simulation methods. However, due to time constraints, we made many assumptions about the real situation, for example ignoring the evaporation of water and setting the frequency of the waves to a relatively fixed value. Next, we could relax some assumptions so that the sand castle changes more realistically. We can explore more possibilities by changing some parameters of the program and adding more rules.
+
+# 8 Strategies to Make sandcastle More Lasting
+
+Based on the results of the sensitivity analysis and our discussions, we propose the strategies below to increase the stability of the sandy foundation.
+
+- Look for different sands on the beach. Foundations formed by mixing different sands may show very different stability. After selecting a sand and admixing some water, we can test it by hand: if the sand can be pressed into a ball and rolled back and forth without spreading out, then it is suitable.
+- Since it is a "castle", site selection is also very important. We had better choose a place at some distance from the high-tide line of the waves. The trouble is that we then need more effort to fetch water; to address this, we may dig a sufficiently deep hole near the "construction site".
+
+- Try to tamp the sand to drain excess water. Too much water makes the sand flow and hard to fix; too little makes it lack stickiness. So we need to grasp the proportion of sand and water accurately; from the conclusions above, the optimal water-to-sand ratio is around 0.05. If we do not have the right tools to control the proportion, we may choose sand that has been soaked in seawater where the waves wash the beach.
+- Ensure that each grain of sand is moist. When building a sandy foundation, we can take inspiration from stirring cement. Build an "annulus" on top of the foundation and pour water into the crater until it is as high as the outer edge. Afterwards, to accelerate infiltration, continuously churn the sand by hand until the water has basically soaked in. More importantly, during construction, the surface sand becomes drier as water evaporates, so we need to constantly spray water on the castle surface or take other measures to keep the sand moist.
+- Establish as large a sand foundation as our strength allows; this enhances its anti-interference ability. If possible, use stones, wooden boards, plastic plates, etc. to block the waves or reinforce the sandy surface.
+
+# 9 Conclusion
+
+In our experiments, we design a periodic sand-water cellular automaton model that simulates the actual situation on the beach. Employing multivariate analysis methods, we design the state-transition rules of the various cells in the cell space. First, we set up the sand-foundation module and the wave module and simulate their interaction in the cell space to find the best geometry: the triangular frustum. We then use the least squares method to fit the curve of sand-to-water proportion against duration and analyze the function to determine the best water-to-sand ratio. Since the castle is outdoors, we also consider the weather: after adding the rainfall module to the model, the result of Problem 1 changes somewhat, and the optimal shapes are the triangular and ellipse frustums. Sensitivity analysis shows that our results are robust and credible. Although our model has good scalability and flexibility, it also has certain defects: too many assumptions leave a gap between the model and reality. Nevertheless, we have discussed some practical ways to enhance the stability of sand castles, such as how to choose the sand and the construction site and how to achieve the right sand-to-water ratio. In the future, we can continue to optimize our model.
+
+# Article
+
+# The Secret of Sandcastle Building
+
+
+Resource: www.photophoto.com
+
+"Oh, no! My 'castle' is wrecked by waves once again." A guy shouted.
+
+Have you ever been confused about how to make your sandcastle more lasting? Do you wonder what is the hardest sandcastle in the world? Now let me tell you the secret of it.
+
+Since many factors influence a sandcastle, we designed an automatic model to simulate its foundation. By setting the surroundings and materials, we invite the computer to be our "designer": it simulates the beach environment automatically and, after trying again and again, tells us which shape helps our "castle" stand longest.
+
+The first step is to build the foundation of your castle. The shape of the sandcastle foundation determines the forces it bears. Sandy bases come in various shapes, and we find they share common ground: they all have sloped sides. Therefore, we pick several typical frustums to test their stability, for example triangular, square, six-arris, conical, and ellipse frustums.
+
+By trying many times, our "designer" tells us the longest-lasting geometric shape is the triangular frustum, because this shape minimizes the impact on the rear half of the sandy foundation. In our experiment, it stood on the beach much longer than the others, so we suggest you make a foundation like this.
+
+The next step is to admix the sand with water. Knowing the best shape of the sandcastle foundation, you may wonder how much water to add to the sand.
+
+As everyone knows, dry sand is too loose to hold its shape. It is water that enables sand grains to cling to each other. "Too much water and your sand will flow, too little and it will crumble." So how much is appropriate? In our experiment, we determined the optimal sand-to-water mixture proportion by plotting the duration against the sand-to-water ratio, with the results shown in the figure below. As the plot demonstrates, the best water-to-sand ratio is around 0.05.
+
+So far, we have assumed the weather is sunny. What if it rains? Does the "secret" still work? Again we run our model, but with some adjustments for the castle. To mimic rain, we add raindrops on top of the sand. We then find that the deformation of the sand pile shows different characteristics: it is more easily washed away. The optimal geometry this time is a triangular frustum or an ellipse frustum.
+
+Figure 18: Fitting Curve
+
+Although on a rainy day you do not need to work constantly to keep the sand pile moist, you do need to be more careful about the water-to-sand ratio.
+
+Besides, you can try mixing different kinds of sand. While you are carving your castle, do not forget to keep it moist, but be cautious about adding too much water. If possible, you can even use your feet to drain redundant water and tamp your foundation.
+
+Certainly, there are many other strategies you can apply to build your castle, but remember that practice makes perfect. If you want to become a master, you need to master more skills; do not lose heart over failure. There are still numerous secrets of castle building waiting for you to discover.
+
+The next time you go to the beach, you may try these techniques and see whether they are effective. "Real knowledge comes from practice." To become a true master, you need to find the secrets that suit you.
+
+
+Image source: www.51yuansu.com
+
+# References
+
+[1] J.-P. Bouchaud, M. E. Cates, J. Ravi Prakash, and S. F. Edwards. Hysteresis and metastability in a continuum sandpile model. Phys. Rev. Lett., 74:1982-1985, Mar 1995.
+[2] You-Liang Chen, Geng-Tun Liu, Ning Li, Xi Du, Su-Ran Wang, and Rafig Azzam. Stability evaluation of slope subjected to seismic effect combined with consequent rainfall. Engineering Geology, 266, 2020.
+[3] Guoren Dou, Fengwu Dong, and Xibing Dou. The sediment carrying capacity of waves and tides. Science Bulletin, pages 443-446, 1995. 11-1784/N.
+[4] Serge Dumont and Noureddine Igbida. On a dual formulation for the growing sandpile problem. European Journal of Applied Mathematics, 20(2):169-185, 2009.
+
+[5] Serge Dumont and Noureddine Igbida. On the collapsing sandpile problem. 2011.
+[6] Mehmet Emiroğlu, Ahmet Yalama, and Yasemin Erdogdu. Performance of ready-mixed clay plasters produced with different clay/sand ratios. Applied Clay Science, 115:221-229, 2015.
+[7] N Fraysse, H Thomé, and L Petit. Humidity effects on the stability of a sandpile. The European Physical Journal B-Condensed Matter and Complex Systems, 11(4):615-619, 1999.
+[8] Torsten Gröger, Ugur Tüzün, and David M Heyes. Modelling and measuring of cohesion in wet granular materials. Powder Technology, 133(1-3):203-215, 2003.
+[9] Thomas C Halsey and Alex J Levine. How sandcastles fall. Physical Review Letters, 80(14):3141, 1998.
+[10] DJ Hornbaker, Réka Albert, István Albert, A-L Barabási, and Peter Schiffer. What keeps sandcastles standing? Nature, 387(6635):765-765, 1997.
+[11] TG Mason, AJ Levine, D Ertas, and TC Halsey. Critical angle of wet sandpiles. Physical Review E, 60(5):R5044, 1999.
+[12] Maryam Pakpour, Mehdi Habibi, Peder Møller, and Daniel Bonn. How to construct the perfect sandcastle. Scientific reports, 2(1):1-3, 2012.
+[13] Yangui Wang, Zhaoyin Wang, Qinghua Zeng, and Xiuzhen Lv. Experimental study and similar analysis of physical properties of simulated sand. Journal of Sediment Research, 1992.
+[14] Po-Tsun Yeh, Kevin Zeh-Zon, and Kuang-Tsung Chang. 3D effects of permeability and strength anisotropy in the stability of weakly cemented rock slopes subjected to rainfall infiltration. 266, 2020.
+[15] GAO Yufeng, WANG Di, and ZHANG Fei. Current research and prospects of 3D earth slope stability analysis methods. 43:456-464, 2015.
+
+# Appendix: Our Code
+
+function createSandWorld(long, width, height, ratio)
+close;
+space = zeros(3*width, 2*long, 4*height);
+space(:, 1:end, 1) = 2;
+d = 10;
+space = createThreePyramid(space, long, height, .5, d, ratio);
+G = getSandNum(space, height);
+Gi = getSandNum(space, height);
+time = 0; draw(space, time, Gi)
+while Gi >= (G/2)
+time = time + 1; space = createWave(space, time, height);
+draw(space, time, Gi)
+space = moveWater(space);
+space = permeate(space, time, height);
+draw(space, time, Gi)
+space = moveSand(space);
+draw(space, time, Gi)
+Gi = getSandNum(space, height);
+end
+end
+
+function sixPyramid = createSixPyramid(B, L, H, s, d, p)
+dim = size(B);
+pos = sym('pos', [9,3]);
+pos(1,:) = [dim(1)-1-d, round(dim(2)/2 - L/4), 2];
+pos(2,:) = [pos(1,1), round(dim(2)/2 + L/4), 2];
+pos(3,:) = [round(pos(1,1) - sqrt(3)*L/4), round(pos(2,2) + L/4), 2];
+pos(4,:) = [round(pos(1,1) - sqrt(3)*L/2), pos(2,2), 2];
+pos(5,:) = [pos(4,1), pos(1,2), 2];
+pos(6,:) = [pos(3,1), round(pos(1,2) - L/4), 2];
+pos(7,:) = [pos(6,1), round(dim(2)/2 - L*s/2), H + pos(1,3) - 1];
+pos(8,:) = [round(pos(7,1) + sqrt(3)*s*L/4), round(pos(7,2) + 3*s*L/4), H + pos(1,3) - 1];
+pos(9,:) = [pos(7,1), round(dim(2)/2 + L*s/2), H + pos(1,3) - 1];
+plane = sym('plane', [1,6]);
+plane(1) = getPlane(pos(1,:), pos(2,:), pos(8,:));
+plane(2) = getPlane(pos(2,:), pos(8,:), pos(3,:));
+plane(3) = getPlane(pos(4,:), pos(9,:), pos(3,:));
+syms x
+plane(4) = subs(plane(1), x, 2*pos(3,1) - x);
+plane(5) = getPlane(pos(5,:), pos(6,:), pos(7,:));
+plane(6) = getPlane(pos(1,:), pos(6,:), pos(7,:));
+z1 = double(pos(8,3)); z2 = double(pos(1,3));
+x1 = double(pos(4,1)); x2 = double(pos(1,1));
+y1 = double(pos(6,2)); y2 = double(pos(3,2));
+for z = z1:-1:z2
+for x = x1:x2
+for y = y1:y2
+if eval(plane(1))>=z && eval(plane(2))>=z && eval(plane(3))>=z && eval(plane(4))>=z && eval(plane(5))>=z && eval(plane(6))>=z
+B(x,y,z) = 1;
+end
+end
+end
+end
+B = insertWater(B, p);
+sixPyramid = B;
+end
+function circular = createCircular(B, L, W, H, d, p)
+dim = size(B);
+a = W/2; b = L/2; c = dim(3) - 1;
+pos = sym('pos', [4,3]);
+pos(1,:) = [dim(1)-1-d, ceil(dim(2)/2), 2];
+pos(2,:) = [ceil(pos(1,1) - a), ceil(pos(1,2) + b), 2];
+pos(3,:) = [ceil(pos(1,1) - 2*a), pos(1,2), 2];
+pos(4,:) = [pos(2,1), ceil(pos(1,2) - b), 2];
+x1 = double(pos(3,1)); x2 = double(pos(1,1));
+y1 = double(pos(4,2)); y2 = double(pos(2,2));
+for z = H+1:-1:2
+for x = x1:x2
+for y = y1:y2
+if z <= c + 1 - sqrt(c^2*((x - pos(2,1))^2/a^2 + (y - pos(1,2))^2/b^2))
+B(x,y,z) = 1;
+end
+end
+end
+end
+B = insertWater(B, p);
+circular = B;
+end
+
+function fourPyramid = createFourPyramid(B, L, W, H, s, d, p)
+pos = sym('pos', [4,3]);
+dim = size(B);
+pos(1,:) = [dim(1)-1-d, round((dim(2)-L)/2), 2];
+pos(2,:) = [pos(1,1), pos(1,2)+L, 2];
+pos(3,:) = [pos(1,1)-W, pos(2,2), 2];
+pos(4,:) = [round((pos(1,1)+pos(3,1))/2 + (W*s)/2), ...
+round((pos(1,2)+pos(2,2))/2 + (L*s)/2), ...
+H + pos(1,3) - 1];
+plane = sym('plane', [1,4]);
+plane(1) = getPlane(pos(1,:), pos(2,:), pos(4,:));
+plane(2) = getPlane(pos(2,:), pos(3,:), pos(4,:));
+syms x y
+plane(3) = subs(plane(1), x, pos(1,1)+pos(3,1)-x);
+plane(4) = subs(plane(2), y, pos(1,2)+pos(2,2)-y);
+z1 = double(pos(4,3)); z2 = double(pos(1,3));
+x1 = double(pos(3,1)); x2 = double(pos(1,1));
+y1 = double(pos(1,2)); y2 = double(pos(3,2));
+for z = z1:-1:z2
+for x = x1:x2
+for y = y1:y2
+if eval(plane(1))>=z && eval(plane(2))>=z && eval(plane(3))>=z && eval(plane(4))>=z
+B(x,y,z) = 1;
+end
+end
+end
+end
+B = insertWater(B, p);
+fourPyramid = B;
+end
+
+function threePyramid = createThreePyramid(B, L, W, H, s, d, p)
+dim = size(B);
+pos = sym('pos', [5,3]);
+pos(1,:) = [dim(1)-1-d, round(dim(2)/2), 2];
+pos(2,:) = [pos(1,1)-W, pos(1,2)-round(L/2), 2];
+pos(3,:) = [pos(2,1), pos(1,2)+round(L/2), 2];
+pos(4,:) = [pos(1,1)+round((2/3)*W*(s-1)), pos(1,2), H+pos(1,3)-1];
+pos(5,:) = [pos(4,1)-W*s, pos(4,2)+round((L*s)/2), pos(4,3)];
+plane = sym('plane', [1,3]);
+plane(1) = getPlane(pos(1,:), pos(2,:), pos(4,:));
+plane(2) = getPlane(pos(1,:), pos(3,:), pos(4,:));
+plane(3) = getPlane(pos(3,:), pos(2,:), pos(5,:));
+z1 = double(pos(4,3)); z2 = double(pos(1,3));
+x1 = double(pos(3,1)); x2 = double(pos(1,1));
+y1 = double(pos(2,2)); y2 = double(pos(3,2));
+for z = z1:-1:z2
+for x = x1:x2
+for y = y1:y2
+if eval(plane(1))>=z && eval(plane(2))>=z && eval(plane(3))>=z
+B(x,y,z) = 1;
+end
+end
+end
+end
+B = insertWater(B, p);
+threePyramid = B;
+end
+
+function rain = createRain(B, W, H)
+dim = size(B);
+p = H/(20*W);
+for x = 2:dim(1)-1
+for y = 2:dim(2)-1
+if rand(1) < p
+B(x, y, dim(3)) = -1; % assumed: the garbled original places a raindrop (water cell) at the top of the column
+end
+end
+end
+rain = B;
+end
+
+function pos = moveSand(B)
+% NOTE: the head of this function (its loops over sand cells and the
+% computation of peerClass, the indices of empty neighboring cells) was
+% lost in extraction; only the recoverable displacement logic is kept.
+if ~isempty(find(mod(peerClass, 2) == 1, 1))
+k = find(mod(peerClass, 2) == 1);
+index = peerClass(k).'; randIndex = randi(length(index));
+offset = index(randIndex);
+if B(x-1, y-1+floor(offset/2), z) == -1
+B(x,y,z) = -1; B(x-1, y-1+floor(offset/2), z) = 1;
+else
+B(x,y,z) = 0; B(x-1, y-1+floor(offset/2), z) = 1;
+end
+elseif length(find(mod(peerClass, 2) == 0)) == 2
+if B(x, y+(-1)^randi(2), z) == -1
+B(x,y,z) = -1; B(x, y+(-1)^randi(2), z) = 1;
+else
+B(x,y,z) = 0; B(x, y+(-1)^randi(2), z) = 1;
+end
+else
+offset = peerClass/2;
+if B(x, y-2+offset, z) == -1
+B(x,y,z) = -1; B(x, y-2+offset, z) = 1;
+else
+B(x,y,z) = 0; B(x, y-2+offset, z) = 1;
+end
+end
+pos = B;
+end
+
+```matlab
+function pos = moveWater(B)
+[wx, wy, wz] = getWaterPos(B);
+sea = [wx, wy, wz].';
+dim = size(B);
+for water = sea
+x = water(1); y = water(2); z = water(3);
+if (x == 1 || y == 1 || x > dim(1)-1 || y > dim(2)-1) && y ~= 0
+B(x, y, z) = 0; continue
+elseif y == 0 || x == 0
+continue
+end
+if getClassNum(B, [x, y, z], -1, 1) >= 12
+continue
+else
+underArea = B(x-1:x+1, y-1:y+1, z-1);
+underNull = find(underArea == 0); underNullNum = length(underNull);
+peerArea = B(x-1:x+1, y-1:y+1, z);
+peerNull = find(peerArea == 0); peerNullNum = length(peerNull);
+if underNullNum > 0
+offset = getOffset(underNull);
+if mod(offset, 3) == 2
+B(x, y, z) = 0; B(x, y-1+floor(offset/3), z-1) = -1;
+elseif mod(offset, 3) == 0
+B(x, y, z) = 0; B(x+1, y-2+floor(offset/3), z-1) = -1;
+elseif mod(offset, 3) == 1
+B(x, y, z) = 0; B(x-1, y-1+floor(offset/3), z-1) = -1;
+end
+continue
+elseif peerNullNum > 0
+offset = getOffset(peerNull);
+if mod(offset, 3) == 2
+B(x, y, z) = 0; B(x, y-1+floor(offset/3), z) = -1;
+elseif mod(offset, 3) == 0
+B(x, y, z) = 0; B(x+1, y-2+floor(offset/3), z) = -1;
+elseif mod(offset, 3) == 1
+B(x, y, z) = 0; B(x-1, y-1+floor(offset/3), z) = -1;
+end
+continue
+end
+end
+end
+pos = B;
+end
+```
+
+function pos = permeate(B, time, W, H)
+[wx, wy, wz] = getWaterPos(B);
+sea = [wx, wy, wz];
+groundIndexes = find(wz == 2).';
+p = H/(2*W);
+a1 = time/(2*W); a2 = round(a1);
+if (a1 <= a2) && ~isempty(groundIndexes)
+for index = groundIndexes
+if rand(1) <= p
+x = sea(index,1); y = sea(index,2); z = sea(index,3);
+if x == 0 || y == 0
+continue
+end
+B(x, y, z) = 0;
+end
+end
+end
+pos = B;
+end
+
+function wave = createWave(B, time, W, H)
+d = size(B);
+x = d(1) - 1;
+a1 = time/(2*W); a2 = round(a1);
+if a1 <= a2
+wave = B;
+else
+h = ceil(2*H*sin(2*pi*time/W));
+if h < 2
+h = 2;
+end
+B(x, :, 2:h+1) = -1;
+wave = B;
+end
+end
+
+function equation = getPlane(A, B, C)
+syms x y jz
+D = [ones(4,1), [[x, y, jz]; A; B; C]];
+detd = det(D);
+z = solve(detd, jz);
+equation = z;
+end
+
+```matlab
+function k = getUnstableFactor(B, Pos, factor)
+x = Pos(1); y = Pos(2); z = Pos(3);
+p = 0.02;
+area = B(x-1:x+1, y-1:y+1, z-1:z+1);
+angles = [1,3,7,9,19,21,25,27];             % corner cells of the 3x3x3 neighborhood
+edges = [2,4,6,8,10,12,16,18,20,22,24,26];  % edge cells
+centers = [5,11,13,15,17,23];               % face-center cells
+indexes = find(area == 1).';
+for index = indexes
+dz = ceil(index/9);
+dx = mod(mod(index, 9), 3); if dx == 0, dx = 3; end
+dy = ceil(mod(index, 9)/3); if dy == 0, dy = 3; end
+if ~isempty(find(angles == index, 1))
+if dx > 2, a = area(2:dx,:,:); else, a = area(dx:2,:,:); end
+if dy > 2, a = a(:,2:dy,:); else, a = a(:,dy:2,:); end
+if dz > 2, a = a(:,:,2:dz); else, a = a(:,:,dz:2); end
+count = length(find(a == -1));
+factor = factor - p*count;
+elseif ~isempty(find(edges == index, 1))
+if dx == 2
+if dy > 2, a = area(:,2:dy,:); else, a = area(:,dy:2,:); end
+if dz > 2, a = a(:,:,2:dz); else, a = a(:,:,dz:2); end
+elseif dy == 2
+if dx > 2, a = area(2:dx,:,:); else, a = area(dx:2,:,:); end
+if dz > 2, a = a(:,:,2:dz); else, a = a(:,:,dz:2); end
+elseif dz == 2
+if dy > 2, a = area(:,2:dy,:); else, a = area(:,dy:2,:); end
+if dx > 2, a = a(2:dx,:,:); else, a = a(dx:2,:,:); end
+end
+count = length(find(a == -1));
+factor = factor - p*count;
+elseif ~isempty(find(centers == index, 1))
+if dx ~= 2
+if dx > 2, a = area(2:dx,:,:); else, a = area(dx:2,:,:); end
+elseif dy ~= 2
+if dy > 2, a = area(:,2:dy,:); else, a = area(:,dy:2,:); end
+elseif dz ~= 2
+if dz > 2, a = area(:,:,2:dz); else, a = area(:,:,dz:2); end
+end
+count = length(find(a == -1));
+factor = factor - p*count;
+end
+end
+k = factor;
+end
+```
+
+function offset = getOffset(posNull)
+if ~isempty(find(mod(posNull, 3) == 1, 1))
+k = find(mod(posNull, 3) == 1);
+index = posNull(k).';
+randIndex = randi(length(index));
+offset = index(randIndex);
+else
+randIndex = randi(length(posNull));
+offset = posNull(randIndex);
+end
+end
+
+```matlab
+function num = getClassNum(B, Pos, rowClass, assignClass)
+x = Pos(1); y = Pos(2); z = Pos(3);
+area = B(x-1:x+1, y-1:y+1, z-1:z+1);
+num = length(find(area == assignClass));
+if rowClass == assignClass
+num = num - 1;
+end
+end
+```
+
+```matlab
+function num = getSandNum(B, h)
+[~, ~, z] = getSandPos(B); % z-coordinates of all sand cells (the original line was garbled)
+num = length(find(z == (h + 1)));
+end
+```
+
+function draw(B, time, g)
+dim = size(B);
+[x1, y1, z1] = getWaterPos(B);
+[x2, y2, z2] = getSandPos(B);
+[x3, y3, z3] = getLandPos(B);
+figure(1)
+clf('reset'); hold off
+h3 = scatter3(x3, y3, z3, 'MarkerEdgeColor', [1 .75 0], 'MarkerFaceColor', [1 .75 0]);
+h3.DisplayName = 'Land Cell';
+xlim([0 dim(1)]); ylim([1 dim(2)]); zlim([0 dim(3)]);
+title(['time = ', num2str(time), ', G = ', num2str(g)]);
+hold on
+h1 = scatter3(x1, y1, z1, 'MarkerEdgeColor', [0 .75 .75], 'MarkerFaceColor', [0 .75 .75]);
+h1.DisplayName = 'Water Cell';
+hold on
+h2 = scatter3(x2, y2, z2, 'MarkerEdgeColor', [.6 .2 0], 'MarkerFaceColor', [.6 .2 0]);
+h2.DisplayName = 'Sand Cell';
+view(218, 46);
+end
+
+function [x, y, z] = getWaterPos(B)
+dim = size(B);
+[wX, wY] = find(B == -1);
+water = [wX, mod(wY, dim(2)), ceil(wY./dim(2))];
+x = water(:,1); y = water(:,2); z = water(:,3);
+end
+
+function [x, y, z] = getSandPos(B)
+dim = size(B);
+[sX, sY] = find(B == 1);
+sand = [sX, mod(sY, dim(2)), ceil(sY./dim(2))];
+x = sand(:,1); y = sand(:,2); z = sand(:,3);
+end
+
+function [x, y, z] = getLandPos(B)
+dim = size(B);
+[lX, lY] = find(B == 2);
+land = [lX, mod(lY, dim(2)), ceil(lY./dim(2))];
+x = land(:,1); y = land(:,2); z = land(:,3);
+end
\ No newline at end of file
diff --git a/MCM/2020/B/2010821/2010821.md b/MCM/2020/B/2010821/2010821.md
new file mode 100644
index 0000000000000000000000000000000000000000..2151eb9860680f4920e3539525ee5bdfe42275b1
--- /dev/null
+++ b/MCM/2020/B/2010821/2010821.md
@@ -0,0 +1,650 @@
+# How to build a durable sandcastle foundation?
+
+# Summary
+
+In recent years, as sandcastle art has become more popular, more and more ingenious sandcastle artworks have been created. However, these perfect works of art eventually disappear, and how to preserve them for a long time has become an obstacle to the advancement of sandcastle art.
+
+First, the optimal geometry under the dual erosion of waves and tides is determined. After reasonably limiting the range of candidate shapes, we establish the Dynamic Model of the Sandcastle Foundation, with the combined sand-carrying capacity of waves and tides as the evaluation index, and use COMSOL to accurately simulate the motion of the sandcastle foundation. On this basis, by creating a Sandcastle Foundation Damage Index, we reasonably determine the duration of the sandcastle. Finally, we use a Discrete Global Optimization Algorithm Based on the Successive Descent Method to determine that the optimal sandcastle foundation shape is a cylinder.
+
+Secondly, the optimal water-sand mixing ratio is determined. Based on Model 1, a model of Water-sand Aggregation-water-sand Ratio Relationship was first established. By limiting the allowed degree of polymerization, the water-sand ratio is limited to a reasonable range. We again use the discrete global optimization algorithm based on the successive descent method to efficiently find the optimal water-sand mix ratio of 0.739.
+
+Further, an optimal geometry considering rain erosion is determined. We define the sandcastle collapse limit coefficient, and comprehensively establish the Rainwater Erosion Resistance Quantification Model by integrating the ability to resist rainwater infiltration and the evacuation ratio of the sandcastle structure. Further, A Quantified Model of Sandcastle's Ability to Resist Wave Erosion was established. Considering two indicators of anti-erosion ability and establishing a fuzzy comprehensive evaluation model, we determine that the optimal shape is still a cylinder.
+
+Finally, we improve the stability of the sandcastle foundation by humidifying it and spraying adhesive when the air humidity is lowest. A Humidity Prediction Model based on the RBF Neural Network Algorithm was established, and we used it to predict the humidity of Venice Beach on August 1. Based on the air humidity data of the previous month, we divided the weather into sunny and cloudy days and predicted the air humidity trends for each on August 1. Finally, we decided to humidify the sandcastle foundation and spray the adhesive at 13:30 on a cloudy day and at 13:00 on a sunny day.
+
+# Contents
+
+# 1. Introduction
+
+1.1 Background
+1.2 Restatement of the Problem
+
+# 2. Assumption and Justification
+
+# 3. Glossary and Notation
+
+3.1 Glossary
+3.2 Notation
+
+# 4. Optimal Shape Model Based on Sandcastle Foundation Dynamics Model
+
+4.1 Model Overview
+4.2 Quantitative Model of Sand-bearing Capacity
+4.3 Sandcastle Foundation Dynamics Equation Model
+4.4 Solution of Sandcastle Foundation Dynamics Equation Model
+4.5 Geometric Shape Solving Model Based on Discrete Global Optimization Algorithm
+
+# 5. Optimal Water and Sand Ratio Model
+
+5.1 Model Overview
+5.2 Quantitative Model of Water and Sand Polymerization Degree
+5.3 Solving the Model
+
+# 6. Fuzzy Comprehensive Evaluation Model Based on Rain Erosion Resistance
+
+6.1 Model Overview
+6.2 Evaluation Model for Rain Erosion Resistance
+6.3 Optimal Shape Fuzzy Comprehensive Evaluation Model
+
+# 7. Humidity Prediction Model Based on RBF Neural Network Algorithm
+
+7.1 Model Overview
+7.2 Model Preparation
+7.3 RBF Neural Network Algorithm
+
+# 8. Sensitivity Analysis
+
+8.1 Sensitivity Analysis of the Optimal Shape Model
+8.2 Sensitivity Analysis of the Optimal Water-Sand Mixing Model
+8.3 Sensitivity Analysis of the Fuzzy Comprehensive Evaluation Model
+8.4 Sensitivity Analysis of the Sandcastle Warning Protection Model
+
+# 9. Evaluation and Promotion of Model
+
+9.1 Strength and Weakness
+9.2 Promotion
+
+# 10. Conclusions
+
+# References
+
+# Memo
+
+# Appendix
+
+# 1. Introduction
+
+# 1.1 Background
+
+Nowadays, people often face heavy schoolwork or busy jobs, and spending time at the beach lets them enjoy the cool sea breeze, warm sunshine, and soft sand. Therefore, more and more people choose the beach for leisure. Building sandcastles is a must-do beach activity: it develops children's creativity and greatly reduces adults' stress. When people build sandcastles, they want them to be beautiful and to last long. It is therefore worth studying how to keep a sandcastle on the beach for the longest time.
+
+# 1.2 Restatement of the Problem
+
+Building sandcastles is one of the most popular activities for children on the beach, and every child wants their sandcastle to last a long time. We were therefore asked to study the persistent retention of sandcastles on the beach. The specific issues are as follows:
+
+- The following environmental factors are assumed to be the same: the distance of the beach from the water, the kind of sand, the amount of sand, and the water-sand ratio. We were asked to build a model of the action of waves and tides to explore the basic geometry of the sandcastle that can last the longest on the beach.
+- Without using any additional additives, how should the water-sand ratio of the sandcastle foundation be changed so that the sandcastle lasts the longest on the beach?
+- Building on the original model, we consider the impact of rain on the foundation of the sandcastle, and upgrade and optimize the model to further explore the optimal geometry of the sandcastle.
+- By consulting the relevant literature, we investigate other strategies to improve the retention time of sandcastles.
+
+# 2. Assumption and Justification
+
+We make some general assumptions to simplify our model. These assumptions and corresponding justification are listed below:
+
+- It is assumed that the base area and height of each considered shape are the same, and that there is no significant difference in the sand quality and sand quantity of our sandcastle foundation.
+- Sandcastle foundations are built at approximately the same distance from the water.
+- In the process of natural erosion, we ignore the possibility of huge waves or winds sweeping the sandcastle far from its original location.
+- We only consider the impact of waves, tides and rain on the water-sand ratio of the sandcastle foundation.
+- The sand particles at the sandcastle foundation are evenly distributed.
+- Assume that people will spray the adhesive in time after the warning occurs, and the adhesive viscosity lasts at least 8 hours.
+
+More detailed assumptions will be listed if needed.
+
+# 3.Glossary and Notation
+
+# 3.1 Glossary
+
+- Wave current: refers to the combined effect of waves and tides.
+- Damage index: It takes the change of the base area and height of the sandcastle as a measuring factor to describe the damage degree of the sandcastles.
+
+# 3.2 Notation
+
+Table 1 Notation
+
+
| Symbols | Definition |
| Fs | sand-bearing capacity |
| G | Damage Index |
| Q | Water and sand polymerization degree |
| np | Water-sand ratio |
| Fiu | ability to resist rainwater infiltration |
| Ri | sandcastle structure evacuation ratio |
| H1i | rain erosion resistance |
| H2i | wave current erosion resistance |
| g(●) | Neural Network Radial Basis Function |
+
+# 4. Optimal Shape Model Based on Sandcastle Foundation Dynamics Model
+
+# 4.1 Model Overview
+
+In terms of shape selection, considering the common basic sandcastle shapes that people are accustomed to using, we only consider three candidates: a cuboid with a certain aspect ratio, an elliptical cylinder with a certain axis ratio, and a sector cylinder with a certain radius-to-arc-length ratio, and we select the best shape among them. First, we establish a model for assessing the sand-bearing capacity of waves. Based on this, the basic dynamics equation of the sandcastle foundation is established. Secondly, by creating a sandcastle foundation damage index, we determine the duration of a sandcastle of a specific shape. Further, based on the durations of these specific shapes, we use a discrete global optimization algorithm based on successive descent methods to determine the optimal sandcastle shape in the global state[1].
+
+# 4.2 Quantitative model of sand-bearing capacity
+
+Because waves and tides are the two main driving forces moving the sand of a sandcastle, we first consider how the sandcastle is impacted under their combined action, and choose to quantify this impact by the sand-bearing capacity.
+
+Considering tidal and wave sand-bearing capacity:
+
+$$
+C _ {*} = C _ {* C} + C _ {* W} \tag {1}
+$$
+
+The capacity of sand-bearing under tide effect can be expressed as:
+
+$$
+C _ {* C} = \beta_ {C} \frac {n _ {p} V}{1 - n _ {p}} \frac {\left(u ^ {2} + v ^ {2}\right)}{C _ {z} ^ {2} h _ {w} \omega_ {s}} \tag {2}
+$$
+
+Where,
+
+$n_p$ represents the initial value of the water-sand ratio of the sandcastle foundation, which is determined by the volume ratio; $V$ is the initial value of the sandy volume of the sandcastle foundation; $u$ and $\nu$ represents the components of the water flow velocity $x$ and $y$ the sum of the waves under the action of waves and tides; $h_w$ is the height of seawater level; $\beta_C$ is the positive definite coefficient; $\omega_s$ indicates sand-bearing speed.
+
+- Regarding the capacity of sand-bearing under the action of waves, we consider the unbroken wave and the broken wave respectively, which can be expressed as:
+
+$$
+C _ {* W} = \beta_ {1} \frac {n _ {p} V}{1 - n _ {p}} \frac {f _ {W} H _ {r m s} ^ {3}}{T ^ {3} g h _ {w} \omega_ {s} \sinh ^ {3} \left(k h _ {w}\right)} + \beta_ {2} \frac {1}{1 - n _ {p}} \frac {D _ {B}}{h _ {w} \omega_ {s}} \tag {3}
+$$
+
+Where,
+
+$\beta_{1}$ and $\beta_{2}$ are respectively the positive definite coefficients of the unbroken wave and the broken wave; $f_{w}$ represents the wave friction coefficient; $H_{rms}$ is the mean square wave height, $H_{rms} = H_s / \sqrt{2}$ , $H_{s}$ is the effective wave height; $T$ is the wave period; $k$ is the wave quantization value of one period; $D_B$ represents the wave energy dissipation due to the wave breaking.
+
+We build the capacity of sand-bearing $F_{s}$ as:
+
+$$
+F _ {s} = \alpha_ {r} \omega_ {s} \left(C _ {*} - V\right) \tag {4}
+$$
+
+Where,
+
+$\alpha_{r}$ indicates the sand settling coefficient.
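As a concrete illustration, the chain from equations (1)-(4) can be evaluated numerically. The parameter values passed below are hypothetical placeholders chosen so that the $1 - n_p$ factors stay positive; they are not the calibrated values used later in the paper.

```python
import math

def sand_bearing_capacity(n_p, V, u, v, C_z, h_w, omega_s, beta_C,
                          beta_1, beta_2, f_W, H_rms, T, k, D_B,
                          g=9.81, alpha_r=1.0):
    """Evaluate F_s = alpha_r * omega_s * (C_* - V) from equations (1)-(4)."""
    # Eq (2): sand-bearing capacity under the tide
    C_sC = beta_C * (n_p * V / (1 - n_p)) * (u**2 + v**2) / (C_z**2 * h_w * omega_s)
    # Eq (3): unbroken-wave term plus broken-wave term
    C_sW = (beta_1 * (n_p * V / (1 - n_p))
            * f_W * H_rms**3 / (T**3 * g * h_w * omega_s * math.sinh(k * h_w)**3)
            + beta_2 * D_B / ((1 - n_p) * h_w * omega_s))
    C_s = C_sC + C_sW                      # Eq (1)
    return alpha_r * omega_s * (C_s - V)   # Eq (4)

# Hypothetical inputs (n_p must be < 1 here, since it enters as 1 - n_p)
F_s = sand_bearing_capacity(n_p=0.3, V=1.0, u=0.5, v=0.5, C_z=0.249,
                            h_w=2.0, omega_s=1.489, beta_C=0.023,
                            beta_1=0.3, beta_2=0.001, f_W=1.57,
                            H_rms=5.185, T=5.0, k=0.5, D_B=6.301)
print(F_s)
```

With these placeholder values the capacity $C_*$ falls below the sand volume $V$, so $F_s$ comes out negative, i.e. net settling rather than entrainment.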
+
+# 4.3 Sandcastle Foundation dynamics equation model
+
+We take a vertex of the three-dimensional geometry as the origin to establish a three-dimensional Cartesian coordinate system. Then the changes of $x, y$ and $h$ of our sandcastle foundation over time $t$ satisfy the following dynamics equation:
+
+$$
+\frac {\partial (h V)}{\partial t} + \frac {\partial (h u V)}{\partial x} + \frac {\partial (h v V)}{\partial y} = \frac {\partial}{\partial x} \left(v _ {x} \frac {\partial (h V)}{\partial x}\right) + \frac {\partial}{\partial y} \left(v _ {y} \frac {\partial (h V)}{\partial y}\right) + F _ {s} \tag {5}
+$$
+
+Where,
+
+$h$ indicates the height of the sandcastle foundation; $\nu_{x}$ and $\nu_{y}$ is the horizontal diffusion coefficient.
+
+In the dynamic simulation process, we give the constraint that the normal flux is 0 to determine the solid boundary condition as:
+
+$$
+\vec {\phi} \frac {\partial V}{\partial \vec {n}} = 0 \tag {6}
+$$
+
+Where,
+
+$\vec{\phi}$ represents the total flux through the sandcastle foundation; $\vec{n}$ represents the normal vector.
+
+At the same time, we give the time course of the sand volume as the open boundary condition:
+
+$$
+V = V (x, y, t) \tag {7}
+$$
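A minimal sketch of how equation (5) could be advanced in time with an explicit finite-difference scheme, under the simplifying assumptions of constant velocities $u, v$, constant diffusion coefficients, a constant source term, and the zero-normal-flux boundary of equation (6) enforced by zero-gradient padding. Grid sizes and parameter values are illustrative only; the paper itself solves the equation in COMSOL.

```python
import numpy as np

def step(hV, u, v, nu_x, nu_y, Fs, dx, dy, dt):
    """One explicit Euler step of eq. (5) for the field hV = h*V:
    d(hV)/dt + u d(hV)/dx + v d(hV)/dy = nu_x d2(hV)/dx2 + nu_y d2(hV)/dy2 + Fs,
    with the zero-normal-flux boundary of eq. (6) via edge padding."""
    p = np.pad(hV, 1, mode="edge")   # zero-gradient (no-flux) borders
    adv = (u * (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * dx)
           + v * (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * dy))
    dif = (nu_x * (p[2:, 1:-1] - 2 * hV + p[:-2, 1:-1]) / dx**2
           + nu_y * (p[1:-1, 2:] - 2 * hV + p[1:-1, :-2]) / dy**2)
    return hV + dt * (-adv + dif + Fs)

hV = np.zeros((20, 20))
hV[8:12, 8:12] = 1.0                 # toy initial mound
for _ in range(50):
    hV = step(hV, u=0.1, v=0.1, nu_x=0.05, nu_y=0.05, Fs=0.0,
              dx=1.0, dy=1.0, dt=0.1)
print(hV.sum())
```

With $F_s = 0$ and no-flux borders the total mass of the mound is conserved while diffusion spreads it and advection drifts it, which is the qualitative behavior the dynamics equation encodes.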
+
+# 4.4 Solution of Sandcastle Foundation Dynamics Equation Model
+
+# 4.4.1 Solution of Cuboid Dynamics Simulation
+
+- Step1. Initialize the parameters of the sandcastle foundation dynamics equation. We take the cuboid as an example and define the aspect ratio of its bottom as the shape factor $e_1 = l : w$ . The initial values of $x, y$ in the equation correspond to $l, w$ , respectively. In combination with the relevant literature, the initial parameter values of the dynamic equation are shown in the following table.
+
+Table 2 Parameter Determination
+
+| symbol | Numerical value | symbol | Numerical value | symbol | Numerical value |
| βC | 0.023 | β1 | 0.300 | T | 5.000 |
| Cz | 0.249 | β2 | 0.001 | Hrms | 5.185 |
| ωs | 1.489 | fW | 1.570 | DB | 6.301 |
+
+Table 3 Assignment Table for Initialization Variables
+
+| symbol | Numerical value | symbol | Numerical value |
| hw | 900 | np | 6.0 |
| u | 30 | x | 1.0 |
| v | 30 | y | 1.0 |
| V1 | 100 | | |
+
+
+FIG 1. Cuboid simulation diagram when $t$ is equal to zero
+
+We use COMSOL to solve the dynamic equations and simulate the pressure exerted by the wave current on the bottom surface of the $e_1 = 1.0$ cuboid. The pressure distribution at time $t = 0$ is shown in the figure above: the wave current starts from the red region, and the pressure gradually decreases.
+
+- Step2. Establish a damage index indicator model. In order to describe the degree of damage of the sandcastle and better explain the meaning of duration, we take the change in base area and height as the measuring factor, and give the definition of damage index as
+
+$$
+G = \mu_ {s} \Delta S + \mu_ {h} \Delta h \tag {8}
+$$
+
+Where,
+
+$\Delta S$ and $\Delta h$ represent respectively the increase in the bottom area of the sandcastle and the decrease in the minimum height of the sandcastle, over a period of time; $\mu_{s}$ and $\mu_{h}$ are the influence factors of the change in the bottom area and the height, respectively.
+
+It should be noted that in practice the changes in both base area and height are uncertain. As the sandcastle foundation undergoes wave and tidal action, the sand at the bottom layer spreads under erosion, reducing the height of the foundation and increasing its base area. The simplified effect of this process on the sandcastle is shown in FIG 2 below:
+
+
+FIG 2. Simplified model of sandcastle foundation erosion
+
+The cuboid in the figure represents the original state of the sandcastle foundation, and the dashed line represents its state after erosion. The increase of the base area and the decrease of the height of the eroded state relative to the original state are the quantities that enter the damage index.
+
+- Step3. Find how the damage index evolves over time. After the sandcastle has been eroded by waves and tides for a period of time, we obtain the new water level $h_w'$ , the wave current velocities $u'$ and $v'$ in the $x$ and $y$ directions, the water-sand ratio $n_p'$ , and the resulting $\Delta S$ and $\Delta h$ of the sandcastle foundation. We discretize time, set the completion time of the sandcastle as $t = 0$ , and then take equally spaced time points to draw a scatter plot, as shown in the figure.
+
+
+FIG 3. Damage index over time
+
+The diagram illustrates the damage index as a function of time, with the equally spaced points connected by line segments. Note that the discrete points after point B lie almost on the same horizontal line, which indicates that the sandcastle is completely damaged, with corresponding damage index $G_{\mathrm{max}}$ . On this basis, we define a damage coefficient $\mu = 0.52$ : when $G = 0.52G_{\mathrm{max}}$ , the sandcastle is considered damaged to the maximum degree it can bear. We take point A, the time at which this maximum bearable degree of damage is reached, as the critical time point, and the time corresponding to that point is the duration $T$ (in days).
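The rule of Step 3 (the duration is the first time the damage index reaches $0.52\,G_{\max}$, point A in FIG 3) can be sketched as follows. The sampled damage-index values below are hypothetical; only the threshold $\mu = 0.52$ comes from the text.

```python
def duration(times, G, mu=0.52):
    """Return the first sampled time at which the damage index reaches
    mu * Gmax (the critical point A in FIG 3); None if never reached."""
    G_max = max(G)
    for t, g in zip(times, G):
        if g >= mu * G_max:
            return t
    return None

# Hypothetical equally spaced samples of G(t); the curve flattens near Gmax,
# like the points after B in FIG 3.
times = [0, 1, 2, 3, 4, 5, 6]
G = [0.0, 0.8, 1.9, 3.1, 4.4, 4.9, 5.0]
print(duration(times, G))   # first t with G >= 0.52 * 5.0 = 2.6
```

For these samples the threshold $0.52 \cdot 5.0 = 2.6$ is first crossed at $t = 3$, which would be reported as the duration.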
+
+At the same time, we define the duration of any sandcastle foundation constructed from a three-dimensional geometric shape from the stacking to the critical time point as $T$ . The longer the duration, the better the shape corresponds to the sandcastle foundation.
+
+We simulate the damage index and duration of 10 groups of cuboids with different shape factors $e_1$ under the action of waves and tides. It should be noted that these 10 data sets are only the first step toward the global optimal solution; the next step is described in the following model. The 10 data sets are shown in the following table:
+
+Table 4 Corresponding duration of cuboids with different aspect ratios
+
+| Shape factor | Duration | Shape factor | Duration |
| 1.0 | 2.5 | 3.5 | 1.2 |
| 1.5 | 2.4 | 4.0 | 1.4 |
| 2.0 | 2.1 | 4.5 | 2.8 |
| 2.5 | 2 | 5.0 | 2.0 |
| 3.0 | 1.8 | 5.5 | 2.6 |
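From Table 4, the best cuboid shape factor among the ten simulated candidates can be read off directly. As the text notes, these ten points only seed the global search of Section 4.5; the sketch below simply picks the best sampled point.

```python
# Duration (days) per cuboid aspect ratio e1, transcribed from Table 4
durations = {1.0: 2.5, 1.5: 2.4, 2.0: 2.1, 2.5: 2.0, 3.0: 1.8,
             3.5: 1.2, 4.0: 1.4, 4.5: 2.8, 5.0: 2.0, 5.5: 2.6}

best_e1 = max(durations, key=durations.get)
print(best_e1, durations[best_e1])   # → 4.5 2.8
```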
+
+# 4.4.2 Simulation solution of dynamics equations of elliptic cylinder and sector cylinder
+
+We consider the damage index and duration corresponding to the parameters of the other two common three-dimensional geometric shapes. The detailed data results of the simulation are shown in the appendix.
+
+- We define the ratio of the long axis to the short axis of the ellipse as its shape factor, $e_2 = a:b$ . Based on this, a dynamic-equation solving model is established, and the simulation graph at $e_2 = 1$ is shown below:
+- Taking the ratio of the radius $r$ of the sector cylinder to the arc length $l$ as the coefficient $e_3$ , we take the sector cylinder with $e_3 = 1 / \pi$ as an example and run the following simulation:
+
+
+FIG 4. Cylinder simulation diagram
+
+
+FIG 5. Sector cylinder simulation diagram
+
+# 4.5 Geometric Shape Solving Model Based on Discrete Global Optimization Algorithm
+
+# 4.5.1 Model Overview
+
+By solving the dynamic equation of the sandcastle foundation, we can get the duration $T_{i}(i = 1,2,3)$ of the sandcastle foundation shape under the control of $e_{i}(i = 1,2,3)$ . However, in order to find the optimal value of $T_{i}$ , we establish an optimal three-dimensional geometric model with a Discrete Global Optimization Algorithm as the core. On this basis, we can find the optimal coefficient value $e_{i}(i = 1,2,3)$ of any of the three shapes, so that the chosen shape has the longest duration.
+
+# 4.5.2 Model establishment and solution
+
+We still take the cuboid as an example. Through simulation, we can obtain the duration under multiple sets of coefficients. In this regard, we use these data to deduce the Discrete Global Optimization Algorithm based on the successive descent method. The specific steps are as follows:
+
+Step1. Successive descent to find discrete local minimum points. (1) Let the set $E \subset R^n$ be the selection points of the cuboid shape factor, and let $\Omega$ be the current set of search directions. (2) For a direction $d^+ \in \Omega$ , if $f(e_k + d^+) \geq f(e_k)$ , let $\Omega \coloneqq \Omega \setminus \{d^+\}$ and repeat this step; if $f(e_k + d^+) < f(e_k)$ , move to $e_k + d^+$ and restore $\Omega$ . (3) When $\Omega = \emptyset$ , no remaining direction improves $f$ , and the current point is taken as the discrete local minimizer $e_k^+$ .
+
+Step2. Construct a discrete fill function. Set the initial conditions $\varepsilon = 10^{-5}$ , $r = 1$ , $q_0 = 0.01$ , and the direction set $D = D_0 = \{\pm p_j, j = 1,2,\dots,n\}$ . The constructed discrete fill function is:
+
+$$
+F \left(e _ {k}, e _ {k} ^ {+}, q, r\right) = \frac {1}{q + \left\| e _ {k} - e _ {k} ^ {+} \right\|} \varphi_ {q} \left(\max \left\{f \left(e _ {k}\right) - f \left(e _ {k} ^ {+}\right) + r, 0 \right\}\right) \tag {9}
+$$
+
+Step3. If $r \leq \varepsilon$ , the algorithm terminates and the current local minimizer $e_k^+$ is taken as the discrete global minimizer; otherwise, go to the next step.
+
+Step4. If $D \neq \emptyset$ , go to Step 6; otherwise, go to the next step.
+
+Step5. If $q < \varepsilon \times 10^{-2}$ , set $r = r / 10$ , $q = q_0 / 10$ , $D = D_0$ and go to Step 2; otherwise, let $q = q / 10$ and go to Step 2.
+
+Step6. Take any direction $d \in D$ , set $D = D - d$ , enter the inner loop phase, change the initial value of $D$ , and repeat the corresponding steps. When the parameter $r$ is sufficiently small, it is believed that there is no better local minimum point in the selected point set $E$ .
+
+From the dynamic model, we can get the corresponding duration $T$ for each assignment, and with $K = 1 / T$ as the objective function, we optimize it by the algorithm:
+
+$$
+\min K = \min \left\{f \left(e _ {k}\right) \mid e _ {k} \in R ^ {n} \right\} \tag {10}
+$$
+
+$$
+\max T = \frac {1}{K} \tag {11}
+$$
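Steps 1-6 above can be sketched in simplified form. The code below is an illustrative assumption, not the authors' implementation: it performs the discrete successive descent of Step 1 over a grid of shape factors and, instead of the fill-function restarts of Steps 2-6, simply restarts the descent from every grid point; the duration profile `T` is an invented stand-in for the dynamics simulation.

```python
# Discrete successive descent over a grid of shape factors, minimizing
# K = 1/T as in formula (10). Restarting from every grid point replaces
# the fill-function restarts for this small sketch.

def successive_descent(f, x0, step, lo, hi):
    """Move to an improving neighbor until none exists (discrete local min)."""
    x = x0
    while True:
        better = [n for n in (x - step, x + step)
                  if lo <= n <= hi and f(n) < f(x)]
        if not better:
            return x
        x = min(better, key=f)

def discrete_global_min(f, lo, hi, step):
    grid = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    return min((successive_descent(f, x, step, lo, hi) for x in grid), key=f)

# Toy duration profile peaking at e = 1.46 (illustrative only):
T = lambda e: 3.0 - (e - 1.46) ** 2
K = lambda e: 1.0 / T(e)  # objective of formula (10)

e_best = discrete_global_min(K, 1.0, 2.0, 0.02)
print(round(e_best, 2))
```

With a unimodal profile every restart converges to the same point; the restarts matter only when the duration curve has several local optima.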
+
+Similarly, applying the Discrete Global Optimization Algorithm to the other two shape models, we find the maximum duration $T$ under each of the three shape factors, as shown in the following table:
+
+Table 5 Best shape parameters for the three shapes
+
+| Geometric shape | Shape factor | Duration/day |
| --- | --- | --- |
| Cuboid | 1.46 | 2.9 |
| Elliptic cylinder | 1.00 | 4.8 |
| Sector cylinder | 0.3π | 3.7 |
+
+From the data in the above table, we can clearly conclude that when the base area is constant and the ratio of the long axis to the short axis is 1, the sandcastle foundation can last the longest time. We approximate it as a cylinder with a maximum duration of 4.8 days.
+
+# 5. Optimal water and sand ratio model
+
+# 5.1 Model Overview
+
+We use the optimal shape model based on the sandcastle foundation dynamics model to obtain the optimal shape of the sandcastle foundation with the longest duration when the water-sand ratio is constant. However, the water-sand ratio itself also directly affects the erosion resistance of the sandcastle foundation. We therefore introduce the equation relating the water-sand polymerization degree to the water-sand ratio and, at the same time, limit the allowed polymerization degree range to obtain a reasonable water-sand ratio range; then, by solving the model, the optimal water-sand ratio is obtained[2].
+
+# 5.2 Quantitative model of Water and sand polymerization degree
+
+We consider the aggregation of water and sand, leading to the concept of the degree of water-sand polymerization. We give the definition of the degree of polymerization of water and sand based on the volume change of the water and sand specific gravity before and after polymerization:
+
+$$
+Q = \left(1 - \frac {\frac {1}{\gamma_ {1}} V _ {1} + \frac {1}{\gamma_ {2}} V _ {2}}{V _ {1} + V _ {2}}\right) \times 100 \% \tag{12}
+$$
+
+Where,
+
+$V_{1}, V_{2}$ respectively represent the volume of sand and water before mixing; $\gamma_{1}$ and $\gamma_{2}$ represent the water absorption coefficient and the water solubility coefficient, respectively. We use these two coefficients to express the polymerization ability of water and sand. Defining the ratio $n_{p} = V_{1} / V_{2}$ , we simplify the equation with the water-sand ratio $n_{p}$ :
+
+$$
+Q = \left(1 - \frac {\left(\frac {n _ {p}}{\gamma_ {1}} + \frac {1}{\gamma_ {2}}\right)}{n _ {p} + 1}\right) \times 100 \% \tag{13}
+$$
+
+We use MATLAB to solve this equation and get the relationship curve between the degree of Water and sand polymerization and the ratio of water and sand, as shown in FIG 6.
+
+
+FIG 6. Degree of convergence of water and sand
+
+The degree of water-sand polymerization increases with the water-sand ratio. We must consider not only a relatively large cohesiveness, but also the optimization of erosion resistance, that is, the longest duration. Combining these with the actual situation, we choose the interval between the two points marked in the figure, where the viscosity is large, as our optimization constraint.
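Formula (13) is easy to evaluate numerically. Below is a minimal sketch (Python here, although the paper used MATLAB), assuming for $\gamma_{1}, \gamma_{2}$ the values $1.56$ and $2.86$ quoted for $\gamma, \gamma'$ in the sensitivity-analysis section:

```python
# Degree of water-sand polymerization Q, formula (13).
def polymerization_degree(n_p, gamma1=1.56, gamma2=2.86):
    """gamma1: water absorption coefficient, gamma2: water solubility
    coefficient (values assumed from the sensitivity analysis)."""
    return (1 - (n_p / gamma1 + 1 / gamma2) / (n_p + 1)) * 100  # percent

# Q at the optimal water-sand ratio reported later (n_p = 0.739):
print(round(polymerization_degree(0.739), 1))
```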
+
+# 5.3 Solving the Model
+
+We use the sandcastle foundation dynamics model in Problem 1 as the basis, fix the best shape (with its optimal shape factor) selected above, and treat the initial value of the water-sand ratio $n_{p}$ as a variable, discretizing the target $T$ in the same way:
+
+First, substitute into the dynamic equation (1); according to the changes $\Delta S$ and $\Delta h$ of the sandcastle foundation, the relationship between the damage index and time is obtained, and the constraint condition is added:
+
+$$
+0.7 < n _ {p} < 0.85 \tag {14}
+$$
+
+We record the duration at equally spaced values of $n_p$ :
+
+Table 6 Corresponding duration of different water-sand ratios
+
+| Water-sand ratio | Duration/day | Water-sand ratio | Duration/day |
| --- | --- | --- | --- |
| 0.70 | 4.7 | 0.78 | 5.1 |
| 0.72 | 4.9 | 0.80 | 5.0 |
| 0.74 | 5.2 | 0.82 | 4.3 |
| 0.76 | 4.8 | 0.84 | 4.2 |
+
+Using the Discrete Global Optimization Algorithm of the best shape model in Problem 1, we can conclude that when the water-sand ratio is $n_p = 0.739$ , the longest duration of the sandcastle foundation is achieved under the condition of ensuring good shape adhesion.
+
+# 6.Fuzzy comprehensive evaluation model based on rain erosion resistance
+
+# 6.1 Model Overview
+
+Aiming at the erosion of sandcastles by waves and tides, we have established models to obtain the optimal shape of the sandcastle foundation, a cylinder. However, the actual erosion risk facing sandcastles is not limited to the effects of waves and tides; we should also consider the erosion of rainwater. Based on the cuboid, elliptic cylinder, and sector cylinder at the optimal shape coefficients obtained in Problem 1, we further address the problem of rain erosion. We define the sandcastle collapse limit coefficient and establish the rain erosion resistance assessment model by comprehensively considering the ability to resist rainwater infiltration and the sandcastle structure evacuation ratio. On this basis, an evaluation model of sandcastle anti-wave erosion ability is constructed. Finally, a fuzzy comprehensive evaluation model is established to evaluate the optimal sandcastle shape under the combined action of waves and rain[3].
+
+# 6.2 Evaluation model for rain erosion resistance
+
+# 6.2.1 Establishment of Model
+
+Before establishing a fuzzy comprehensive evaluation model for the optimal shape of the sandcastle foundation, we set up the evaluation model of rain erosion resistance as follows:
+
+First, the sandcastle's ability to resist rainwater infiltration is established as follows[4]:
+
+$$
+F _ {u} ^ {i} = \sqrt {2 R _ {a} ^ {i} - 1} + \left(R _ {a} ^ {i} - \sqrt {2 R _ {a} ^ {i} - 1}\right) \tag {15}
+$$
+
+Where,
+
+$F_{u}^{i}(i = 1,2,3)$ represents the ability of the cuboid, cylinder and sector cylinder, respectively, to resist rain infiltration. $Q^{i}$ represents the degree of aggregation of the gravel for the three shapes (it appears in formula (16) below). $R_{a}^{i} = \kappa_{1}\cdot n_{p}\cdot n_{\text{collapse}}^{i}$ , where $\kappa_{1}$ is a positive coefficient. $n_{\text{collapse}}^{i}(i = 1,2,3)$ represents the sandcastle collapse limit coefficients of the three shapes, respectively.
+
+Secondly, the evacuation ratio indicator of the sandcastle structure is established as follows:
+
+$$
+R ^ {i} = \kappa_ {2} / \left[ Q ^ {i} \left(V _ {f} - V _ {D L}\right) \right] \tag {16}
+$$
+
+Where,
+
+$R^{i}(i = 1,2,3)$ represents the evacuation ratio of the three shapes. $\kappa_{2}$ is a positive coefficient. $V_{f}$ represents the shear capacity of sandcastle gravel. $V_{DL}$ represents the shear force on the sand under the rain gravity load.
+
+Finally, an assessment model for rainwater erosion resistance is established:
+
+$$
+H _ {1} ^ {i} = F _ {u} ^ {i} \cdot n _ {\text {collapse}} ^ {i} / R ^ {i} \tag {17}
+$$
+
+Where,
+
+$H_{1}^{i}(i = 1,2,3)$ represents rainwater erosion resistance of the three shapes.
+
+# 6.2.2 Solution of Model
+
+In order to solve the model, we consulted the literature and set the parameters as shown in the following table:
+
+Table 7 Parameters of the rain erosion resistance model
+
| Symbols | Numerical value |
| --- | --- |
| $(\kappa_1, \kappa_2)$ | (10.6, 12.1) |
| $n_p$ | 0.125 |
| $(n_{\text{collapse}}^1, n_{\text{collapse}}^2, n_{\text{collapse}}^3)$ | (1, 1.24, 1.56) |
| $V_f$ | 305 |
| $V_{DL}$ | 355 |
+
+Then, we obtained the rainwater erosion resistance of cuboids, cylinders, and sectors as shown in the following table:
+
+Table 8 Resistance to rain erosion
+
| Geometric shape | Numerical value |
| --- | --- |
| Cuboid | 50.6 |
| Cylinder | 63.5 |
| Sector cylinder | 68.1 |
+
+# 6.3 Optimal Shape Fuzzy Comprehensive Evaluation Model
+
+# 6.3.1 Establishment of Model
+
+Based on the evaluation model of sandcastle resistance to rain erosion, we first establish the evaluation model of sandcastle resistance to wave current erosion as follows:
+
+$$
+H _ {2} ^ {i} = \kappa_ {3} T _ {i} \tag {18}
+$$
+
+Where,
+
+$H_{2}^{i}(i = 1,2,3)$ represents the resistance to wave current erosion of the three shapes. $\kappa_{3}$ stands for positive definite coefficient.
+
+It should be noted that, in order to determine whether the cuboid, elliptic cylinder and sector cylinder with their optimal shape coefficients remain optimal, we establish the optimal shape fuzzy comprehensive evaluation model based on the numerical values of the different capabilities of the three shapes, as follows:
+
+- Step1. Determine the membership functions. We take the ratio of each shape's ability to resist wave current erosion or rain erosion to the total over the three shapes as its membership degree, so we establish the membership functions as follows:
+
+$$
+\mu_ {H 1} \left(H _ {1} ^ {i}\right) = \frac {H _ {1} ^ {i}}{\sum_ {i = 1} ^ {3} H _ {1} ^ {i}} \tag {19}
+$$
+
+$$
+\mu_ {H 2} \left(H _ {2} ^ {i}\right) = \frac {H _ {2} ^ {i}}{\sum_ {i = 1} ^ {3} H _ {2} ^ {i}} \tag {20}
+$$
+
+- Step2. Calculate the membership table based on the membership functions. We substitute the values of the two capabilities of the three shapes into Formulas (19) and (20), calculate the membership table of the optimal shape, and establish a fuzzy relation matrix.
+- Step3. Determine the weights of the two capabilities in the optimal evaluation. When our sandcastle faces seawater, tidal erosion and rainwater erosion at the same time, the erosion of the sandcastle by rainwater is much greater than that by seawater and tide. Therefore, we set the weights of the two capabilities in decision making to $A = (0.3, 0.7)$ .
+- Step4. Calculate the comprehensive evaluation results. The calculation formula is as follows:
+
+$$
+B = A \cdot R \tag {21}
+$$
+
+# 6.3.2 Solution of Model
+
+In order to solve the model, we determined the calculation parameters by referring to online information and cited reference values:
+
+Table 9 Parameters of the wave current erosion resistance model
+
| Symbols | Numerical value |
| --- | --- |
| $\kappa_3$ | 13.44 |
| $(T_1, T_2, T_3)$ | (2.6, 6.8, 5.2) |
+
+Then, we obtained the anti-wave erosion ability of the cuboid, cylinder and sector cylinder as shown in the following table:
+
+Table 10 Resistance to wave erosion
+
| Geometric shape | Numerical value |
| --- | --- |
| Cuboid | 34.94 |
| Cylinder | 91.39 |
| Sector cylinder | 69.89 |
+
+The membership degrees corresponding to the three shapes according to the membership function are shown in Table 11:
+
+Table 11 Competency assessment membership scale
+
| Evaluation index | Cuboid | Cylinder | Sector cylinder |
| --- | --- | --- | --- |
| Rain erosion resistance | 0.278 | 0.348 | 0.374 |
| Wave current erosion resistance | 0.178 | 0.466 | 0.356 |
+
+This determines the fuzzy relation matrix:
+
+$$
+R = \left[ \begin{array}{l l l} 0.278 & 0.348 & 0.374 \\ 0.178 & 0.466 & 0.356 \end{array} \right] \tag {22}
+$$
+
+Since the weight of the project in decision-making is $A = (0.3, 0.7)$ , the comprehensive evaluation obtained is:
+
+$$
+B = A \cdot R = (0.208, 0.430, 0.361) \tag {23}
+$$
+
+It can be seen that the stability of the cuboid, cylinder, and sector cylinder under the combined action of wave erosion and rain erosion is 0.208, 0.430, and 0.361, respectively. By comparison, the cylinder has the highest stability under double erosion.
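The calculation of formulas (19)-(21) with the values from Tables 8 and 10 reduces to a few lines. The sketch below (Python, for illustration only) recomputes the relation matrix (22) and the evaluation vector (23):

```python
# Fuzzy comprehensive evaluation, formulas (19)-(21).
def memberships(values):
    total = sum(values)
    return [v / total for v in values]  # normalize to membership degrees

H1 = [50.6, 63.5, 68.1]     # rain erosion resistance (Table 8)
H2 = [34.94, 91.39, 69.89]  # wave current erosion resistance (Table 10)
A = [0.3, 0.7]              # criterion weights

R = [memberships(H1), memberships(H2)]  # fuzzy relation matrix, formula (22)
B = [sum(a * R[i][j] for i, a in enumerate(A)) for j in range(3)]  # B = A.R

print([round(b, 3) for b in B])  # the cylinder (index 1) scores highest
```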
+
+# 7. Humidity Prediction Model Based on RBF Neural Network Algorithm
+
+# 7.1 Model Overview
+
+Sandcastles are products of mixing sand and water. In order to extend the duration of a sandcastle, the most important factors are its humidity and its protection after construction is completed. After consulting the relevant literature, we know that adhesive protection after construction is also determined by the humidity of the sandcastle. Therefore, when we know the general trend of humidity, we can make the sandcastle last longer. We selected the RBF neural network algorithm as the prediction network, combined with the daily variation of the beach temperature as an input factor, and propose a sandcastle humidity prediction model suitable for extending the sandcastle duration[5].
+
+# 7.2 Model Preparation
+
+We take a typical beach, Venice Beach, as an example. The humidity data from August 1 to 3, 2019 was obtained from the U.S. Meteorological Administration. It is worth mentioning that, because sandy conditions and weather conditions are similar across beaches, the Venice Beach data we selected are quite typical[6].
+
+- Index screening:
+
+The humidity of the beach is the result of many factors, closely related to solar radiation, wind, beach temperature, and ground vegetation. We used principal component analysis and found that temperature changes caused by the sun's light intensity have the most significant effect on beach humidity. Therefore, we explore the change of beach humidity at different times and under different weather conditions.
+
+From the observation data, we found a clear correlation between beach humidity and beach temperature: no matter whether it is sunny or cloudy, as the beach temperature changes over time, the humidity also changes accordingly; however, in sunny weather, humidity changes on the beach are greater than on cloudy days.
+
+In summary, the humidity of the beach is indeed mainly affected by the change of the ambient temperature, and it has a negative correlation with it. We select the ambient temperature at different time periods of the day as the input factor, select 500 input samples, 50 simulated samples, and 50 predicted samples.
+
+# 7.3 RBF neural network algorithm
+
+The RBF neural network algorithm has strong non-linear fitting capability. It not only has good best-approximation performance for complex functions, but also converges quickly.
+
+Assume that the input vector is $X = \left[x_{1},x_{2},\dots ,x_{n}\right]^{T}$ , n is the number of input samples, $W = \left[w_{1},w_{2},\dots ,w_{n}\right]^{T}$ is the output weight vector, $m$ is the number of hidden nodes, $d$ is the offset, $h(X)$ is the network output, and $g(\bullet)$ is the radial basis function. Gaussian functions are usually used:
+
+$$
+g \left(\left\| X - C _ {i} \right\|\right) = \exp \left(- \left\| X - C _ {i} \right\| ^ {2} / \sigma_ {i} ^ {2}\right) \tag {24}
+$$
+
+Where, $\| \bullet \|$ is the Euclidean norm, and $C_i$ is the $i$-th data center in the network. In this case, the output of the neural network is:
+
+$$
+h (X) = d + \sum_ {i = 1} ^ {m} w _ {i} g \left(\left\| X - C _ {i} \right\|\right) \tag {25}
+$$
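As an illustration of formulas (24)-(25), the sketch below builds an exact-interpolation Gaussian RBF network. It is an assumption in Python, not the paper's MATLAB setup: the offset $d$ is set to zero, one hidden node is placed on each sample, and the hour/humidity samples are invented.

```python
import math

def gaussian(r, sigma=1.0):
    return math.exp(-(r ** 2) / sigma ** 2)  # radial basis, formula (24)

def solve(A, y):
    """Tiny Gaussian elimination for the interpolation system A w = y."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def rbf_fit(xs, ys, sigma=1.0):
    """One center per sample; solve for the output weights w_i."""
    A = [[gaussian(abs(x - c), sigma) for c in xs] for x in xs]
    return solve(A, ys)

def rbf_eval(x, centers, w, sigma=1.0):
    return sum(wi * gaussian(abs(x - c), sigma)
               for wi, c in zip(w, centers))  # network output, formula (25)

# Invented humidity-vs-hour samples (illustrative, not the paper's data):
hours = [7.0, 9.0, 11.0, 13.0, 15.0]
humid = [0.80, 0.70, 0.55, 0.45, 0.60]
w = rbf_fit(hours, humid, sigma=2.0)
print(round(rbf_eval(13.0, hours, w, sigma=2.0), 2))  # reproduces the sample
```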
+
+Through the RBF neural network algorithm, we obtained the humidity change map under sunny and cloudy conditions. Starting from 7:00 am, samples were taken every 30 minutes, as shown in the figure below:
+
+
+FIG 7. Humidity forecast curves for sunny and cloudy days
+
+As can be seen from the figure, we should apply the adhesive in a timely manner at 13:30 on a cloudy day and at 13:00 on a sunny day. Since the adhesive can maintain its viscosity for at least 8 hours, the sandcastle remains protected from when the humidity starts to rise around 15:00 until it reaches basic stability at 21:00. By predicting the change in humidity, we can protect the sandcastle in time and ensure the longest lasting time.
+
+# 8. Sensitivity Analysis
+
+# 8.1 Sensitivity Analysis of the Optimal Shape Model Based on the Sandcastle Foundation Dynamics Model
+
+It should be noted that different people may select the maximum damage coefficient differently. If the value of $\mu$ changes, the maximum duration corresponding to each shape will change, and the trend of this change is complex and hard to determine. Therefore, we need to explore whether the result (the cylinder has the strongest endurance) is stable for different $\mu$ , to prove the stability of the model. Here we repeatedly vary the $\mu$ value and check whether the cylinder remains the best shape. The results are as follows:
+
+
+FIG 8. Judging results
+
+As shown in the figure, with $\Delta \mu = 0.04$ as the sampling interval, we take 24 sample points around $\mu = 0.52$ for analysis. Through graphical analysis, when $\mu \in (0.08,0.96)$ , the cylinder is still the optimal shape; when $\mu \notin (0.08,0.96)$ , the optimal shape becomes a sector cylinder or cuboid. Therefore, we believe that our model is stable for any maximum bearable damage factor $\mu \in (0.08,0.96)$ . In other words, our model is very robust.
+
+# 8.2 Sensitivity Analysis of Optimal Water-Sand Mixing Model
+
+Based on the equation relating the water-sand polymerization degree to the water-sand ratio, and by limiting the allowable degree of polymerization, we obtained a reasonable range of the water-sand ratio. However, the relationship between the degree of polymerization and the water-sand ratio is affected by the two parameters $\gamma, \gamma'$ . If the values of $\gamma, \gamma'$ are not accurate, the results of the model could be questionable. Therefore, we repeatedly adjust the values of $\gamma, \gamma'$ , as shown in FIG 9, and judge the robustness of the model by calculating the error of the model result.
+
+
+FIG 9. Degree of polymerization curve
+
+As shown in the figure, based on the original coefficients (initial parameters $\gamma = 1.56, \gamma' = 2.86$ ), we increase and decrease the data by $1\%$ and $2\%$ respectively, obtaining the polymerization degree vs. water-sand ratio curves in five cases. Based on this, we apply Model 2 again and, combining its initial results, calculate the error table for the three shapes as follows:
+
+Table 12 Errors under perturbed $(\gamma, \gamma')$
+
| $(\gamma, \gamma')$ | Cuboid error/% | Cylinder error/% | Sector cylinder error/% |
| --- | --- | --- | --- |
| (1.58, 2.88) | 2.3 | 3.9 | 3.6 |
| (1.59, 2.89) | 4.9 | 4.9 | 5.0 |
| (1.54, 2.84) | 3.1 | 3.3 | 2.9 |
| (1.53, 2.83) | 3.8 | 4.8 | 4.6 |
+
+From the above table, when $\gamma, \gamma'$ increase or decrease by $1\%$ and $2\%$ , the final error for all shapes stays within $5\%$ . Therefore, our model remains robust when $\gamma, \gamma'$ carry some error.
+
+# 8.3 Sensitivity Analysis of A Fuzzy Comprehensive Evaluation Model Based on the Ability to Resist Rain Erosion
+
+In order to make the fuzzy comprehensive evaluation model valid, we must ensure that the fuzzy comprehensive evaluation index (resistance to rainwater erosion and resistance to wave erosion) is accurate and reliable. In determining the values of the two capabilities, we have introduced three positive definite coefficients $\kappa_{1},\kappa_{2},\kappa_{3}$ to assist the calculation. However, since the values of the three positive definite coefficients are not very reliable, we continuously adjust the values of the three positive definite coefficients to determine whether the cylinder has always been the optimal shape and thus to judge the stability of the model. The judgment results are as follows:
+
+
+FIG 10. Stability judgment
+
+By observing the figure, we clearly know that when $\kappa_{1}$ increases to $130\%$ of its original value and above, the cylinder is no longer the optimal shape (the sector cylinder becomes optimal). When $\kappa_{2}$ is reduced to $75\%$ and below, the cylinder is no longer the optimal shape (the cuboid becomes optimal). When $\kappa_{3}$ fluctuates within $30\%$ of its original value, the cylinder is still the optimal shape. Therefore, when the three positive coefficients fluctuate within $25\%$ of their original values, the result of our model does not change, showing strong robustness.
+
+# 8.4 Sensitivity Analysis of Sandcastle warning protection Model
+
+The advantage of the sandcastle humidity prediction model is that it can predict the trend graph of beach humidity in advance and remediate sandcastles in time. If the error between the actual value and the predicted value of the beach humidity does not exceed $10\%$ , we consider that the sandcastle humidity prediction model based on the RBF neural network algorithm is highly reliable.
+
+The simulation was performed using MATLAB. First, we choose the mean square error to be 0.02 and the expansion speed of the radial basis function to be 0.8. Then we input the samples to build the RBF neural network and simulate the model. Finally, the average relative error between the predicted value and the actual value of the beach humidity is $4.32\%$ . The comparison chart between the predicted value and the actual value, fitted using ORIGIN, is shown in FIG 11, which illustrates that the sandcastle humidity prediction model based on the RBF neural network algorithm can predict the basic humidity trend.
+
+
+FIG 11. Actual vs. predicted curve
+
+# 9. Evaluation and Promotion of Model
+
+# 9.1 Strength and Weakness
+
+# 9.1.1 Strengths
+
+$\spadesuit$ We have fully considered the erosion of sandcastles caused by waves and tides and established a sand-bearing capability indicator.
+$\spadesuit$ Considering the change in the bottom area and height of the sandcastle foundation during the movement, we create a sandcastle damage index to quantify the sandcastle damage reasonably.
+$\spadesuit$ Based on some solutions obtained from the sandcastle dynamics simulation, the discrete global optimization method based on the successive descent has been used to obtain the global optimal solution.
+$\spadesuit$ The equation of the relationship between water-sand polymerization degree and water-sand ratio is introduced. The reasonable water-sand ratio range is obtained by limiting the allowable polymerization degree range.
+$\spadesuit$ On the issue of rain erosion, we put forward the collapse limit coefficient of sandcastle specifically. Considering the ability of resisting rain infiltration and the evacuation ratio of sandcastle structure comprehensively, we established an evaluation model to determine the stability of the selected shape.
+$\spadesuit$ Because the RBF neural network overcomes the slow convergence and local minimization of other neural network algorithms, when designing the sandcastle early warning protection mechanism, we chose the RBF neural network model to predict when to apply adhesive to extend the duration of the sandcastle.
+
+$\spadesuit$ When using the RBF neural network algorithm, we selected input factors based on the changing characteristics of the ambient temperature to accurately and reasonably predict the daily humidity trend, thereby effectively predicting the humidity of the sandcastle and extending its duration.
+
+# 9.1.2 Weaknesses
+
+$\spadesuit$ We did not consider three-dimensional shapes other than cuboids, elliptical cylinders, and sector cylinders, and only selected the three types of shapes most commonly used in sandcastles.
+$\spadesuit$ We only have considered sandcastles under various water erosion conditions, but not the impact of other factors in nature.
+$\spadesuit$ The accuracy of some selected coefficients from other literatures may affect some final conclusions.
+$\spadesuit$ We only have considered the situation when the waves and tides are relatively calm. When the waves and tides significantly fluctuate, the sandcastle might not collapse in the way we have expected.
+
+# 9.2 Promotion
+
+Considering the impact of water erosion on sandcastle alone, our model is very robust. Therefore, our model is suitable when the quantitative factors of water erosion fluctuate within a certain range. However, under objective natural conditions, the factors affecting the stability of sandcastles may not be limited to water erosion (such as wind erosion, light, etc.). Therefore, our strategy may need to be slightly modified to cope with the sandcastle foundation under different natural and objective conditions. Further, based on the original model, we tried to establish the basic dynamics equation of the sandcastle considering the combined effects of wind erosion and light, making the solution of the problem more practical. In addition, we believe that the final choice of the cylinder is the most stable of the three proposed shapes, but the problem is that the shape is diverse and there is no guarantee that the cylinder will still be the best of all shapes. Therefore, it is necessary to solve the problem of limited shape being considered.
+
+We have two potential solutions to address that only three shapes have been investigated:
+
+1. Perform cluster analysis on multiple shapes to find more sandcastle foundations that are applicable to actual situations.
+2. Introduce the proportion of base area and height, have more shapes under consideration and find out the best three-dimensional aspect ratio of each shape.
+
+# 10. Conclusions
+
+A detailed strategy to extend the duration of the sandcastle foundation was provided in this paper. Firstly, the most stable shape of the sandcastle affected by both waves and tides was found to be the cylinder; according to the optimal shape model based on the dynamics equation of the sandcastle foundation, it showed the highest resistance to wave current erosion among all the considered shapes. Secondly, the optimum water-sand ratio was worked out to be 0.739 by taking the water-sand polymerization degree into account. Then, the cylinder was confirmed to be the most stable shape by fuzzy comprehensive evaluation, in which both the rain erosion resistance and the wave current erosion resistance of the sandcastle were taken into account. Finally, a humidity prediction model of the sandcastle was proposed based on the RBF neural network, and the daily humidity trend was predicted accurately by the model. When the predicted humidity reaches its limit, adhesive should be applied in time to keep the bonding effect and stabilize the sandcastle foundation.
+
+# References
+
+[1] Cui Jie. Research on two-dimensional mathematical model of sediment under the action of wave and current [D]. Tianjin University, 2014.
+[2] Yang Yongjian. Several deterministic algorithms for global optimization [D]. Shanghai University, 2005.
+[3] Lu Yehong. Test and simulation of the damage resistance of building roof structure in rainy season [J]. Computer Simulation, 2018, 35 (12): 176-180.
+[4] Yu Gaofeng, Qiu Jinming. Analysis of Teaching Effect Analysis and Optimization of Mathematical Modeling Based on Fuzzy Comprehensive Evaluation—Taking Sanming University as an Example [J]. Journal of Lanzhou University of Arts and Sciences (Natural Science Edition), 2018, 32 (05): 112-115.
+[5] Xu Tongyu, Wang Yan, Zhang Xiaobo, Chen Chunling, Xu Hui, Zhou Yuncheng. Application of RBF neural network to the simulation and prediction of humidity in northern sunlight greenhouse [J]. Journal of Shenyang Agricultural University, 2014, 45 (06): 726-730.
+[6] Wang Chengwu, Guo Songlin, Wang Wei. Research on short-term power load forecasting by improved particle swarm optimization RBF neural network [J]. Electronic Test, 2020 (03): 45-46 + 101.
+
+# Memo
+
+# TO:Fun in the Sun
+
+# Topic: Teach you how to build the longest-lasting sandcastle
+
+Dear Vacation Magazine Editor:
+
+In recent years, the sandcastle related art works have become increasingly popular. After the creation of great sandcastle art works, how to preserve these art works for a long time becomes an obstacle to promoting this type of activities. Our team has developed a complete sandcastle humidity prediction strategy to achieve the purpose of extending the duration of sandcastles. We appreciate this opportunity to introduce you to our strategy.
+
+The most stable shape of a sandcastle under the action of sea waves and tides: In general, the most durable sandcastle can be piled with the optimal size ratio. These sandcastle shapes have the longest duration: the cuboid with an aspect ratio of 1.46, the elliptic cylinder with an aspect ratio of 1.00, and the sector cylinder with a radius-to-arc-length ratio of $0.3\pi$ .
+
+The best mixing ratio of water and sand: sandcastles are products of the mixture of sand and water. When the water and sand ratio is 0.739, the sandcastle has the longest duration.
+
+The most stable shape choice for rainy days: Rainy days are also a huge threat for sandcastles. It has been proven that the cylinder is the most stable and longest lasting sandcastle shape.
+
+Measures to extend the duration of the sandcastle: Sandcastles cannot last long without proper humidity. We have found that the lowest humidity on both sunny and cloudy days occurs at 13:00-13:30 in the afternoon. Adhesive is recommended to enhance the sandcastle duration during this low-humidity period.
+
+The above is the optimal sandcastle shape and humidity prediction system that we have proposed. We are eager to help you build the longest lasting sandcastles. If you have any questions about our solution, please feel free to contact us. Our team members are willing to solve any questions you have.
+
+Regards,
+
+Team: 2010821
+
+March 9, 2020
+
+# Appendix
+
+Table 13 Corresponding duration of Cylinder with different aspect ratios
+
+| Shape factor | Duration | Shape factor | Duration |
| 1.0 | 4.8 | 3.5 | 3 |
| 1.5 | 3.4 | 4.0 | 4.6 |
| 2.0 | 2.5 | 4.5 | 4.7 |
| 2.5 | 4 | 5.0 | 2.0 |
| 3.0 | 4.2 | 5.5 | 3.6 |
+
+Table 14 Corresponding duration of sector Cylinder with different aspect ratios
+
+| Shape factor | Duration | Shape factor | Duration |
| 0.1π | 3.5 | 0.6π | 3.2 |
| 0.2π | 3.6 | 0.7π | 2.8 |
| 0.3π | 3.7 | 0.8π | 2.9 |
| 0.4π | 2 | 0.9π | 3 |
| 0.5π | 1.8 | π | 3.5 |
\ No newline at end of file
diff --git a/MCM/2020/B/2019696/2019696.md b/MCM/2020/B/2019696/2019696.md
new file mode 100644
index 0000000000000000000000000000000000000000..71d7c1e61ffa83f610136c3caee9958171f6078b
--- /dev/null
+++ b/MCM/2020/B/2019696/2019696.md
@@ -0,0 +1,637 @@
+# Optimal 3D Sandcastle Shape under Erosion
+
+# Summary
+
+Waves, tides and rainfall erode the foundation of a sandcastle. Drawing on fluid-structure interaction theory and the characteristics of sand, we establish a sandcastle-erosion model and a sandcastle-rain-erosion model, and solve them using a genetic algorithm, multi-step design, iterative calculation, data analysis and optimization, finally confirming the optimal three-dimensional geometry of a sandcastle to resist erosion under different influence factors.
+
+The effect of sea water on the sandcastle is categorized into two modes: waves and tides. We study a simplified model of fluid-structure interaction. The sand foundation is cut away layer by layer by the crashing of waves, while the tide rises slowly to immerse the sandcastle. A sand layer falls off the sandcastle when it is saturated with water, which reduces the cohesion between sand grains.
+
+As for problem 1, it is reported that resistance to erosion reaches its maximum when the water-faced surface is smooth [4]. Thus, we take the frustum of a cone (including the cylinder but not the cone, since the castle will be built above it) as the initial research object, and investigate the values of the bottom radius $R$, top radius $r$ and height $H$ that maximize the remaining proportion of the foundation $\alpha$ over the same amount of time when the initial volume is $V_{0} = 0.2m^{3}$. Partial derivatives and a genetic algorithm are used to obtain the maximum remaining proportion, $\alpha = 62.4\%$ at $R = 0.64m$, $r = 0.20m$, $H = 0.33m$. For any fixed volume of sand, the optimal shape to resist seawater erosion is a frustum of a cone maintaining the ratio of the above $R$, $r$ and $H$ values. Since the velocity and flux at the water-faced surface are larger, more force is applied there, and the geometry is further optimized as a frustum of an ellipsoidal cone to maximize its resistance to erosion.
+
+As for problem 2, a relationship between the water-to-sand proportion $\beta$ and the thickness of the sand layers cut by waves is established. First, with the shape of the sandcastle fixed as given by problem 1, we regard $\beta$ as a variable and solve for the value of $\beta$ at which $\alpha$ reaches its maximum: $\alpha_{max} = 65.0\%$ at $\beta = 22\%$. Next, since $\beta$ is correlated with the shape of the optimal geometry, we build on problem 1 and the relationship between the length cut under erosion and the rate of water-absorption saturation to solve for $R$, $r$, $H$ and $\beta$ jointly. The result is $R = 0.58m$, $r = 0.23m$, $H = 0.37m$, $\beta = 24\%$, where $\alpha$ reaches its maximum $\alpha_{max} = 68.6\%$. Therefore, the optimal water-to-sand proportion is $24\%$.
+
+Problem 3 is discussed in two cases: the sandcastle is affected by both seawater and rainfall, or the sandcastle is affected only by rainfall. In the first case, the optimal geometry of problem 1 retains $\alpha = 55.7\%$ after erosion, a reduction of $6.7$ percentage points compared with the rain-free case. The geometry model is improved to $R = 0.52m$, $r = 0.16m$, $H = 0.50m$, where $\alpha$ reaches its maximum $\alpha_{max} = 59.2\%$. In the second case, the optimal geometry of problem 1 retains $\alpha = 92.6\%$ after erosion, and $\alpha$ increases as $H$ increases. Thus, in both cases, the optimal geometry of problem 1 is no longer optimal in problem 3.
+
+Four pieces of advice are given in problem 4: increase the initial volume; build the sandcastle away from the sea; add adhesive to the sand-water mixture; build a sand wall around the sandcastle.
+
+Finally, a sensitivity analysis of the coefficient of length cutting $\lambda_{c}$, the coefficient of saturated water absorption $\lambda_{s}$ and the initial volume $V_{0}$ is given.
+
+Key Words: 3D Sandcastle Model; Fluid-Structure Interaction; Optimizing Model;
+
+# Contents
+
+# 1 Introduction
+
+1.1 Background
+1.2 Restatement of the Problem
+
+# 2 Analysis of the problem
+
+2.1 Literature Review
+2.2 Problem Analysis
+
+# 3 Assumptions and Justifications
+
+# 4 Notations
+
+# 5 Model and Solution
+
+5.1 Sandcastle-Erosion Model
+5.2 Solution to Optimal Water-to-Sand Mixture Proportion
+5.3 Sandcastle-Rain-Erosion Model
+
+# 6 Advice on Building a Sandcastle
+
+# 7 Sensitivity Analysis
+
+# 8 Strengths and Weakness
+
+8.1 Strengths
+8.2 Weakness
+
+# Article for Fun in the Sun
+
+# References
+
+# Appendix
+
+# 1 Introduction
+
+# 1.1 Background
+
+The beach is a paradise for children, and the sandcastle is an indispensable part of it. Various kinds of sandcastles are constructed, decorated with roofs, turrets, windows and steps. Every child who has played on a beach knows that dry sand can only pile up loosely, and that sand mixed with too much water will collapse. Thus, what is the golden ratio of sand and water that will keep the sandcastle standing? Naturally, here comes the question of how to make the sandcastle survive longer under the erosion of waves and tides. This article discusses the "perfect sandcastle" in consideration of waves, tides and rainfall.
+
+# 1.2 Restatement of the Problem
+
+Considering the background information and restricted conditions identified in the problem statement, we address the following issues:
+
+$\succ$ Problem 1. On the premise of using the same kind of sand, of similar quality and with the same water-to-sand proportion, establish a mathematical model to determine the geometry that remains mostly intact for the longest time under the influence of waves and tides.
+$\succ$ Problem 2. Use the model of Problem 1 to solve for the most suitable water-to-sand proportion to make the foundation last longer.
+$\succ$ Problem 3. Change the mathematical model of the first question to determine whether the geometry solved above is still optimal when rainfall conditions are considered.
+$\succ$ Problem 4. Give advice on how to make sandcastles last longer.
+
+# 2 Analysis of the problem
+
+# 2.1 Literature Review
+
+The problem of building a sandcastle for the longest duration is essentially one of wet granular materials. Torsten, G. and David, M. modelled and measured the cohesion in wet granular materials using a Cohesive Discrete Element Method (CDEM) [1]. Nowak, S., Samadani, A. and Kudrolli, A. used a frictionless liquid-bridge model to observe the dependence of the stability angle on parameters of the system, such as system size and surface tension [2].
+
+As for the physical properties of granular media mixed with a wetting liquid, Hornbaker, D., Albert, R., Albert, I. et al. found that nanometer-scale layers of liquid on millimeter-scale grains can sharply change the properties of granular media, and can even cause new physical phenomena not found in dry materials [3]. The ratio of raw materials is important in industrial production, because it determines the physical properties of the product. Emiroğlu, M., Yalama, A., & Erdogdu, Y. proved by experiment that ready-mixed clay plasters produced with different clay/sand proportions differ greatly in their physical properties [4].
+
+Concerning the bond between sand grains and water in a certain proportion, Kudrolli, A. states that the inherent cohesion is highly dependent on the volume fraction [5], which explains the importance of the water-to-sand ratio in building a sandcastle. As for the reason a sandcastle collapses, Thomas C. Halsey and Alex J. Levine state that failure occurs in the bulk of the sandpile rather than on the surface [6], which mainly applies to the dry sandpile. Finally, Pakpour, M., Habibi, M., Møller, P., and Bonn, D. gave a model for the maximum height of a wet sand column when considering the stability of a sandcastle [7].
+
+# 2.2 Problem Analysis
+
+Fluid-structure interaction is a branch of mechanics that studies the interaction between fluids and solids. We simplify the model involved in this theory and only study the one-way interaction from fluid to solid.
+
+Problem 1: The goal of problem 1 is to establish the geometry of the sandcastle with the longest duration under the erosion of waves and tides, with the constraints that the sandcastles have 1) roughly the same volume, 2) the same raw material, 3) the same water-to-sand ratio and 4) roughly the same distance from the water source (that is, from the erosion source). To simplify and solve problem 1, a sandcastle-erosion model with geometric parameters as independent variables is established to find the most erosion-resistant geometry.
+
+Problem 2: The goal of problem 2 is to investigate the optimal water-to-sand mixture proportion that makes the sandcastle most resistant. In addition to the constraints of problem 1, and regardless of changes in the geometry of the sandcastle, problem 2 requires that the raw materials of the sandcastle be nothing but sand and water. To solve problem 2, the sandcastle-erosion model of problem 1 is modified to take the water-to-sand mixture proportion as the independent variable, and an optimization algorithm is used to solve for its optimal value.
+
+Problem 3: The goal of problem 3 is to investigate the geometry of the sandcastle that remains intact for the longest time under the erosion of waves and tides together with the effect of rainfall. The solution is to establish a sandcastle-rain-erosion model based on the sandcastle-erosion model of problem 1, adding rainfall factors to the influence of waves and tides. Problem 3 can then be solved with the same algorithm as problem 1.
+
+Problem 4: Problem 4 asks for advice to increase the survival time of sandcastles. The solution method is to discuss the model results of problems 1-3 to obtain useful advice. Besides, theoretical analysis based on literature, sandcastle structures and common sense will also be conducted in this article.
+
+# 3 Assumptions and Justifications
+
+- The initial volume of the sandcastle before the erosion process begins is large enough that all the sandcastles of different shapes which we investigate have positive volumes at the end of the observation period. If some sandcastles have zero volume after the erosion process (that is, are completely destroyed), it is impossible to compare the influence of shape and geometric parameters on the stability of sandcastles in the modelling process.
+- Sand falls off the sandcastle simply because the sand is saturated with water and a tiny external force is applied to it. When there is no liquid between the sand grains, cohesion is negligible, which explains why dry sand cannot form sandcastles. At small volume fractions, liquid bridges which induce cohesion between grains are formed. At higher volume fractions, large contiguous wet clusters form. However, when the volume fraction exceeds a threshold value, cohesion becomes negligible again [5], which is the main reason sand falls off.
+- The crash of the waves causes the sand to absorb water more quickly than it would if it were simply immersed in seawater. Since the initial velocity is not zero when the waves hit the outer surface of the sandcastle, while some of the water carried in the waves will penetrate into the sandcastle, the crashing process can be regarded as the speed of water penetration into the sandcastle is accelerated.
+- The effectiveness of seawater erosion on the sandcastle is not correlated with time. That is, the volume reduction caused by the water that waves drive into the sandcastle is ignored. Compared with the sandcastle, the volume of sea water is extremely large, so the loss of water caused by each wave crash can be ignored, which explains why the effectiveness of seawater erosion on the sandcastle is not affected by time.
+
+- Ignore crash effects caused by the tide on the sandcastle. According to tidal conditions in different sea areas around the world [8], sea surfaces rise and fall at a relatively steady and slow speed. Only the penetrating influence of the tide on the sandcastle is considered.
+- Ignore the evaporation of water in the sand. Since the water-to-sand mixture proportion plays an important role in many circumstances, ignoring evaporation removes interference and lets us reach our goals more directly.
+- The sandcastle stands on a horizontal beach. If the sandcastle stood on a slope, calculations such as the saturation time for the water-positive surface and the top surface would become far more complicated, causing unnecessary trouble when solving our core question.
+
+# 4 Notations
+
+| Definitions | Descriptions |
| α | Remaining proportion of the sandcastle |
| β | Water-to-sand mixture proportion |
| λc | Coefficient of length cutting |
| λs | Coefficient of saturated water absorption |
| νt | Velocity that tide rises and falls |
| he | Maximum height of tide |
| htide | Height of tide |
| li | Length cut in different conditions |
| te | Time when sea level reaches its peak at the first time |
| tf | Time when sea level reaches the top of sandcastle at the first time |
| F | Cohesion between sand grains |
| H | Height of sandcastle |
| Lp | Saturated water absorption at the bottom of water-positive surface |
| Ln | Saturated water absorption at the bottom of water-negative surface |
| Lt | Saturated water absorption at the top surface |
| V0 | Initial volume of the sandcastle |
| r | Radius of top surface of a frustum-shaped sandcastle |
| R | Radius of bottom surface of a frustum-shaped sandcastle |
| k1, k2, k3 | Constants |
+
+# 5 Model and Solution
+
+# 5.1 Sandcastle-Erosion Model
+
+It is reported that resistance to erosion reaches its maximum when the water-faced surface is smooth [4]. That means a shape with corners is more easily eroded than a cylinder. Thus, we take the frustum of a cone (including the cylinder but not the cone, since the castle will be built on top) as the initial research object. In this section, the sandcastle-erosion model is discussed in detail, and a concrete calculation method to evaluate the stability of sandcastles is proposed to obtain the optimal geometry required in problem 1.
+
+In order to more concisely and accurately quantify the effect of erosion on the sandcastle, the following two-dimensional surfaces of sandcastle are defined:
+
+Water-positive surface: the surface of the sandcastle facing the sea, that is, the curved surface of the sandcastle that is visible from the direction of the sea.
+
+Water-positive section: the maximum section of sandcastle visible from the direction of the sea.
+
+Water-negative surface: the surface of the sandcastle back to the sea, that is, the curved surface of the sandcastle that is invisible from the direction of the sea.
+
+Top surface: the surface that is parallel to the bottom surface.
+
+Whole surface: the surface that involves water-positive surface, water-negative surface and top surface.
+
+
+
+
+
+
+Figure 1 The sketch of each position (the yellow part)
+
+
+
+According to tidal data from various places in the world [8], the variation of tide height is rather obvious over 24 hours. In order to study the influence of tidal change on the sandcastle, the following model takes the tidal data of Dandong, China on March 6, 2020 as an example. Meanwhile, to avoid an excessive tidal impact that would leave the sandcastle immersed in the sea for most of the observation time, the model chooses to build the sandcastle at a height of $200\mathrm{cm}$, and the maximum height of the sandcastle is required to be less than $1\mathrm{m}$, so as to reflect the erosion effect of both tide and waves.
+
+The goal of problem 1 is to seek the sandcastle that has the longest duration under the erosion of waves and tides. The duration is defined as the time it takes for the volume of the sandcastle to decrease (the surface sand falls off the main body) to a certain value, which is equivalent to the maximum volume retained by the sandcastle under the same erosion of waves and tides.
+
+
+Figure 2 Tidal data from Dandong, China on March 6, 2020
+
+Assuming that the wave height is small enough, when the sea level is below 200 centimeters the sandcastle can be regarded as unaffected by the tide and waves, since it is built at a height of 200 centimeters. According to the tidal table, the erosion of the sandcastle is studied only from 18:00 to 24:00. It is also assumed that the initial volume $V_{0}$ of the sandcastle is large enough that all sandcastles maintain a positive volume after four hours of erosion, while the overall shape of the sandcastle remains largely unchanged after the erosion.
+
+Due to the slow rising speed of the tide (approximately $50cm/h$), the crashing impact of the tide on the sandcastle is neglected and only its immersing impact is considered. In contrast to the long period of the tides, waves crash rather rapidly (approximately 10 times/min), which is regarded as a continuous impact on the sandcastle. Since the wave height is assumed to be small relative to the height of the sandcastle, while the waves hit the sandcastle continuously and the sea level rises at a constant speed, the length of time that every part of the water-positive surface is impacted remains the same.
+
+Assume $t_0$ represents the length of time that waves of the same height crash on each point of the water-positive surface as the sea level rises at a constant speed. In a fixed interval of time $t_0$, the length cut by waves from the water-positive sand surface is denoted $l_1$, which is proportional to the ratio of the area of the water-positive section $S_1'$ to the area of the water-positive surface $S_1$, that is
+
+$$
+l_{1} = \lambda_{c} \times \frac{S_{1}^{\prime}}{S_{1}} \tag{1}
+$$
+
+Among them,
+
+$\lambda_{c}$ represents the coefficient of length cutting, which is correlated with the water-to-sand mixture proportion and is treated as a constant in problem 1.
+
+To simplify the calculation, it is assumed that after the water-positive surface is crashed, the sand that should have fallen off remains saturated and slightly attached to the main body of the sandcastle.
+
+The following discussion is based on the initial height of 200 centimeters, and the initial time is taken as the moment the sea level reaches this height. As the tide rises and falls at a constant speed, the height of the tide $h_{\text{tide}}$ satisfies:
+
+$$
+h _ {t i d e} = \left\{ \begin{array}{l l} \nu_ {t} \times t & t \leqslant t _ {e} \\ h _ {e} - \nu_ {t} \times (t - t _ {e}) & t > t _ {e} \end{array} \right. \tag {2}
+$$
+
+Among them,
+
+$\nu_{t}$ represents the velocity that tide rises and falls,
+
+$h_e$ represents maximum height of tides,
+
+$t_e$ represents the time at which the sea level reaches its peak.
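As a sanity check, the piecewise tide height of equation (2) can be coded directly. The numbers below (a 50 cm/h rise peaking after 3 hours) are illustrative assumptions, not the Dandong tide table:

```python
def tide_height(t, v_t, t_e):
    """Piecewise tide height of equation (2): linear rise until t_e, then linear fall."""
    h_e = v_t * t_e  # the peak height h_e is reached exactly at t = t_e
    if t <= t_e:
        return v_t * t
    return h_e - v_t * (t - t_e)

# Illustrative values: the tide rises at 50 cm/h and peaks after 3 h.
v_t, t_e = 50.0, 3.0
print(tide_height(1.5, v_t, t_e), tide_height(4.5, v_t, t_e))
```

By the symmetry of the rise and fall, the two printed heights coincide.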
+
+Let $l_{2}$ denote the depth to which immersion causes water to penetrate vertically from the whole surface of the sand after time $t$, that is,
+
+$$
+l _ {2} = \lambda_ {s} \times t \tag {3}
+$$
+
+Among them,
+
+$\lambda_{s}$ represents coefficient of saturated water absorption, which is correlated to water-to-sand mixture proportion and considered a constant in problem 1.
+
+$\lambda_{c} = 9.8\times 10^{-2}$ and $\lambda_{s} = 7.0\times 10^{-6}$ are assumed [7].
+
+The following is a schematic of the reduction of sandcastle:
+
+
+Figure 3 The schematic of the reduction of sandcastle
+
+- The outermost green section is the part that becomes saturated as the tide height increases. Since the water absorption of different parts reaches saturation at the same rate, the lower part is in contact with the seawater for a longer time and absorbs more, while the upper part absorbs less.
+- The purple section is the sand layer on the water-positive surface cut by the waves. Since the waves are assumed to impact the sandcastle continuously, the wave height is rather small, and the sea level rises slowly at a constant speed, the lengths cut by the waves from the upper and lower parts remain approximately the same.
+- Considering that sand piles of the same volume have different heights while the maximum height of the rising tide $h_e$ is fixed, a sandcastle of small height located in the same position will be fully submerged. The blue section represents the length of water-absorption saturation. When the whole sandcastle is immersed in the sea, the water-absorption length of each surface of the sandcastle is the same.
+- The green section of the inner part is the saturated water absorption length of the sandcastle at falling tide, while the purple section of the inner part is the length of the sandcastle cut by waves during the falling tide.
+- The red part is the remaining sandcastle after a whole period of rising and falling tide.
+
+The segmentation process is shown as follows
+
+
+Figure 4 The segmentation process
+
+Assume $t_f$ stands for the time at which the sea level rises just above the sandcastle (a rise in height equal to $H$, where $H$ represents the height of the sandcastle).
+
+- Step 1: From time $t = 0$ to $t = t_f$, as the sea level rises from exactly touching the bottom of the sandcastle to exactly reaching its top, the length of the water-positive surface cut by waves plus the length over which the lowest level absorbs water to saturation is
+
+$$
+l _ {1} + l _ {2} \left(t _ {f}\right) \tag {4}
+$$
+
+- Step 2 (completely immersed): from time $t = t_f$ to $t = t_e$, as the sea level rises from the top of the sandcastle to the peak water level, the length over which the lowest level of the water-positive surface absorbs water to saturation is
+
+$$
+l _ {2} \left(t _ {e} - t _ {f}\right) \tag {5}
+$$
+
+- Step 3 (completely immersed): from time $t = t_e$ to $t = t_e + (t_e - t_f)$, as the sea level falls from the peak water level to the top of the sandcastle, the length over which the lowest level of the water-positive surface absorbs water to saturation is
+
+$$
+l _ {2} \left(t _ {e} - t _ {f}\right) \tag {6}
+$$
+
+- Step 4: from time $t = t_e + (t_e - t_f)$ to $t = 2t_e$, as the sea level falls from the top of the sandcastle to losing contact with it entirely, the length of the water-positive surface cut by waves plus the length over which the lowest level absorbs water to saturation is
+
+$$
+l _ {1} + l _ {2} \left(t _ {f}\right) \tag {7}
+$$
+
+In the whole process, the saturated water absorption length $L_{p}$ at the bottom of the water-positive surface is the sum of equations (4)~(7), that is:
+
+$$
+L _ {p} = 2 \times l _ {1} + 2 \times l _ {2} \left(t _ {e}\right) \tag {8}
+$$
+
+In a similar way, the saturated water absorption length $L_{n}$ at the bottom of the water-negative surface is described as:
+
+$$
+L _ {n} = 2 \times l _ {2} \left(t _ {e}\right) \tag {9}
+$$
+
+The saturated water absorption length $L_{t}$ at the top surface is described as
+
+$$
+L _ {t} = 2 \times l _ {2} \left(t _ {e} - t _ {f}\right) \tag {10}
+$$
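Equations (8)-(10) can be collected in a small helper. This is a sketch of the bookkeeping only; `l1`, `lam_s`, `t_e` and `t_f` must be supplied from equations (1), (3) and the tide table:

```python
def saturated_lengths(l1, lam_s, t_e, t_f):
    """Total eroded/saturated depths of equations (8)-(10).

    l1    : thickness cut by waves per pass, equation (1)
    lam_s : coefficient of saturated water absorption, so l2(t) = lam_s * t
    t_e   : time at which the sea level peaks
    t_f   : time at which the sea level first reaches the top of the sandcastle
    """
    l2 = lambda t: lam_s * t
    L_p = 2 * l1 + 2 * l2(t_e)   # water-positive bottom: cut twice, soaked for the whole cycle
    L_n = 2 * l2(t_e)            # water-negative bottom: soaked only, never cut
    L_t = 2 * l2(t_e - t_f)      # top surface: soaked only while fully submerged
    return L_p, L_n, L_t
```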
+
+The radius $R'$ of the bottom of the sandcastle after erosion is
+
+$$
+R ^ {\prime} = \frac {2 R - L _ {p} - L _ {n}}{2} \tag {11}
+$$
+
+The radius $r'$ of the top of the sandcastle after erosion is
+
+$$
+r ^ {\prime} \approx \frac {2 r - \left[ 2 \times l _ {1} + 2 \times l _ {2} \left(t _ {e} - t _ {f}\right) \right]}{2} = r - \left[ l _ {1} + l _ {2} \left(t _ {e} - t _ {f}\right) \right] \tag {12}
+$$
+
+The height $H'$ of the sandcastle after erosion is
+
+$$
+H ^ {\prime} = H - L _ {t} \tag {13}
+$$
+
+As the initial volume of sandcastle $V_{0} = 0.2m^{3}$ is constant,
+
+$$
+V _ {0} = \frac {1}{3} \pi H \left(R ^ {2} + r ^ {2} + R \times r\right) \tag {14}
+$$
+
+To maximize the remaining proportion $\alpha$ , that is,
+
+$$
+\begin{array}{l} \max \alpha = \frac{\frac{1}{3}\pi \times H^{\prime} \times \left(R^{\prime 2} + r^{\prime 2} + R^{\prime} \times r^{\prime}\right)}{V_{0}} \times 100\% \\ s.t.\ V_{0} = \frac{1}{3}\pi H\left(R^{2} + r^{2} + R \times r\right) \tag{15} \\ 0 < H < 0.75 \\ 0 \leqslant r \leqslant R \end{array}
+$$
+
+Genetic algorithm toolbox in MATLAB and partial derivative method are used to solve the problem. Results are as follows:
+
+
+Figure 5 The result of sandcastle-erosion model
+
+That is, at the value of $V_{0} = 0.2m^{3}$ , $R = 0.64m$ , $r = 0.20m$ , $H = 0.33m$ , remaining proportion $\alpha$ takes the maximum value $\alpha_{max} = 62.4\%$ .
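The optimization (15) can be reproduced in outline with a coarse grid search standing in for the MATLAB genetic algorithm toolbox. This is a hedged sketch: the surface ratio $S_1'/S_1$ for the frustum and the tide timing below are our own illustrative assumptions, so the numbers it prints are not the paper's calibrated results:

```python
import math

LAM_C, LAM_S = 9.8e-2, 7.0e-6   # cutting / absorption coefficients [7]
V0 = 0.2                        # initial volume, m^3
V_T = 0.5 / 3600.0              # assumed tide speed: 50 cm/h in m/s
T_E = 3 * 3600.0                # assumed time until the tide peaks, s

def alpha(R, r):
    """Remaining proportion after one tide cycle for a frustum (R, r)."""
    denom = R * R + r * r + R * r
    H = 3 * V0 / (math.pi * denom)          # volume constraint, equation (14)
    if not (0 < H < 0.75 and 0 <= r <= R):
        return -1.0                         # infeasible under (15)
    slant = math.hypot(H, R - r)
    l1 = LAM_C * 2 * H / (math.pi * slant)  # equation (1) with an assumed S1'/S1
    t_f = min(H / V_T, T_E)                 # time for the sea to top the castle
    l2_peak = LAM_S * T_E                   # soak depth over a half cycle
    l2_top = LAM_S * (T_E - t_f)            # soak depth while fully submerged
    R2 = R - l1 - 2 * l2_peak               # equations (8), (9), (11)
    r2 = r - l1 - l2_top                    # equation (12)
    H2 = H - 2 * l2_top                     # equations (10), (13)
    if min(R2, r2, H2) <= 0:
        return 0.0                          # the sandcastle is destroyed
    return math.pi / 3 * H2 * (R2 * R2 + r2 * r2 + R2 * r2) / V0

best = max(((alpha(R / 100, r / 100), R / 100, r / 100)
            for R in range(20, 101) for r in range(0, 101)), key=lambda x: x[0])
print(best)  # (alpha_max, R, r) under these illustrative assumptions
```

A finer search, or a real genetic algorithm, would refine the optimum; the sketch only shows how the constraint (14) eliminates $H$ and leaves a two-variable search over $(R, r)$.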
+
+The shape and proportion of the optimal 3D shape are shown in figure 6.
+
+
+Figure 6 the optimal 3D model
+
+Because the velocity and flux at the water-positive surface are larger, more force is applied on the water-positive surface, while less force is applied on the water-negative surface and the sides of the geometry. In this respect, the model can be further optimized into a frustum of an ellipsoidal cone to make the water-positive side more stable. The optimized geometric model is eroded proportionally from four directions to maximize the resistance of the geometry. The optimized three-dimensional geometry model is shown as follows (the blue one):
+
+
+Figure 7 The optimized three-dimensional geometry model
+
+# 5.2 Solution to Optimal Water-to-Sand Mixture Proportion
+
+As discussed in the problem analysis, the goal of problem 2 is to investigate the optimal water-to-sand mixture proportion that makes the sandcastle most resistant. The water-to-sand proportion $\beta$ is defined as the volume of water divided by the volume of the sandcastle (based on the hypothesis that the volume of the sandcastle does not increase when it absorbs water).
+
+When $\beta$ is small, the cohesion between sand grains is weak, while cohesion increases sharply as $\beta$ increases. The cohesion of sand is considered to disappear quickly when the sand is saturated with water.
+
+It is assumed that $F$ and $\beta$ have a relationship of
+
+$$
+F = k _ {1} \times \beta^ {2} \tag {16}
+$$
+
+within a certain range. Among them, $k_{1}$ is a constant.
+
+Cohesion $F$ must have a lower bound to ensure the sandcastle does not collapse without the influence of water. Correspondingly, $\beta$ must have a lower bound $\beta_{min} \approx 10\%$ [2]. Thus, the range of $\beta$ should be
+
+$$
+\beta \in \left(\beta_ {\min }, \beta_ {\max }\right) \tag {17}
+$$
+
+Among them, $\beta_{max}$ represents the water-to-sand mixture proportion when the whole sandcastle is saturated with water; it is reported that $\beta_{max} \approx 30\%$ [2]. Assume $t_0$ represents the crashing time of waves on every point of the water-positive surface as the sea level rises at a constant speed. The thickness $T$ cut by the waves and the cohesion $F$ have an inverse relationship over a fixed duration $t_0$, that is,
+
+$$
+T = \frac {k _ {2}}{F} \tag {18}
+$$
+
+Among them, $k_{2}$ is a constant. Thus, coefficient of length cutting $\lambda_{c}$ and $T$ have a relationship of
+
+$$
+\lambda_{c} = T \tag{19}
+$$
+
+According to equations (1), (16)~(19), it is concluded that the length $l_{1}$ cut perpendicularly from the water-positive surface and the water-to-sand mixture proportion $\beta$ satisfy
+
+$$
+l _ {1} = \frac {k _ {2}}{k _ {1} \times \beta^ {2}} \times \frac {S _ {1} ^ {\prime}}{S _ {1}} \tag {20}
+$$
+
+Among them, $S_{1}$ represents the area of the water-positive surface, and $S_{1}'$ represents the area of the water-positive section.
+
+When the sand is absorbing water, $\beta$ and the time to reach saturation follow a power-function relationship with exponent $1/2$. Thus, the coefficient of saturated water absorption $\lambda_{s}$ and the water-to-sand proportion $\beta$ satisfy
+
+$$
+\lambda_ {s} = k _ {3} \times \beta^ {2} \tag {21}
+$$
+
+Among them, $k_{3}$ is a constant. According to equations (3) and (21), it is concluded that
+
+$$
+l _ {2} = k _ {3} \times \beta^ {2} \times t \tag {22}
+$$
+
+It is reported that $k_{2} / k_{1} = 1 \times 10^{-2}$, $k_{3} = 7 \times 10^{-4}$ [3]. Equation (22) indicates that, in the same amount of time, the larger the water-to-sand ratio, the greater the depth of sand that reaches saturation.
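The β-scan of this section can be sketched as below, holding the problem-1 frustum fixed. The ratio $S_1'/S_1$ and the tide times are placeholder assumptions chosen only so that the cutting and soaking terms are of comparable size; the printed optimum is therefore illustrative, not the paper's result:

```python
import math

K21, K3 = 1e-2, 7e-4            # k2/k1 and k3, as quoted above [3]
R, r, H = 0.64, 0.20, 0.33      # the problem-1 frustum
T_E, T_F = 2000.0, 500.0        # assumed tide-peak / submersion times
V0 = math.pi / 3 * H * (R * R + r * r + R * r)
RATIO = 2 * H / (math.pi * math.hypot(H, R - r))  # assumed S1'/S1

def alpha(beta):
    """Remaining proportion of the fixed frustum as a function of beta."""
    l1 = K21 / beta ** 2 * RATIO             # cut thickness, equation (20)
    soak = lambda t: K3 * beta ** 2 * t      # saturated depth, equation (22)
    R2 = R - l1 - 2 * soak(T_E)              # equations (8), (9), (11)
    r2 = r - l1 - soak(T_E - T_F)            # equation (12)
    H2 = H - 2 * soak(T_E - T_F)             # equations (10), (13)
    if min(R2, r2, H2) <= 0:
        return 0.0                           # destroyed within one cycle
    return math.pi / 3 * H2 * (R2 * R2 + r2 * r2 + R2 * r2) / V0

betas = [0.10 + 0.005 * i for i in range(41)]    # scan the range (17)
best_beta = max(betas, key=alpha)
print(best_beta, alpha(best_beta))
```

Too little water (large cut thickness $l_1$) and too much water (fast saturation $l_2$) both lower $\alpha$, so the maximum sits strictly inside the interval (17).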
+
+To simplify calculation, we first calculate the water-to-sand mixture proportion based on the geometry of problem 1.
+
+Results are as follows:
+
+
+Figure 8 The best $\beta$ of the optimal shape in problem 1
+
+Within the limitation of (17), when $\beta$ takes the value $\beta_{1} = 22\%$, the remaining proportion of the sandcastle reaches its maximum, approximately $65.0\%$. The optimal water-to-sand proportion is $22\%$, based on the geometry of problem 1. That is, for every 100 volumes of sand pile (with gaps in the sand), about 22 volumes of water are added to make the sandcastle.
+
+Since the best structure of the foundation is related to the water-to-sand mixture proportion, in the following calculations $\beta$ is treated as an independent variable of the ratio $\alpha$, and its value is computed under the restrictions of (17).
+
+Similarly, the genetic algorithm toolbox of MATLAB and the partial derivative method are used to solve the problem. As a function of three variables $\alpha(R,r,\beta)$ is involved, its graph is inconvenient to show. Only numerical results are given as follows:
+
+Table 1 The result of optimal Water-to-Sand mixture proportion
+
+
+
+Therefore, the optimal water-to-sand proportion is $\beta = 24\%$ when various shapes of the frustum of a cone are considered. For every 100 volumes of sand pile (with gaps in the sand), about 24 volumes of water are added to make the sandcastle; the ratio of water to sand volume is about 1:4 when making the foundation of the sandcastle.
+
+# 5.3 Sandcastle-Rain-Erosion Model
+
+When rainfall is taken into consideration, in order to establish sandcastle-rain-erosion model, problem 3 is discussed in two situations: one is that sandcastle is eroded by both seawater and rainfall; the other is that sandcastle is eroded only by rainfall.
+
+In situation 1, the impact of rainfall and the impact of seawater are superimposed directly; that is, rainfall directly affects the sandcastle being eroded by seawater. Assume raindrops are particles with mass that hit the sandcastle vertically, applying the same force as the waves. Every part of the sandcastle is hit by the raindrops for the same length of time, represented as $t_1$.
+
+Assume $l_{4}$ stands for the length that the raindrops cut vertically from the surface of the sandcastle over a period $t_{1}$, which is proportional to the ratio of $S_{2}'$ to $S_{2}$.
+
+$$
+l _ {4 i} = \lambda_ {c} \times \frac {S _ {2 i} ^ {\prime}}{S _ {2 i}} \tag {23}
+$$
+
+Among them,
+
+$\lambda_{c}$ represents coefficient of length cutting,
+
+$S_{2i}^{\prime}$ represents the area of the vertical projection of top surface or side surface,
+
+$S_{2i}$ represents the area of top surface or side surface,
+
+$i = 1,2$ or 3 represents situation on top surface, water-positive surface or water-negative surface.
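The cut lengths of equation (23) follow directly from the projection ratios of a conical frustum: the top surface projects onto itself (ratio 1), while each slant side face projects with ratio $(R-r)/\sqrt{(R-r)^2+h^2}$. A minimal Python sketch of this computation (the coefficient and dimensions below are illustrative values, not the paper's fitted ones):

```python
import math

def rain_cut_lengths(lam_c, R, r, h):
    """Cut lengths l_4i = lam_c * S'_2i / S_2i (eq. 23) for the three faces.

    i = 1: top surface, whose vertical projection is itself (ratio 1);
    i = 2, 3: the two side faces, whose vertical-projection ratio for a
    frustum slant face is (R - r) / sqrt((R - r)**2 + h**2).
    """
    side_ratio = (R - r) / math.sqrt((R - r) ** 2 + h ** 2)
    return lam_c, lam_c * side_ratio, lam_c * side_ratio

# Illustrative inputs (lam_c and the frustum dimensions are assumed):
l41, l42, l43 = rain_cut_lengths(0.1, 0.64, 0.20, 0.30)
```

Note that the top-surface cut is always the largest, since the side projection ratio is below 1 for any positive height.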
+
+
+Figure 9 The sketch of vertical projection of each face
+
+
+
+To simplify subsequent processing, it is similarly assumed that after the surface sand is eroded by the rain, the part that would have fallen off remains saturated and slightly attached to the main body of the sandcastle. Another schematic of the reduction of the sandcastle is shown below.
+
+
+Figure 10 Another schematic of the reduction of sandcastle
+
+Based on the schematic of problem 1, the yellow section is the part saturated by the rainfall; the angle of inclination of the sandcastle remains the same after the rain erosion.
+
+The remaining height of sandcastle $H_{1}''$ superimposed with rain erosion is
+
+$$
+H _ {1} ^ {\prime \prime} = H ^ {\prime} - l _ {4 1} \tag {24}
+$$
+
+The radius of top surface of sandcastle $r_1''$ superimposed with rain erosion is
+
+$$
+r _ {1} ^ {\prime \prime} = \frac {2 r ^ {\prime} - l _ {4 2} - l _ {4 3}}{2} \tag {25}
+$$
+
+The radius of bottom surface of sandcastle $R_{1}''$ superimposed with rain erosion is
+
+$$
+R _ {1} ^ {\prime \prime} = \frac {2 R ^ {\prime} - l _ {4 2} - l _ {4 3}}{2} \tag {26}
+$$
+
+The remaining proportion $\alpha_{1}^{\prime \prime}$ is then
+
+$$
+\begin{array}{l} \max \alpha_ {1} ^ {\prime \prime} = \frac {\frac {1}{3} \pi \times H _ {1} ^ {\prime \prime} \times \left(R _ {1} ^ {\prime \prime 2} + r _ {1} ^ {\prime \prime 2} + R _ {1} ^ {\prime \prime} \times r _ {1} ^ {\prime \prime}\right)}{V _ {0}} \times 100 \% \\ s. t. V _ {0} = \frac {1}{3} \pi H \left(R ^ {2} + r ^ {2} + R \times r\right) \tag {27} \\ 0 < H < 0. 7 5 \\ 0 \leqslant r \leqslant R \\ \end{array}
+$$
+
+According to the data processed in problem 1 ( $R = 0.64m, r = 0.20m$ ), it is concluded that $\alpha_{1}^{\prime \prime} = 55.7\%$ .
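Equations (24)-(27) can be evaluated directly once the seawater-eroded dimensions and the rain cut lengths are known. The following Python sketch mirrors that computation; all numeric inputs in the example are hypothetical, not the paper's calibrated values:

```python
import math

def frustum_volume(H, R, r):
    """V = (1/3) * pi * H * (R**2 + r**2 + R*r), the frustum volume used throughout."""
    return math.pi * H * (R ** 2 + r ** 2 + R * r) / 3.0

def alpha1(H, R, r, Hp, Rp, rp, l41, l42, l43):
    """Remaining proportion alpha_1'' of eqs (24)-(27): rain cuts applied
    on top of the seawater-eroded dimensions (Hp, Rp, rp)."""
    H1 = Hp - l41                      # eq (24)
    r1 = (2 * rp - l42 - l43) / 2.0    # eq (25)
    R1 = (2 * Rp - l42 - l43) / 2.0    # eq (26)
    return frustum_volume(H1, R1, r1) / frustum_volume(H, R, r)  # eq (27)
```

With zero cut lengths and unchanged dimensions the proportion is exactly 1, which gives a quick consistency check of the formulas.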
+
+Using the algorithm similar to problem 1, results are as follows.
+
+
+Figure 11 The result of Sandcastle-Rain-Erosion model for situation 1
+
+That is, at $V_{0} = 0.2m^{3}$ , $R = 0.52m$ , $r = 0.16m$ , $H = 0.50m$ , the remaining proportion $\alpha_{1}^{\prime \prime}$ takes its maximum value $\alpha_{1max}^{\prime \prime} = 62.4\%$ . At this maximum, the remaining volume of the new three-dimensional geometry is approximately $4\%$ larger than that of the geometry established in problem 1. Thus, the three-dimensional geometry of problem 1 is not the optimum in situation 1 of problem 3.
+
+In situation 2, only the impact of rainfall is considered. The schematic of the reduction of the sandcastle is shown below:
+
+
+Figure 12 The schematic of the reduction of sandcastle
+
+Similarly, the yellow section is the part saturated by the rainfall; the angle of inclination of the sandcastle remains the same after the rain erosion.
+
+The remaining height of sandcastle $H_{2}''$ only with rain erosion is
+
+$$
+H _ {2} ^ {\prime \prime} = H - l _ {4 1} \tag {28}
+$$
+
+The radius of the top surface of sandcastle $r_2''$ only with rain erosion is
+
+$$
+r _ {2} ^ {\prime \prime} = \frac {2 r - l _ {4 2} - l _ {4 3}}{2} \tag {29}
+$$
+
+The radius of the bottom surface of sandcastle $R_{2}^{\prime \prime}$ only with rain erosion is
+
+$$
+R _ {2} ^ {\prime \prime} = \frac {2 R - l _ {4 2} - l _ {4 3}}{2} \tag {30}
+$$
+
+The remaining proportion $\alpha_{2}^{\prime \prime}$ is then
+
+$$
+\begin{array}{l} \max \alpha_ {2} ^ {\prime \prime} = \frac {\frac {1}{3} \pi \times H _ {2} ^ {\prime \prime} \times \left(R _ {2} ^ {\prime \prime 2} + r _ {2} ^ {\prime \prime 2} + R _ {2} ^ {\prime \prime} \times r _ {2} ^ {\prime \prime}\right)}{V _ {0}} \times 100 \% \\ s. t. V _ {0} = \frac {1}{3} \pi H \left(R ^ {2} + r ^ {2} + R \times r\right) \tag {31} \\ 0 < H < 0. 7 5 \\ 0 \leqslant r \leqslant R \\ \end{array}
+$$
+
+The results are as follows.
+
+According to the data processed in problem 1 ( $R = 0.64m, r = 0.20m$ ), it is concluded that $\alpha_{2}^{\prime \prime} = 92.6\%$ . It is observed that $\lim_{H \to +\infty} \alpha_{2}^{\prime \prime} = 100\%$ . As the relationship between the height and the stability of the sandcastle was not considered in the model, the maximisation problem has no finite solution and $\alpha_{2max}^{\prime \prime}$ is unattainable. Nevertheless, it is confirmed that the three-dimensional geometry of problem 1 is not the optimum in this situation.
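The limiting behaviour can be checked numerically by combining the projection ratios of equation (23) with equations (28)-(31): as the height grows, the side cut lengths shrink toward zero and the top cut becomes relatively negligible, so the proportion approaches 100%. A Python sketch (the cutting coefficient is an assumed value):

```python
import math

LAM_C = 0.1  # assumed coefficient of length cutting

def alpha2(H, R, r):
    """Remaining proportion alpha_2'' of eqs (28)-(31), with the cut
    lengths of eq (23): l41 = LAM_C (the top projects onto itself) and
    l42 = l43 = LAM_C * (R - r) / sqrt((R - r)**2 + H**2)."""
    side = LAM_C * (R - r) / math.sqrt((R - r) ** 2 + H ** 2)
    H2, r2, R2 = H - LAM_C, r - side, R - side
    vol = lambda h, a, b: h * (a * a + b * b + a * b)  # common pi/3 factor cancels
    return vol(H2, R2, r2) / vol(H, R, r)

# As the height grows, the remaining proportion climbs toward 1:
ratios = [alpha2(H, 0.64, 0.20) for H in (0.5, 5.0, 50.0)]
```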
+
+# 6 Advice on Building a Sandcastle
+
+■ Advice 1: Increase the initial volume of sandcastles.
+
+When the sandcastle is extremely small, it is difficult to withstand the waves and tides. The results of the sandcastle-erosion model also show that increasing the initial volume $V_{0}$ within a certain range can increase the proportion of remaining sandcastle after the erosion of waves.
+
+
+Figure 13 The relationship between $\alpha_{max}$ and $V_0$
+
+■ Advice 2: Build the sandcastle on a beach far from the sea.
+
+In reality the beach slopes toward the sea. The farther the sandcastle is from the sea and the higher it sits, the less it is affected by the waves. When the distance is far enough, the erosion of the sea can be eliminated completely.
+
+■ Advice 3: Add some adhesive to the sand-water mixture.
+
+The adhesive can enhance the cohesion between sand grains, which helps the surface sand adhere to the main body of the sandcastle. Meanwhile, the adhesive reduces the exposure of surface sand to seawater seeping into the gaps, slowing the rate of saturation to a certain extent. Using adhesive thus reduces the volume the sandcastle loses in the same period.
+
+■ Advice 4: Build a sand wall around the sandcastle.
+
+Kids building sandcastles often dig moats and pile sand walls around them. On the one hand, sand walls absorb the force of the crashing waves; on the other hand, they delay the moment the sandcastle comes into contact with the tide.
+
+# 7 Sensitivity Analysis
+
+As the coefficient of length cutting $\lambda_{c}$ , the coefficient of saturated water absorption $\lambda_{s}$ and the initial volume $V_{0}$ have an important influence on the optimum, a sensitivity analysis is performed. For each of $\lambda_{c}$ , $\lambda_{s}$ and $V_{0}$ , changes of up to $\pm 5\%$ are applied, and the relative influence on the optimal radius of the bottom surface $R$ , radius of the top surface $r$ , height of the sandcastle $H$ and remaining proportion of the sandcastle $\alpha$ is observed.
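The perturbation procedure behind the tables below can be sketched generically: scale one coefficient by $(1+\delta)$, re-run the model, and record the relative change of each output. The sketch uses a toy stand-in function, not the paper's actual erosion optimiser:

```python
def sensitivity(model, params, key, deltas=(-0.05, -0.01, 0.01, 0.05)):
    """Relative change of every model output when params[key] is scaled
    by (1 + delta), mirroring the +/-5% perturbation procedure.

    `model` maps a parameter dict to a dict of outputs."""
    base = model(params)
    table = {}
    for d in deltas:
        p = dict(params)
        p[key] *= (1.0 + d)
        out = model(p)
        table[d] = {k: abs(out[k] - base[k]) / abs(base[k]) for k in base}
    return table

# Toy stand-in for the erosion model (purely illustrative):
toy = lambda p: {"alpha": p["lam_c"] ** 0.5}
table = sensitivity(toy, {"lam_c": 0.1}, "lam_c")
```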
+
+Table 2 The sensitivity analysis of ${\lambda }_{c}$
+
+| Δλc | R | r | H | α |
| +5% | 0.73% | 1.24% | 1.92% | 1.96% |
| +4% | 0.59% | 1.00% | 1.55% | 1.57% |
| +3% | 0.44% | 0.75% | 1.18% | 1.18% |
| +2% | 0.29% | 0.50% | 0.78% | 0.78% |
| +1% | 0.15% | 0.25% | 0.40% | 0.39% |
| -1% | 0.15% | 0.25% | 0.40% | 0.34% |
| -2% | 0.29% | 0.51% | 0.81% | 0.81% |
| -3% | 0.44% | 0.77% | 1.21% | 1.20% |
| -4% | 0.58% | 1.03% | 1.61% | 1.72% |
| -5% | 0.73% | 1.29% | 1.74% | 2.01% |
+
+Table 3 The sensitivity analysis of ${\lambda }_{s}$
+
+| Δλs | R | r | H | α |
| +5% | 0.50% | 1.05% | 1.54% | 3.50% |
| +4% | 0.42% | 0.83% | 1.31% | 2.60% |
| +3% | 0.31% | 0.63% | 0.96% | 1.93% |
| +2% | 0.21% | 0.44% | 0.65% | 1.30% |
| +1% | 0.11% | 0.22% | 0.31% | 0.64% |
| -1% | 0.11% | 0.21% | 0.30% | 0.58% |
| -2% | 0.25% | 0.44% | 0.65% | 1.30% |
| -3% | 0.35% | 0.69% | 1.04% | 2.00% |
| -4% | 0.48% | 0.92% | 1.32% | 2.73% |
| -5% | 0.55% | 1.11% | 1.60% | 3.44% |
+
+Table 4 The sensitivity analysis of ${\mathrm{V}}_{0}$
+
+| ΔV0 | R | r | H | α |
| +5% | 1.60% | 1.58% | 1.76% | 1.83% |
| +4% | 1.27% | 1.27% | 1.45% | 1.52% |
| +3% | 0.94% | 0.93% | 1.09% | 1.15% |
| +2% | 0.63% | 0.62% | 0.74% | 0.78% |
| +1% | 0.32% | 0.31% | 0.37% | 0.39% |
| -1% | 0.32% | 0.30% | 0.38% | 0.38% |
| -2% | 0.65% | 0.63% | 0.74% | 0.81% |
| -3% | 0.94% | 0.93% | 1.10% | 1.19% |
| -4% | 1.30% | 1.31% | 1.48% | 1.61% |
| -5% | 1.62% | 1.61% | 1.79% | 2.02% |
+
+The results show that perturbations of the coefficients $\lambda_{c}$ , $\lambda_{s}$ and the initial volume $V_{0}$ produce only small relative changes in the outputs, so the model is stable and the error of the final results is within an acceptable range. The results, including the optimal sandcastle shape under the influence of seawater (sandcastle-erosion model), the optimal water-to-sand proportion, and the optimal sandcastle shape under the combined influence of rainfall and seawater (sandcastle-rain-erosion model), are therefore reliable.
+
+# 8 Strengths and Weaknesses
+
+# 8.1 Strengths
+
+- In this article, the erosion effects of waves, tides and rainfall on sand grains are analyzed based on the related principles of fluid-structure interaction. The effect of seawater on sandcastle is simplified as the wave crashing to cut sand layer and the tide immersing to make the sand become saturated with water and be on the verge of falling off. More attention is paid to the main aspects of the solid-liquid interaction, and the tedious microscopic analysis is avoided.
+- Sandcastle-erosion model simulates the scenario in which an ordinary family builds a sandcastle on the beach, based on the assumption that the amount of sand is appropriate, so that people can directly use the results of this model to build the foundation of sandcastle in their real lives.
+- Combining the partial-derivative method with the MATLAB genetic algorithm toolbox greatly reduces the computation time while ensuring the accuracy of the results.
+
+# 8.2 Weaknesses
+
+- To facilitate both calculation and real construction, only relatively regular three-dimensional geometries are considered; the ignored geometries, such as the frustum of an elliptic cone or an eccentric frustum of a cone, may resist erosion better.
+- It is assumed that the waves exert the same impact on every point of the water-positive surface of the sandcastle. In reality, the thickness of the sand layer cut by the waves at the front is larger than at the sides.
+- The saturated water absorption rate differs between kinds of sand, and so does the minimum water-to-sand proportion that keeps a sandcastle stable. Since sands vary widely, the estimates in this article may deviate considerably for other sands.
+
+# Article for Fun in the Sun
+
+# How to build a perfect sandcastle?
+
+Have you ever tried to build a sandcastle at the beach? Of course, I have. However, the 'sandcastle' I built was nothing but a shapeless mound of sand decorated with shells and other objects. I believe many people wonder how to build a perfect sandcastle. And if the tide rises, have you ever thought about what shape it should be to better resist the erosion of the sea?
+
+Some people may think that building a sandcastle is just a mechanical building game, like building a house; others may consider it a game of combining different shapes. Although the game may seem easy, it actually involves the mechanics of the interaction between fluid and solid. Specialized research in this area is used in bridge construction and in the navigation and aviation industries. Now let's talk about how to use this knowledge to build a perfect sandcastle!
+
+First, before explaining the best method, we need to understand why a sandcastle can stand at all. Kids who have played on a beach know that dry sand creates nothing useful. Sand becomes sticky only when it is mixed with water, because the water disperses into small droplets that fill the gaps between the grains of sand. The bridges of water droplets between the grains create an attraction called "surface tension" between the connected grains, and it is this attraction that makes the grains difficult to separate. (If you have never noticed surface tension, fill a glass slightly above its brim; the rounded bulge of water at the top is held together by surface tension.)
+
+Now you know the reason a sandcastle stands. Anyone who has built one knows that the first step is to build a large foundation on which the more elaborate sandcastle rests. The foundation is always crucial, because you never know when the waves will rise and destroy your hard-built sandcastle. So how do we build a properly shaped foundation that best resists these effects? Don't worry: our team provides the answers with mathematical models.
+
+In order to find the most suitable shape for building the sandcastle foundation, taking the factors that threaten the sandcastle into account, such as frequent and rapid waves and gradually rising tides, we built the sandcastle-erosion model. To ensure the accuracy of the model results, we used the same kind of sand and mixed it with water in the same proportion.
+
+By simulating and comparing the residual states of sandcastles of different shapes, at the same volume, under different erosion conditions, we finally obtain the best geometry for the sandcastle foundation. It is a round platform with a height of about $30~\mathrm{cm}$ and a bottom radius of about $64~\mathrm{cm}$ . Make it a little narrower at the top, with a top radius of about $20~\mathrm{cm}$ . When building the foundation, you can make the side facing the waves steeper to enhance its stability.
+
+Since the ratio of sand to water is important to the success of a sandcastle, this golden ratio also concerns sandcastle builders. Using the sandcastle-erosion model, we determine the water-to-sand proportion that makes the foundation the most stable: about $22\%$ . In other words, a foundation mixed with about five parts of sand to one part of water is the most stable.
+
+People usually choose the beach on a sunny day, but the weather is unpredictable and sudden rain is common. Considering the factor of rainfall, we improved the previous sandcastle-erosion model into the sandcastle-rain-erosion model to find the most stable geometry on rainy days. With this model we found that the sandcastle most resistant to erosion on sunny days does not show the same strength on rainy days, though the basic shape is not very different: the result is again a round platform, with a bottom radius of about $52~\mathrm{cm}$ , a top radius of about $16~\mathrm{cm}$ and a height of about $50~\mathrm{cm}$ . Our model also shows that sandcastles break down faster on rainy days.
+
+How's that? After looking at the results of our models, I believe you can also build a perfect sandcastle. Try with your friends and family next time!
+
+# References
+
+[1] Gröger, T., Tüzün, U., & Heyes, D. M. (2003). Modelling and measuring of cohesion in wet granular materials. Powder Technology, 133(1-3), 203–215.
+[2] Nowak, S., Samadani, A., & Kudrolli, A. (2005). Maximum angle of stability of a wet granular pile. Nature Physics, 1(1), 50-52.
+[3] Hornbaker, D. J., Albert, R., Albert, I., Barabási, A.-L., & Schiffer, P. (1997). What keeps sandcastles standing? Nature, 387(6635), 765-765.
+[4] Emiroğlu, M., Yalama, A., & Erdogdu, Y. (2015). Performance of ready-mixed clay plasters produced with different clay/sand ratios. Applied Clay Science, 115, 221-229.
+[5] Kudrolli, A. (2008). Sticky sand. Nature Materials, 7(3), 174-175.
+[6] Halsey, T. C., & Levine, A. J. (1998). How Sandcastles Fall. Physical Review Letters, 80(14), 3141-3144.
+[7] Pakpour, M., Habibi, M., Møller, P., & Bonn, D. (2012). How to construct the perfect sandcastle. Scientific Reports, 2(1).
+[8] National Marine Data Information Center. [DB/OL]. https://www.cnss.com.cn/tide/2020.
+
+# Appendix
+
+1. The partial derivative of problem 1
+clear;clc
+v=0.2;
+a1=0.1;
+a2=2*10^(-5.5);
+b=1/6000;
+te=7200;
+syms R r;
+h=3*v/(pi*(R^2+r^2+R*r));
+tf=3*v/(b*pi*(R^2+r^2+R*r));
+hfinal=h-2*a2*(te-tf);
+s21=2*h/(pi*sqrt((R-r)^2+h^2));
+l1=a1*s21;
+L1=2*l1+2*a2*te;
+L2=2*a2*te;
+Rfinal=(2*R-L1-L2)/2;
+rfinal=r-(l1+a2*(te-tf));
+a=pi*hfinal*(Rfinal^2+rfinal^2+Rfinal*rfinal)/(3*v);
+f1=diff(a,R);
+f2=diff(a,r);
+[R,r]=solve(f1==0,f2==0,R,r)
+
+2. The partial derivative of problem 2
+clear;clc
+v=0.2;
+b=1/6000;
+te=7200;
+R=0.65;
+r=0.2;
+dc=2*10^(-2);
+e=2*10^(-4);
+syms beita;
+a1=dc*beita^(-2);
+a2=e*beita^2;
+h=3*v/(pi*(R^2+r^2+R*r));
+tf=3*v/(b*pi*(R^2+r^2+R*r));
+hfinal=h-2*a2*(te-tf);
+s21=2*v/(pi*sqrt((R-r)^2*(R^2+r^2+R*r)^2+v^2));
+l1=a1*s21;
+L1=2*l1+2*a2*te;
+L2=2*a2*te;
+Rfinal=(2*R-L1-L2)/2;
+rfinal=r-(l1+a2*(te-tf));
+a=pi*hfinal*(Rfinal^2+rfinal^2+Rfinal*rfinal)/(3*v);
+f=diff(a,beita);
+solve(f==0,beita)
+
+3. The partial derivative of problem 3
+clear;clc
+v=0.2;
+a1=0.1;
+a2=2*10^(-5.5);
+b=1/6000;
+te=7200;
+syms R r;
+h=3*v/(pi*(R^2+r^2+R*r));
+tf=3*v/(b*pi*(R^2+r^2+R*r));
+s21=2*h/(pi*sqrt((R-r)^2+h^2));
+l1=a1*s21;
+L1=2*l1+2*a2*te;
+L2=2*a2*te;
+l41=a1;
+l42=a1*(R-r)/sqrt((R-r)^2+h^2);
+l43=a1*(R-r)/sqrt((R-r)^2+h^2);
+hfinal_1=h-2*a2*(te-tf)-l41;
+Rfinal_1=(2*R-L1-L2-l42-l43)/2;
+rfinal_1=(2*r-2*(l1+a2*(te-tf))-l42-l43)/2;
+a=pi*hfinal_1*(Rfinal_1^2+rfinal_1^2+Rfinal_1*rfinal_1)/(3*v);
+f1=diff(a,R);
+f2=diff(a,r);
+[R,r]=vpasolve(f1==0,f2==0,R,r)
+# Riddle of Sphinx: Cracking the Secret of Amazon's Ratings and Reviews
+
+Summary
+
+We have witnessed the rise of mass online marketplaces. Amazon, one of the biggest online platforms, is worth around \$915 billion. Guided by its customer-obsession principle, it gives customers the opportunity to rate products from 1 to 5 stars. Moreover, buyers can submit a text-based message, namely a review, to express their feelings about the products. The massive data of these ratings and reviews offer a wealth of information waiting to be mined. Analysis of text-based messages or rating-based values has received wide attention, yet no existing method serves as a combination of both, especially for the case of an online marketplace.
+
+To address the above-mentioned challenge, we propose a novel CE-VADER hybrid model for sentiment analysis of reviews, classifying messages into five groups: strong positive, weak positive, moderate, weak negative and strong negative. Empirical results indicate that the proposed five-group classification correlates well with the five-star rating system. Then a state-of-the-art informative evaluation model is proposed as the combination of the text-based and rating-based measures. We pick out the $1\%$ most informative reviews and ratings of each product to evaluate the properties and propose sales strategies.
+
+We propose the "reputation" rate, based on a difference equation model, to evaluate the reputation of a product. We then employ an Auto-Regression (AR) model as the time-series forecasting method to predict the future "reputation" rate and the potential success or failure of each product. The AR model shows high accuracy on the validation set, with a maximum Root Mean Square Error (RMSE) of 0.131. Pacifiers have a good reputation and are predicted to succeed, while microwaves and hair dryers have bad reputations and are predicted to fail. The results correlate with the proportions of continuous five-star or one-star rating sequences. Lastly, we analyze specific words and descriptors to find their correlation with the ratings.
+
+Based on our empirical results, we propose several sales strategies and recommendations for the online marketplace, e.g., the timing of introducing products into the market and targeted adjustments according to star ratings. We write a letter to the marketing director of Sunshine Company summarizing our analysis and results, together with our recommendations.
+
+Our framework shows strong accuracy and robustness, and can easily be applied to other data with our source code.
+
+Keywords: Text-Based Measure, Informative Text Selection, Reputation Quantification, Sales Strategy Formation.
+
+# Riddle of Sphinx: Cracking the Secret of Amazon's Ratings and Reviews
+
+March 9, 2020
+
+# Contents
+
+1 Introduction
+2 Assumptions and Notations
+
+2.1 Assumptions
+2.2 Notations
+
+3 Informative Evaluation Model
+
+3.1 Vector Encoding Forms of Star Ratings
+3.2 Contextual Entropy VADER Hybrid Model for Text-Based Measures
+
+3.2.1 Manually Annotating the Seed Word
+3.2.2 Contextual Entropy Block (CE)
+3.2.3 VADER Block
+3.2.4 Proposed CE-VADER for Sentiment Analysis
+
+3.3 Combination of Text-Based and Rating-Based Measures
+3.4 Model Implementation, Sensitivity Analysis and Results
+
+4 Difference Equation to Measure Time-Based Pattern
+
+4.1 Difference Equation Based Model
+4.2 Model Implementation, Sensitivity Analysis and Results
+
+5 Predict Potential Success or Failure
+
+5.1 Time Series Forecasting for Predicting Future Reputation
+5.2 Evaluating the Success or Failure Potential
+5.3 Model Implementation and Results
+
+6 Specific Ratings and Descriptors Analysis
+
+6.1 Specific Star Ratings' Relevance to Rating Frequency
+6.2 Specific Quality Descriptors' Relevance to Rating Levels
+
+6.2.1 Naive Bayesian Model for Evaluation
+6.2.2 Model Implementation and Results
+
+7 Attractiveness Analysis of Design Features
+8 Sales Strategies and Recommendations
+9 Strengths and Weaknesses
+
+9.1 Strengths
+9.2 Weaknesses
+
+10 Conclusion
+11 A Letter to the Marketing Director of Sunshine Company
+
+Appendices
+Appendix A Annotated Seed Words and Frequency
+Appendix B The Number of Keyword Occurrences in Different Keyword Groups
+Appendix C Top $1 \%$ Most Informative Ratings and Reviews
+Appendix D Source Code for VADER Sentiment Analysis
+Appendix E Source Code for Informative Algorithm
+Appendix F Source Code for Reputation Calculation
+Appendix G Source Code for Bayes Model
+Appendix H Source Code for Time Series Prediction
+Appendix I Source Code for Wordcloud Picture
+
+# 1 Introduction
+
+Our society has witnessed the rise of many online marketplaces, with a total worldwide market value of 4.3 trillion dollars [1]. One salient feature of an online marketplace, compared with traditional platforms, is its massive volume of review texts and ratings. Among them all, Amazon has received the most attention, owing to its great success [1]. Amazon also provides customers with the chance to freely express their feelings and rate the products they have purchased.
+
+Previous work [2] indicates that customers largely refer to reviews and ratings before buying a product on these platforms, and platforms can adjust their sales strategies by monitoring these comments. Hence, ratings and reviews both provide references for other potential buyers and massive data for analyzing customer demand, which helps develop adaptive strategies. By making full use of these data, we can achieve a win-win situation for both the buyers and the platform.
+
+One of the biggest challenges is the complexity and diversity of review texts [3, 4]. In this paper, we propose a novel sentiment analysis model as the text-based measure to address this issue, and develop a series of models combining text-based, rating-based, and time-based measures to pick out the most informative ratings and reviews to track. We also construct a novel evaluation framework to quantify the reputation of each product and predict its potential success or failure. Then, we analyze the correlations among sequences of identical star ratings, word descriptors and the reputation of the products. We implement our models on a real data set covering three types of products: pacifiers, microwaves, and hair dryers.
+
+Researchers have pointed out the necessity to study when and how the online platforms should adjust their marketing communication strategy in response to consumer reviews or ratings [5]. We propose several sales strategies and recommendations in this paper based on our analysis and results.
+
+The rest of the paper is organized as follows. In section 2, we list the main assumptions of the model construction and introduce the notation used frequently in this paper. In section 3, a novel Informative Evaluation Model is proposed; it hybridizes the state-of-the-art CE [6] and VADER [7] models for sentiment analysis of review text. We then propose the "importance" rate as a combination of the text-based measure (i.e., our proposed CE-VADER model) and the rating-based measures (i.e., the star rating and the helpful votes) to indicate how informative each review and rating are. To the best of our knowledge, we are the first to propose such a hybrid review-text-based sentiment analysis model. In section 4, we employ a difference equation model as the backbone to measure the time pattern of each product; moreover, the "reputation" rate is proposed in that section to measure the growth or decline of reputation. In section 5, we employ an Auto-Regression (AR) model to predict the change of reputation over the future time domain and propose a fuzzy system to predict the potential success or failure of each product. More details about the results of our models on the given data can be found in sections 6, 7 and 8. The strengths and weaknesses of the proposed models and framework are discussed in section 9. We conclude in section 10. All source code is attached in Appendices D-I and can easily be applied to other data sets.
+
+# 2 Assumptions and Notations
+
+# 2.1 Assumptions
+
+To simplify our model and eliminate complexity, we make the following main assumptions in this paper. Each assumption will be re-emphasized where it is used in the construction of our model.
+
+Assumption 1. The online marketplace operates stably, and there are no situations, such as the outbreak of an epidemic, that would seriously affect the production chain of online shopping.
+
+Assumption 2. The ratings and reviews depict customers' real experience and feeling about their purchased products. The sentiment in the review text reflects one's feelings on the products.
+
+Assumption 3. The vast majority of individual differences of customers e.g., economic status and educational level, are ignored.
+
+Assumption 4. It takes some time to ship a product, and some customers prefer to write reviews some time after receiving their purchases.
+
+Assumption 5. Consumers pay more attention to the negative comments e.g., low-star rating or negative reviews when purchasing the products.
+
+# 2.2 Notations
+
+In this work, we use the nomenclature of Table 1 in the model construction. Other, less frequently used symbols will be introduced where they are used.
+
+Table 1: Notations used in this literature
+
+| Symbol | Definition | Type |
| $id$ | Review id | String |
| $s_{id}$ | Star rating; subscript is its associated review id | Scalar |
| $hv_{id}$ | Helpful votes; subscript is its associated review id | Scalar |
| $R_{id}$ | Review text; subscript is its associated review id | String |
| $rd_{id}$ | Review date; subscript is its associated review id | Date |
| $\mathbb{VEC}$ | Vector encoding of the star rating | Mapping |
| $\mathbb{INT}$ | Vector encoding of intensity relevant to 5-class seed words | Mapping |
| $\mathbb{IMP}$ | Importance rate of a review and its associated rating | Mapping |
| $\mathbb{REP}$ | Reputation rate of a product at some time | Mapping |
+
+# 3 Informative Evaluation Model
+
+In this section, we propose the "importance" rate to evaluate how informative the review text and star ratings are. The most informative factor we take into account is the sentiment of the review text, and we propose a CE-VADER model to address its sentiment analysis. Our model classifies each text into five groups, strong positive, weak positive, moderate, weak negative and strong negative, consistent with the five-star rating scheme. The proposed "importance" then incorporates the text-based measure and the star rating,
+
+together with their fidelity and correlation; the higher the importance, the more informative the pair is. The rest of the section is arranged as follows. In section 3.1, we convert the integer star rating to vector form. In section 3.2, we propose CE-VADER, a hybrid model for the text-based measures. In section 3.3, we introduce the "importance" to calculate how informative a review and its star rating are together. In section 3.4, we implement our model on the real data set of three types of products to indicate the $1\%$ most informative reviews and star ratings, and analyze the model's sensitivity.
+
+# 3.1 Vector Encoding Forms of Star Ratings
+
+Consumers can freely comment on products on Amazon by rating them from one to five stars after purchasing; a one-star rating indicates the least satisfaction and five stars the highest. The one-to-five star rating by itself is a sufficient measure, but to combine the rating-based measure with the text-based measure discussed in the next section, we convert the star rating to a vector form.
+
+Firstly, we calculate the ratio of each star rating for hair dryers, baby pacifiers, and microwaves from the given data, as shown in Figure 1. We observe that baby pacifiers received the highest percentage of high star ratings, while microwaves received lower ratings. Products with higher technology content also face more quality problems, which is in line with expectations and indicates that star ratings can indeed reflect consumer satisfaction.
+
+We now convert the rating to an equivalent 5-dimensional vector encoding. Denote the star rating as $s \in \{1,2,3,4,5\}$ ; its vector encoding is $\mathbb{V}\mathbb{E}\mathbb{C}(s) = (vec_s^1,\dots ,vec_s^5)^T \in \mathbb{R}^5$ with components defined by:
+
+$$
+v e c _ {s} ^ {i} = \frac {e ^ {- \frac {| i - s | ^ {2}}{2 \sigma_ {0}}}}{\sum_ {j = 1} ^ {5} e ^ {- \frac {| j - s | ^ {2}}{2 \sigma_ {0}}}} \tag {1}
+$$
+
+where $\sigma_0$ is a tunable parameter determining the robustness of our model: the bigger, the more robust. The mapping $\mathbb{V}\mathbb{E}\mathbb{C}$ is one-to-one, hence we claim the converted form is equivalent to the star rating. Moreover, by this definition: i) $s = \arg\max_i \{vec_s^i\}$ , ii) $\sum_{i=1}^{5} vec_s^i = 1$ . Thus $\mathbb{V}\mathbb{E}\mathbb{C}(s)$ is a probabilistic vector whose components represent the probability of each star level; e.g., a 4-star rating has the highest probability mass on 4 stars and the second highest on 3 and 5 stars.
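The encoding and its two stated properties can be reproduced in a few lines. A minimal Python sketch, written as a Gaussian-style weighting so that the rated star receives the largest component (the default $\sigma_0$ below is an assumed value):

```python
import math

def vec_encode(s, sigma0=1.0):
    """Vector encoding VEC(s) of a star rating s in {1,...,5}: component i
    is proportional to exp(-|i - s|**2 / (2 * sigma0)), normalised so the
    five components sum to 1 (sigma0 is the tunable robustness parameter)."""
    w = [math.exp(-abs(i - s) ** 2 / (2.0 * sigma0)) for i in range(1, 6)]
    total = sum(w)
    return [x / total for x in w]

v4 = vec_encode(4)  # encoding of a 4-star rating
```

As claimed, the components sum to 1, the 4-star component dominates, and the 3- and 5-star components tie for second place.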
+
+# 3.2 Contextual Entropy VADER Hybrid Model for Text-Based Measures
+
+In this paper, we construct a novel model for sentiment analysis based on the review text. For the sake of simplicity, the sentiment scored from the text is regarded as the only factor measuring the success or failure of the product, e.g., positive attitudes usually indicate a higher potential for product success while, on the contrary, negative attitudes indicate a higher possibility of product failure.
+
+In this section, we propose a contextual entropy and VADER [7] hybrid model, namely CE-VADER, to address the sentiment analysis challenge on the review text. The model is made up of two blocks: the contextual entropy (CE) block and the VADER block. The CE model has shown high capability in sentiment analysis of stock market news [6], but its limitation is short context, e.g., the reviews in this paper. The VADER model outperforms state-of-the-art natural language models on short online text, but its accuracy depends heavily on the pre-listed lexicon words. Each block of CE-VADER separately outputs a 5-dimensional probabilistic vector whose components represent the probability of belonging to the associated group. After a voting block, our proposed model classifies the review into one of the five groups, together with an intensity showing how strongly it belongs to that group. By hybridizing both models, we show that CE-VADER classifies the review text into five groups consistent with the star rating.
+
+
+Figure 1: Star rating distribution of pacifier, microwave and hair dryer based on the given data.
+
+The rest of this section is arranged as follows. First we show our strategy for generating the seed words for the CE block and expanding the gold-standard list of the VADER block. Then we introduce the CE and VADER blocks respectively. Finally we propose the hybrid CE-VADER model. Our model classifies the review text into five classes: strong positive, weak positive, moderate, weak negative and strong negative, each with an intensity. In the next section, we will propose a review fidelity measure based on the classification results and the intensity.
+
+
+Figure 2: The overall architecture of our proposed CE-VADER model. The model is made up of two blocks, namely the CE block and the VADER block.
+
+# 3.2.1 Manually Annotating the Seed Word
+
+We take $80\%$ of the data as the training set and the remaining $20\%$ as the testing set for evaluation. Sentences in the review bodies of the training set are broken down into separate words, whose frequencies are calculated statistically. High-frequency emotion words are picked out as seed words and manually annotated by us, while low-frequency ones are discarded. The annotator (one of our group members) incorporates his expert natural language processing knowledge to classify all selected emotion words into five groups, i.e., strong positive, weak positive, moderate, weak negative and strong negative. Instead of the coarse two-group annotation of either "positive" or "negative" [6], we further sub-classify each into "strong" and "weak" subgroups and set aside one more group labeled "moderate". The five-group annotation strategy aims to correlate with the one-to-five-star rating score, e.g., "weak positive" maps to the four-star rating. We denote the five groups of seed words as $\mathbf{G}_i$ , with $i = 1, 2, 3, 4, 5$ .
+
+Representative words generated from the training set with "positive" or "negative" labels are depicted in Figure 3 in a light or dark tone respectively. The annotated five-class seed words are attached in Appendix A.
+
+
+Figure 3: Demonstration of some representative seed words. Words annotated as "positive" are colored in a light tone while the "negative" ones are in a dark tone. The bigger the size, the higher the word frequency.
+
+# 3.2.2 Contextual Entropy Block (CE)
+
+The contextual entropy block employs a part-contextual entropy model [6] as its backbone architecture. The part-contextual entropy model considers both the strength of co-occurrence and the contextual distribution between the candidate words (the most representative words from the review text) and the generated seed words.
+
+We employ a vector to encode the strength between a word and its context in the review. To be more specific, the left and right contexts of the $k^{th}$ word $w_{k}$ in the $n$ -word review $R = w_{1}w_{2}\dots w_{k}w_{k + 1}\dots w_{n}$ are $\{w_{1}w_{2}\dots w_{k - 1}\}$ and $\{w_{k + 1}w_{k + 2}\dots w_{n}\}$ respectively. Note that we treat the review as a single target instead of breaking it into sentences as in ref.[6], in consideration of the short contextual style of online comments. The dimension of the vector depends on the length of the review, i.e., $n$ . Denote the vector recording the left context of word $w_{k}$ as $\mathbf{v}^{left}(w_k) = (v_{w_k 1}^{left}, v_{w_k 2}^{left}, \dots, v_{w_k N}^{left})$ and the vector recording the right context as $\mathbf{v}^{right}(w_k) = (v_{w_k 1}^{right}, v_{w_k 2}^{right}, \dots, v_{w_k N}^{right})$ , where the subscript $i$ marks the $i^{\text{th}}$ component of the vector. The value of $v_{w_k i}$ represents the co-occurrence strength between $w_k$ and $w_i$ . For the probability distance discussed below to be valid, all the vectors have to be in probabilistic form. $N \leq n$ is the dimension of the context vector; it counts the number of distinct words in the review $R$ .
+
+The weight $v_{w_k i}$ is formulated by:
+
+$$
+v_{w_{k} i} = \sum_{j: w_{j} = w_{i}} e^{-\frac{|j - k|^{2}}{2 \sigma}} \tag {2}
+$$
+
+where the sum runs over the positions $j$ in the context of $w_k$ at which the word $w_i$ occurs, so both co-occurrence frequency and distance are taken into account. Here we employ a Gaussian function to weight the distance, under the assumption that the closer a word is, the more weight it gains. Then, to make the vector a probabilistic representation, we normalize it by:
+
+$$
+v _ {w _ {k} i} \leftarrow \frac {v _ {w _ {k} i}}{\sum_ {i} v _ {w _ {k} i}} \tag {3}
+$$
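The weighting and normalization of Eqs. (2)-(3) can be sketched as follows; for brevity this builds a single context vector over the whole review rather than separate left and right vectors, and `context_vector` is our own name:

```python
import math
from collections import defaultdict

def context_vector(words, k, sigma=1.0):
    """Probabilistic co-occurrence vector for the word at position k (Eqs. 2-3).

    Each distinct context word accumulates a Gaussian weight of its
    positional distance to position k, then the vector is normalized.
    """
    weights = defaultdict(float)
    for j, w in enumerate(words):
        if j != k:
            weights[w] += math.exp(-abs(j - k) ** 2 / (2 * sigma))
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}
```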
+
+We then convert the candidate words and the seed words into probability vector representations, and employ the Kullback-Leibler (KL) distance [8] (denoted $\mathbb{KL}(\cdot \| \cdot)$ ) to measure the distance between the vector $\mathbf{v}(c_i) = (\mathbf{v}(c_i)^{\text{left}}, \mathbf{v}(c_i)^{\text{right}})$ of the $i^{\text{th}}$ candidate word and $\mathbf{v}(seed_j) = (\mathbf{v}(seed_j)^{\text{left}}, \mathbf{v}(seed_j)^{\text{right}})$ of the $j^{\text{th}}$ seed word, that is:
+
+$$
+\mathbb {D} \left(c _ {i} \| s e e d _ {j}\right) := \mathbb {K L} \left(\mathbf {v} \left(c _ {i}\right) \| \mathbf {v} \left(s e e d _ {j}\right)\right) = \sum_ {k = 1} ^ {N} P \left(d _ {k} \mid c _ {i}\right) \log \frac {P \left(d _ {k} \mid c _ {i}\right)}{P \left(d _ {k} \mid s e e d _ {j}\right)} \tag {4}
+$$
+
+where $P(d_{k}|c_{i})$ and $P(d_{k}|seed_{j})$ denote the probabilistic weights of the $k^{th}$ dimension of the left (or right) context vector of $c_{i}$ and $seed_{j}$ respectively, and $N$ is the dimension of the context probabilistic vector. Due to the asymmetry of the KL distance, i.e., $\mathbb{D}(c_i\| seed_j)\neq \mathbb{D}(seed_j\| c_i)$ , we employ the following symmetric distance (denoted $\mathbb{SD}(\cdot ,\cdot)$ ):
+
+$$
+\mathbb {S} \mathbb {D} (s e e d _ {j}, c _ {i}) = \mathbb {S} \mathbb {D} (c _ {i}, s e e d _ {j}) := \mathbb {D} (c _ {i} \| s e e d _ {j}) + \mathbb {D} (s e e d _ {j} \| c _ {i}) \tag {5}
+$$
+
+Adding $\mathbb{D}(c_i\| seed_j)$ and $\mathbb{D}(seed_j\| c_i)$ not only makes $\mathbb{SD}(\cdot ,\cdot)$ symmetric but also accounts for both the left and right contextual distributions. We then define the similarity (denoted $\mathbb{S}\mathbb{I}(\cdot ,\cdot)$ ) between the candidate word $c_{i}$ and seed word $seed_{j}$ based on the symmetric distance by:
+
+$$
+\mathbb {S I} \left(c _ {i}, s e e d _ {j}\right) := \frac {1}{1 + \mathbb {S D} \left(c _ {i} , s e e d _ {j}\right)} \tag {6}
+$$
+
+$\mathbb{S}\mathbb{I}(c_i, seed_j) \in (0, 1]$ measures the similarity between two words: the bigger, the more similar they are. In particular, $\mathbb{S}\mathbb{I}(c_i, seed_j)$ equals 1 if and only if $c_i = seed_j$ . Having computed the word-level similarities, we define the similarity between the candidate word $c_i$ and one of the five groups $\mathbf{G}_j$ by:
+
+$$
+\mathbb {S I} \left(c _ {i}, \mathbf {G} _ {j}\right) := \frac {1}{\left| \mathbf {G} _ {j} \right|} \sum_ {\text {s e e d} _ {k} \in \mathbf {G} _ {j}} \mathbb {S I} \left(c _ {i}, \text {s e e d} _ {k}\right) \tag {7}
+$$
+
+$\mathbb{S}\mathbb{I}(c_i,\mathbf{G}_j)\in (0,1]$ measures the similarity of candidate word $c_{i}$ to the $j^{th}$ group of the sentimental seed words. The similarity of review $R$ to $\mathbf{G}_j$ is defined by:
+
+$$
+\mathbb {S I} (R, \mathbf {G} _ {j}) := \frac {1}{N} \sum_ {i = 1} ^ {N} \mathbb {S I} (c _ {i}, \mathbf {G} _ {j}) \tag {8}
+$$
+
+$\mathbb{S}\mathbb{I}(R,\mathbf{G}_j)\in (0,1]$ measures the similarity of review $R$ to the $j^{th}$ group of sentimental seed words: the bigger, the more similar. The output of the CE block is a 5-dimensional probabilistic vector $CE(R) = (CE_R^1,\dots ,CE_R^5)^T\in \mathbb{R}^5$ , whose $j^{th}$ component $CE_{R}^{j}$ , the intensity associated with group $\mathbf{G}_j$ , is defined by:
+
+$$
+C E _ {R} ^ {j} = \frac {\mathbb {S I} (R , \mathbf {G} _ {j})}{\sum_ {i = 1} ^ {5} \mathbb {S I} (R , \mathbf {G} _ {i})} \tag {9}
+$$
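Eqs. (4)-(9) can be sketched as below; the `EPS` smoothing constant is our addition, keeping the KL terms finite when a word occurs in only one of the two contexts:

```python
import math

EPS = 1e-12  # smoothing so every KL term stays finite

def kl(p, q):
    """KL distance between two probabilistic dicts (Eq. 4)."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) + EPS) * math.log((p.get(k, 0.0) + EPS) / (q.get(k, 0.0) + EPS))
               for k in keys)

def sym_dist(p, q):
    """Symmetric distance SD (Eq. 5)."""
    return kl(p, q) + kl(q, p)

def similarity(p, q):
    """Word-to-word similarity SI in (0, 1] (Eq. 6)."""
    return 1.0 / (1.0 + sym_dist(p, q))

def ce_output(candidates, groups):
    """5-dim CE intensity vector of a review (Eqs. 7-9).

    candidates: context vectors of the review's candidate words.
    groups: five lists of seed-word context vectors.
    """
    si = []
    for group in groups:
        per_word = [sum(similarity(c, seed) for seed in group) / len(group)
                    for c in candidates]
        si.append(sum(per_word) / len(per_word))  # Eq. 7 averaged as in Eq. 8
    total = sum(si)
    return [x / total for x in si]  # Eq. 9
```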
+
+# 3.2.3 VADER Block
+
+VADER [7] is a simple rule-based model for general sentiment analysis, especially for the social media text style. Based on a manually generated gold-standard list of lexical features [7], VADER does not require any training data, which shows its promising potential to extend to a wide range of sentiment analysis tasks. Hence, in this paper, we extend the gold-standard list of lexical features with our generated seed words to address the sentiment analysis of the review text. Moreover, we also extend the original four-group prediction to five groups, i.e., strong positive, weak positive, moderate, weak negative and strong negative, consistent with our CE block. Readers are referred to ref.[7] for more details about the VADER framework. The extended gold-standard list is attached in Appendix A. The outcome of the VADER block is also a 5-dimensional probabilistic vector whose $j^{th}$ component is the intensity corresponding to the $j^{th}$ seed word group $\mathbf{G}_j$ . Denote the outcome vector of review $R$ as $VADER(R) = (VADER_R^1, \dots, VADER_R^5)^T \in \mathbb{R}^5$ .
+
+# 3.2.4 Proposed CE-VADER for Sentiment Analysis
+
+Our proposed CE-VADER hybrid model is made up of two blocks: the CE block and the VADER block. For a review text $R$ , the two blocks provide 5-dimensional probabilistic vectors $CE(R)$ and $VADER(R)$ , each component being the intensity for its corresponding group $\mathbf{G}_j$ . We employ a smoothed convex linear combination of the two vectors as the final intensity probabilistic vector of our CE-VADER model (denoted $\mathbb{I}\mathbb{N}\mathbb{T}(\cdot)$ ), i.e.,
+
+$$
+\mathbb {I N T} (R) := \operatorname {s o f t m a x} (\lambda C E (R) + (1 - \lambda) V A D E R (R)) \tag {10}
+$$
+
+where $\lambda$ is the fusion coefficient; in this paper we set $\lambda = 0.5$ to weigh the outcomes of the two blocks equally. We employ $\operatorname{softmax}(\cdot)$ as our smoothing function, which is defined by:
+
+$$
+\operatorname {s o f t m a x} \left(x _ {1}, x _ {2}, \dots , x _ {s}\right) = \frac {1}{\sum_ {i = 1} ^ {s} e ^ {x _ {i}}} \left(e ^ {x _ {1}}, e ^ {x _ {2}}, \dots , e ^ {x _ {s}}\right) \tag {11}
+$$
+
+As our empirical results show, after smoothing the intensity vector exhibits a strong consistency with the star rating.
+
+The sentiment classification result of the proposed CE-VADER on the review $R$ is the group name $\mathbf{G}_{j_0}$ associated with the largest component of $\mathbb{INT}(R) = (int_R^1, \dots, int_R^5)$ , formulated by $j_0 = \operatorname{argmax}_j \{int_R^j\}$ , with its corresponding intensity $int_R^{j_0}$ .
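The fusion and classification steps (Eqs. (10)-(11) plus the final argmax) amount to the following sketch; `ce_vader` is our own name:

```python
import math

def softmax(xs):
    """Numerically stable softmax (Eq. 11)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def ce_vader(ce_vec, vader_vec, lam=0.5):
    """Fused intensity vector INT(R) and predicted group index (Eq. 10)."""
    fused = softmax([lam * c + (1 - lam) * v for c, v in zip(ce_vec, vader_vec)])
    j0 = max(range(5), key=lambda j: fused[j])  # argmax_j int^j_R
    return fused, j0 + 1  # groups indexed 1..5
```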
+
+# 3.3 Combination of Text-Based and Rating-Based Measures
+
+The unique review id is denoted as $id$ . Review headline $Rh_{id}$ , review body $Rb_{id}$ , star rating $s_{id}$ , review date $rd_{id}$ , helpful votes $hv_{id}$ , product title $pt_{id}$ , and product id $p_{id} \in \{B, M, H\}$ ( $B, M, H$ stand for baby pacifier, microwave, and hair dryer respectively) are all associated with the subscript review id. Because of the strong correlation between product title, product parent and product id (they differ in less than $0.01\%$ of cases, so given any one of them we can almost uniquely tell the other two), we only use the product title to identify the product in this paper. We do not take the marketplace into account, since all reviews are from the US. Denote the tuple $P(id) = (R_{id}, s_{id}, hv_{id}, rd_{id}, pt_{id}, p_{id})$ , where $R_{id} = (Rh_{id}, Rb_{id})$ is the whole review text including the headline and body.
+
+We propose the importance rate (denoted $\mathbb{IMP}$ ) of each review, taking into account its helpful votes, the correspondence between the star-rating measure $\mathbb{VEC}(s_{id})$ and the text-based measure $\mathbb{INT}(R_{id})$ , and the review text clarity. It is defined by the following formula.
+
+$$
+\mathbb{IMP}(id) := \left(1 + hv_{id}\right) \cdot \exp \left[ -\alpha \left(1 - \frac{\mathbb{INT}\left(R_{id}\right) \cdot \mathbb{VEC}\left(s_{id}\right)}{\| \mathbb{INT}\left(R_{id}\right) \| \| \mathbb{VEC}\left(s_{id}\right) \|}\right) \right] \cdot \exp \left[ \beta \left(\sum_{i = 1}^{5} int_{R_{id}}^{i} \log \left(int_{R_{id}}^{i}\right)\right) \right] \tag {12}
+$$
+
+where $1 - \frac{\mathbb{INT}(R_{id})\cdot\mathbb{VEC}(s_{id})}{\|\mathbb{INT}(R_{id})\|\cdot\|\mathbb{VEC}(s_{id})\|}$ is the cosine distance between the text-based measure and the rating-based measure, quantifying the fidelity of the correspondence between the two, and $\alpha >0$ is its associated weight. $-\sum_{i = 1}^{5}int_{R}^{i}\log (int_{R}^{i})$ is the entropy of the text intensity: the lower, the clearer; $\beta >0$ is its associated weight. The higher the importance, the more informative the review text and the rating are.
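Eq. (12) can be computed directly from the two 5-dimensional vectors; the sketch below uses argument names of our own, with `int_vec` and `vec_vec` standing for $\mathbb{INT}(R_{id})$ and $\mathbb{VEC}(s_{id})$:

```python
import math

def importance(int_vec, vec_vec, helpful_votes, alpha=1.0, beta=1.0):
    """Importance rate IMP of a review (Eq. 12)."""
    dot = sum(a * b for a, b in zip(int_vec, vec_vec))
    norm = math.sqrt(sum(a * a for a in int_vec)) * math.sqrt(sum(b * b for b in vec_vec))
    cosine_dist = 1.0 - dot / norm  # disagreement between text and rating measures
    neg_entropy = sum(p * math.log(p) for p in int_vec if p > 0)  # minus the entropy of INT(R)
    return (1 + helpful_votes) * math.exp(-alpha * cosine_dist) * math.exp(beta * neg_entropy)
```

Importance grows with helpful votes, with agreement between the two measures, and with the clarity (low entropy) of the intensity vector.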
+
+
+Figure 4: Implementation of our proposed model on the "Hair dryer" data set. We demonstrate the distribution of the top $1\%$ most informative reviews and associated ratings generated by different parameters $(\alpha, \beta)$ . The longer the line, the more informative our model indicates the review is. The settings of $(\alpha, \beta)$ are: a)(1,1); b)(1,3); c)(1,5); d)(2,1); e)(4,1); f)(5,5). As depicted in the figure, our model is robust to the two parameters.
+
+# 3.4 Model Implementation, Sensitivity Analysis and Results
+
+We implement our proposed model on the given data. We set the parameter $\sigma_0 = 1$ in Eq.(1). The model proposed in section 3.3 contains two parameters $\alpha$ and $\beta$ ; we first analyze the sensitivity to these two parameters.
+
+As shown in Figure 4, we implemented our proposed model on the "Hair dryer" data set; the top $1\%$ most informative reviews indicated by our model are strongly robust to the two parameters $\alpha$ and $\beta$ . We also employ the DTW similarity to quantify the robustness of our model to the two parameters, as shown in Figure 5. The smaller the DTW similarity, the more similar two rankings are. Readers are referred to ref.[9] for more details about the DTW similarity. We set $\alpha = 1, \beta = 1$ as the baseline and calculate the DTW similarity; the maximum is 14.3, which is a small value given 11470 reviews. Again, our model shows strong robustness to $\alpha$ and $\beta$ .
+
+The top $1\%$ most informative reviews and the associated ratings, listed by their rankings for the three kinds of products, are attached in Appendix C.
+
+
+Figure 5: DTW similarity of our model implemented on the "Hair dryer" data set with different values of $\alpha$ and $\beta$ . We set $\alpha = 1, \beta = 1$ as the baseline. The maximum similarity is 14.3, which is a small value considering our case of 11470 total reviews. It shows the robustness of our model.
+
+# 4 Difference Equation to Measure Time-Based Pattern
+
+In this section, we construct a difference equation based model to formulate the change of a product's reputation. The rest of this section is arranged as follows. In section 4.1, we formulate our model in detail. In section 4.2, we analyze the model's sensitivity and implement it on the three kinds of products.
+
+# 4.1 Difference Equation Based Model
+
+We propose a "reputation" measure, denoted $\mathbb{REP}$ , to quantify the reputation of product $P$ around time $T$ . We assume that the reputation of a product changes gradually with buyers' review text and star ratings, and we employ a difference equation to formulate it. The difference of the reputation (i.e., the growth rate of the reputation) can be formulated as follows.
+
+$$
+\Delta \mathbb{REP}_{t_{0}, \theta}(T, P) := \frac{1}{2Z} \left(\sum_{T - t_{0} \leq rd_{id} \leq T,\, p_{id} = P} \mathbb{IMP}(id) \cdot \left(\theta s_{id} + (1 - \theta) \operatorname{argmax}_{j} \left\{int_{R_{id}}^{j}\right\} - 3\right)\right) \tag {13}
+$$
+
+where $Z$ is the normalization constant defined by:
+
+$$
+Z = \sum_{T - t_{0} \leq rd_{id} \leq T,\, p_{id} = P} \mathbb{IMP}(id) \tag {14}
+$$
+
+For the sake of simplicity, we normalize the value of $\Delta \mathbb{R}\mathbb{E}\mathbb{P}_{t_0,\theta}(T,P)$ to $[-1,1]$ , with negative value associated with the negative feels or one-star and two-star ratings while the positive value with the positive feels or four-star or five-star ratings.
+
+The parameter $\theta\in [0,1]$ is the weight coefficient between the star rating and the text-based measure from our proposed CE-VADER model in section 3.2. We assume that shipping the product takes some time and some customers prefer to review some time after purchasing. Hence, we consider all $id$ satisfying $T - t_{0}\leq rd_{id}\leq T$ , where $t_0$ is a threshold. Note that buyers have about 90 days to leave feedback, hence $t_0\leq 90$ . In this paper, we set $\theta = 0.5$ for equal weighting of the star rating and the review text, and we set $t_0 = 10$ . The reputation can then be formulated by the following difference equation:
+
+$$
+\mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0}, \theta} (T, P) - \mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0}, \theta} (T - 1, P) = \Delta \mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0}, \theta} (T, P) - \mathbb {P} \mathbb {E} \mathbb {N} (T, P) \tag {15}
+$$
+
+where $\mathbb{PEN}(T,P)$ is the penalty factor determining how much low-star ratings or negative reviews damage the reputation, as we assume buyers pay more attention to negative comments. It is formulated as follows.
+
+$$
+\mathbb{PEN}(T, P) = k_{2} \times \operatorname{sigmoid}\left(k_{1} \times \mathbb{REP}_{t_{0}, \theta}(T - 1, P)\right) \times \frac{\#\left\{id \mid T - t_{0} \leq rd_{id} \leq T,\, p_{id} = P,\, \theta s_{id} + (1 - \theta) \operatorname{argmax}_{j}\left\{int_{R_{id}}^{j}\right\} \leq k_{3}\right\}}{\#\left\{id \mid T - t_{0} \leq rd_{id} \leq T,\, p_{id} = P\right\}} \tag {16}
+$$
+
+$k_{1}, k_{2}, k_{3}$ are the parameters of the penalty factor. We set $k_{3} = 2$ as the threshold quantifying "negative comments", so the last fraction in Eq. (16) is the proportion of "negative comments". The sigmoid is defined by $\operatorname{sigmoid}(x) = \frac{1}{1 + e^{-x}} \in (0,1)$ . We introduce the sigmoid term to imitate the social behavior that once a product wins a good reputation, people put more emphasis on the "negative comments". $k_{2}$ scales how large the penalty factor is compared with the growth rate, i.e., $\Delta \mathbb{REP}$ .
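One update step of Eqs. (13)-(16) can be sketched as follows, assuming the reviews dated within $[T - t_0, T]$ have already been gathered into `(imp, star, text_class)` tuples (a representation of our own, with `text_class` the argmax group of $\mathbb{INT}$):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reputation_step(prev_rep, window, theta=0.5, k1=0.5, k2=20.0, k3=2.0):
    """REP(T) from REP(T-1) and the reviews in the window (Eqs. 13-16)."""
    if not window:
        return prev_rep
    z = sum(imp for imp, _, _ in window)                      # Eq. 14
    scores = [theta * s + (1 - theta) * c for _, s, c in window]
    delta = sum(imp * (sc - 3)                                # Eq. 13
                for (imp, _, _), sc in zip(window, scores)) / (2 * z)
    neg_share = sum(sc <= k3 for sc in scores) / len(window)  # proportion of negative comments
    penalty = k2 * sigmoid(k1 * prev_rep) * neg_share         # Eq. 16
    return prev_rep + delta - penalty                         # Eq. 15
```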
+
+# 4.2 Model Implementation, Sensitivity Analysis and Results
+
+As constructed in section 4.1, our model has two tunable parameters, $k_{1}$ and $k_{2}$ . We implement the proposed model on the data of the three types of products and show its sensitivity to these two parameters.
+
+Figure 6 depicts the reputation curves with the parameter settings $k_{1} = 0.5$ and $k_{2} = 20$ . As shown, "microwave" has a bad reputation, indicated by its negative reputation values, while "pacifier" wins a good reputation. None of the products shows a steady or monotonic growth or decline in the reputation rate.
+
+
+Figure 6: The reputation curve of products with $k_{1} = 0.5$ and $k_{2} = 20$ .
+
+We then analyze the sensitivity of our model to the two parameters $k_{1}$ and $k_{2}$ . Our model shows extremely strong robustness to $k_{1}$ and a gentle sensitivity to $k_{2}$ in Figure 7. As depicted in Figure 7-B, different settings of $k_{2}$ lead to different growth rates of the reputation rate on the "pacifier" data, but share the same tendency. However, a small value of $k_{2}$ leads to an exponentially larger reputation growth rate. As discussed before, a small setting of $k_{2}$ indicates less attention to negative comments.
+
+
+Figure 7: Sensitivity analysis of the parameters $k_{1}$ and $k_{2}$ . A) Our model shows extremely strong robustness to $k_{1}$ . B) Our model shows sensitivity to $k_{2}$ .
+
+
+
+# 5 Predict Potential Success or Failure
+
+In the last section, we constructed a mapping $\mathbb{REP}_{t_0,\theta} : [T_0, T_1] \times \{B, M, H\} \to \mathbb{R}$ , where $[T_0, T_1]$ is the time range of the given review dates. In this section, we employ a time series forecasting method to predict the value of $\mathbb{REP}_{t_0,\theta}$ on an extended future domain, namely $[T_1, T_2] \times \{B, M, H\}$ for some $T_2 > T_1$ , in section 5.1. Based on the forecasted $\mathbb{REP}_{t_0,\theta}$ in the future time domain, we evaluate the success or failure potential of each product in section 5.2. We show the results on the real data and predict their potential success or failure in section 5.3.
+
+# 5.1 Time Series Forecasting for Predicting Future Reputation
+
+We employ an autoregressive (AR) model as the time series forecasting method to predict the future growth or decline of the reputation rate. The AR model can be depicted by the following difference equation.
+
+$$
+\mathbb{REP}_{t_{0}, \theta}(T, P) = a_{0} + \sum_{k = 1}^{p} a_{k} \mathbb{REP}_{t_{0}, \theta}(T - k, P) + \varepsilon_{T} \tag {17}
+$$
+
+where $\varepsilon_{T}$ is white noise. The AR model searches for the coefficients $(a_0, a_1, \dots, a_p)$ in Eq.(17) that fit the given data in the time domain $[T_0, T_1]$ with the least root mean square error (RMSE).
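Such a fit can be sketched as an ordinary least-squares problem; the helper names and the NumPy usage below are ours:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares estimate of the AR(p) coefficients (a0, a1, ..., ap) in Eq. (17)."""
    y = np.array(series[p:])
    # Design matrix: intercept column plus the p lagged values for each time step.
    X = np.array([[1.0] + [series[t - k] for k in range(1, p + 1)]
                  for t in range(p, len(series))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_forecast(series, coef, steps):
    """Roll the fitted model forward to forecast future reputation values."""
    p = len(coef) - 1
    hist = list(series)
    for _ in range(steps):
        hist.append(coef[0] + sum(coef[k] * hist[-k] for k in range(1, p + 1)))
    return hist[len(series):]
```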
+
+# 5.2 Evaluating the Success or Failure potential
+
+We define the average reputation of product $P$ from $T_0$ to $T_1$ as:
+
+$$
+\overline {{\mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0} , \theta} ^ {T _ {0} T _ {1}} (P)}} := \frac {1}{T _ {1} - T _ {0}} \int_ {T _ {0}} ^ {T _ {1}} \mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0}, \theta} (T, P) d T \tag {18}
+$$
+
+Similarly, we can define the predicted average reputation of product $P$ in the future time domain $[T_1, t]$ for $t < T_2$ based on the model constructed in section 5.1 as follows.
+
+$$
+\overline {{\mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0} , \theta} ^ {T _ {1} t} (P)}} := \frac {1}{t - T _ {1}} \int_ {T _ {1}} ^ {t} \mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0}, \theta} (T, P) d T \tag {19}
+$$
+
+The ratio of the predicted average reputation from Eq. (19) to the average reputation calculated from the given data in Eq. (18) is the reputation change rate, defined as:
+
+$$
+\gamma (t, P) := \overline {{\mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0} , \theta} ^ {T _ {1} t} (P)}} / \overline {{\mathbb {R} \mathbb {E} \mathbb {P} _ {t _ {0} , \theta} ^ {T _ {0} T _ {1}} (P)}} \tag {20}
+$$
+
+We propose a fuzzy evaluation system for predicting the success or failure of product $P$ . The overall architecture of the system is depicted in Figure 8. We categorize the outcome into four classes, namely "strong success", "weak success", "weak failure", and "strong failure".
+
+# 5.3 Model Implementation and Results
+
+We implement our model on the given data with $p = 50$ in Eq.(17). We use the first $\frac{3}{4}$ of the data in the given time domain $[T_0, T_1]$ , i.e., $[T_0, \frac{3}{4} T_1 + \frac{1}{4} T_0]$ , to fit the regression coefficients $(a_0, a_1, \dots, a_p)$ .
+
+
+Figure 8: The overall architecture of our proposed fuzzy evaluation system for predicting the potential success or the failure of each product.
+
+The latter $\frac{1}{4}$ , i.e., the time domain $\left[\frac{3}{4} T_{1} + \frac{1}{4} T_{0}, T_{1}\right]$ , is used to evaluate the AR model. In the evaluation domain, we find the model fits and predicts the "reputation" rate surprisingly well, with a maximum RMSE of 0.031. We evaluate the potential success or failure on the domain $[T_{1}, T_{2}]$ with $T_{2} - T_{1} = \frac{1}{2}(T_{1} - T_{0})$ . The predicted reputation rates of pacifier, microwave and hair dryer are shown in Figures 9, 10 and 11 respectively. They are predicted to be "weak success", "strong failure" and "strong failure" based on the framework proposed in section 5.2.
+
+
+Figure 9: Model implementation on "pacifier" data. Predicted to be a weak success.
+
+# 6 Specific Ratings and Descriptors Analysis
+
+In this section, we analyze the specific ratings and descriptors respectively in section 6.1 and section 6.2.
+
+
+Figure 10: Model implementation on "microwave" data. Predicted to be a strong failure.
+
+
+Figure 11: Model implementation on "hair dryer" data. Predicted to be a strong failure.
+
+# 6.1 Specific Star Ratings Relevance to Rating Frequency
+
+In this section, we first analyze the relationship between specific star ratings and the reputation rate, i.e., the strong negative case of one-star ratings and the strong positive case of five-star ratings. We employ our proposed reputation rate as the text-based measure. As shown in Figure 12, growth in the five-star rating proportion leads to an instant increase in the "reputation" rate, while after the peak of the five-star rating proportion the reputation rate decays immediately. The growth of the reputation rate is also strongly associated with a decline in the one-star rating proportion.
+
+Then, we evaluate the correlation between star ratings and helpful votes in Figure 13. Once again, the five-star rating shows a much stronger correlation with helpful votes than the other four ratings across all types of data, i.e., baby pacifiers, microwaves and hair dryers.
+
+Moreover, previous research [10] has also pointed out that people perceive extreme online comments (both positive and negative) as more useful information than moderate ones, e.g., the review text classified as "strong positive" or "strong negative" and the one-star or five-star ratings in our setting.
+
+Hence, in this paper, we place our emphasis on the impact of extreme ratings, i.e., the one-star and five-star ratings rather than the two- to four-star ratings, based on the three facts mentioned above.
+
+
+Figure 12: Correlation between the "reputation" rate and the one-star or five-star rating proportion. The blue line indicates the reputation time series, the green line the five-star rating proportion and the orange line the one-star rating proportion. Red circles highlight typical examples where a higher percentage of five-star ratings leads to an instant increase in the reputation rate while one-star ratings lead to a drop.
+
+
+Figure 13: Correlation between the star ratings and the helpful votes. Reviews associated with five-star ratings are always voted as helpful.
+
+By examining the word-frequency patterns of review text for products rated one or five stars, we find that consumers are less likely to purchase products associated with low-star ratings and more willing to buy those with high-star ratings. We take the consecutive occurrence of $x$ reviews with one-star or five-star ratings presented to people as the baseline, with $x = 10$ for the hair dryer and pacifier, and $x = 5$ for the microwave. We set $x = 5$ for the microwave specifically because of its low proportion of five-star ratings and its bad reputation (as discussed in section 4.2) compared with the other two products. Our empirical results show that a smaller $x$ leads to a higher frequency while a bigger one leads to a lower frequency; for the sake of simplicity, we set the moderate values of $x$ mentioned above.
+
+As shown in Table 2, more reviews were incited after the consecutive occurrence of five-star ratings, for products with both good and bad reputations (reputation is formulated by our model in section 4). This suggests that people are much more willing to rate products after seeing a consecutive occurrence of high-star ratings. After the consecutive occurrence of one-star ratings, however, people were less incited to rate products with a bad reputation, i.e., the microwave and hair dryer, while they were strongly incited to rate products with a good reputation.
+
+Table 2: Comparison between the rating frequency after a series of extreme ratings, the overall rating frequency and the average reputation for the baby pacifier, microwave, and hair dryer data. The smaller the frequency, the more frequently buyers rate the products. The consecutive occurrence of five-star ratings always incites people to rate. The consecutive occurrence of one-star ratings leads to a much lower rating frequency for bad-reputation products while strongly inciting people to rate good-reputation products.
+
+| Product | Rating Frequency after a Series of Five-star Ratings (days) | Rating Frequency after a Series of One-star Ratings (days) | Overall Rating Frequency (days) | Reputation |
| Baby pacifier | 1.9444 | 1.56000 | 2.37828 | good |
| Microwave | 8.66667 | 30.61904 | 24.39975 | bad |
| Hair dryer | 1.58207 | 16.37500 | 4.29293 | bad |
+
+Figure 14 shows the total number of monthly star ratings and the number of one-star and five-star ratings when the total number of ratings changed significantly, for the hair dryer from August 31, 2012, to August 31, 2015. We observe that a concentration of five-star ratings is always associated with an increase in the total number of ratings, while a series of one-star ratings leads to a decline.
+
+# 6.2 Specific Quality Descriptors' Relevance to Rating Levels
+
+# 6.2.1 Naive Bayesian Model for Evaluation
+
+Some words are strongly associated with emotional feelings which lead to different rating levels, e.g., "enthusiastic", with a positive and delighted feeling, may lead to higher star ratings, while "disappointed" carries negative feelings. In this section, we employ the Naive Bayesian model to identify how a specific word is associated with the rating levels. The Naive Bayesian probability of star rating $s$ given a specific word $w$ is:
+
+$$
+P (s \mid w) = \frac {P (w \mid s) P (s)}{P (w)} \tag {21}
+$$
+
+By evaluating the probability $P(s \mid w)$, we can measure the correlation between a specific word $w$ and star rating $s$.
+
+
+Figure 14: A representative example showing the correlation between the numbers of one-star or five-star ratings and the overall star ratings on the hair dryer data from August 31, 2012 to August 31, 2015.
+
+# 6.2.2 Model Implementation and Results
+
+The higher $P(s \mid w)$, the stronger the correlation between word $w$ and star rating $s$. Given the massive data, we use frequencies to approximate the probabilities in Eq. (21), e.g., $P(s) = \frac{\#\{id \mid s_{id} = s\}}{\#\{id\}}$. Table 3 lists the top ten words associated with five-star and one-star ratings, respectively.
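The frequency approximation of Eq. (21) can be sketched directly with counts: among reviews containing a word, the fraction carrying star rating $s$ equals $P(w \mid s)P(s)/P(w)$ by Bayes' rule. The tiny corpus and tokenization below are hypothetical stand-ins for the review data:

```python
# Hypothetical corpus: (star rating, tokenized review body).
corpus = [
    (5, ["love", "this", "dryer", "easy", "to", "use"]),
    (5, ["easy", "and", "great"]),
    (1, ["broke", "not", "great"]),
    (4, ["easy", "but", "loud"]),
]

def p_star_given_word(corpus, star, word):
    """Frequency estimate of P(s|w): among reviews containing `word`,
    the fraction whose rating equals `star` (Eq. (21) with counts)."""
    containing = [s for s, toks in corpus if word in toks]
    return sum(1 for s in containing if s == star) / len(containing)

print(p_star_given_word(corpus, 5, "easy"))   # 2 of 3 "easy" reviews are 5-star
print(p_star_given_word(corpus, 1, "great"))  # 1 of 2 "great" reviews is 1-star
```

On real data the same tally, run over every word in the vocabulary, yields rankings like Table 3.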
+
+Table 3: Top ten words associated with five-star ratings and one-star ratings, respectively.
+
+| Words strongly associated with 5-star rating | P(s = 5|w) | Words strongly associated with 1-star rating | P(s = 1|w) |
| easy | 0.8518 | or | 0.1416 |
| popcorn | 0.8333 | out | 0.1434 |
| love | 0.7809 | not | 0.1483 |
| feature | 0.7778 | no | 0.1513 |
| button | 0.7692 | back | 0.1546 |
| great | 0.7685 | hot | 0.1718 |
| much | 0.7452 | because | 0.1759 |
| high | 0.7250 | months | 0.1940 |
| can | 0.7200 | off | 0.2083 |
| also | 0.7196 | dryer | 0.2125 |
+
+# 7 Attractiveness Analysis of Design Features
+
+In this section, we extract keywords describing the performance, appearance, etc. of the baby pacifier, microwave, and hair dryer from the top $1\%$ most informative reviews. By calculating the frequency of these keywords in all reviews and analyzing the content of the top $1\%$ most informative reviews, we identify design features attractive to consumers; the keyword groups and the numbers of keyword occurrences are presented in Appendix B.
+
+1. Baby pacifier: According to Table 4, consumers are most concerned about the size, appearance, convenience, and safety of baby pacifiers. People care most about whether a pacifier is suitable and safe for their babies, so a range of sizes and safe materials is necessary. Besides, cute patterns are popular with babies.
+2. Microwave: According to Table 5, consumers are most concerned about the appearance, price, and components of microwaves, which shows that people focus on cost performance. Many reviews welcome multifunctional, even if expensive, microwaves: people prefer to get as many features as possible when a product costs a lot, such as rotating grills, accurate timers, and multiple usage patterns.
+3. Hair dryer: According to Table 6, consumers are most concerned about the power and appearance of hair dryers, indicating that they prefer a portable hair dryer with high power. At the same time, safety and working volume affect the user experience, so it is important for Sunshine Company to balance increased power against a safe and quiet operating environment. In particular, some special designs may stimulate purchases; e.g., a folding handle improves the portability of a hair dryer.
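The keyword-group tallies behind Tables 4-6 can be sketched with a simple count; the groups and example reviews below are illustrative stand-ins, not the actual keyword lists of Appendix B:

```python
# Hypothetical keyword groups for the hair dryer (illustrative words only).
groups = {
    "Power": ["power", "powerful", "watt", "fast"],
    "Appearance": ["small", "compact", "color", "light"],
    "Safety": ["hot", "burn", "spark", "smoke"],
}

reviews = [
    "powerful and fast but runs hot",
    "small compact dryer with great power",
    "started to smoke after a month",
]

def group_counts(reviews, groups):
    """Total occurrences of each group's keywords across all reviews."""
    counts = {}
    for name, words in groups.items():
        counts[name] = sum(
            body.split().count(w) for body in reviews for w in words
        )
    return counts

print(group_counts(reviews, groups))
```

Sorting the resulting dictionary by count reproduces the group rankings shown in the appendix tables.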
+
+# 8 Sales Strategies and Recommendations
+
+In this section, we propose online marketing strategies and recommendations for Sunshine Company, based on the results of the previous sections, for the three specific products: baby pacifier, microwave, and hair dryer. Our most confident recommendations, including specific justifications for Sunshine Company, are as follows:
+
+- (General recommendation) Reviews with many helpful votes and those associated with strong feelings are often informative. Collect such reviews and star ratings in real time, and pay as much attention to text-based reviews as to star ratings.
+- (General recommendation) Always maintain a high proportion of five-star ratings, as it is strongly associated with the reputation of the product.
+- (General recommendation) Place more emphasis on negative reviews and those associated with low-star ratings if a product has a bad reputation, e.g., a high proportion of low-star ratings, and provide feedback and adjustments immediately, as such reviews and ratings will keep damaging the product's reputation. Most people who give one-star ratings state that they will update their review once they have heard (or not heard) from the company. Around $30\%$ of one-star ratings are associated with untimely warranty service or feedback from the companies.
+
+- (Specific justification for the hair dryer) The condition of the hair dryer seems to be poor. Of the top 100 most informative reviews we evaluated, 13 report cases of smoking, throwing sparks, or breaking down, which may pose a danger. Given the decreasing reputation of the hair dryer, we recommend not putting hair dryers on the market until its reputation increases, which will benefit a smooth launch of the product.
+- (Specific justification for the microwave) The microwave has the worst reputation of the three products, with the highest percentage of one-star ratings and strongly negative reviews. Among the top 100 most informative reviews, people widely complain about its bad condition and the potential danger of catching fire. Our model suggests its potential failure. Hence, we recommend waiting until the reputation of microwaves recovers and remains stable.
+- (Specific justification for the pacifier) The pacifier has a good reputation, with the highest proportion of positive reviews and five-star ratings. It is also predicted to be a success by our model, so now is a good time to put it on the market. However, some of the top $1\%$ most informative reviews report cases of leaking water, so the company should pay attention to the connection or joint part.
+- (General recommendation) As low-cost products, pacifiers, microwaves, and hair dryers place low professional requirements on consumers. The more complete the product information is, the less loss will be caused by the information asymmetry between buyers and sellers. Of course, the authenticity of the information must be guaranteed.
+- (General recommendation) Our results indicate that a series of five-star ratings can incite consumer reviews. Therefore, we recommend increasing promotional efforts when your products have more five-star ratings, to form positive feedback.
+- (General recommendation) Creative and engaging advertising will increase product sales. The online marketplace makes it easier for consumers to browse multiple alternatives to a commodity, which leads to fierce competition among companies. Designing interesting and eye-catching advertisements can effectively enhance the competitiveness of products, so Sunshine Company should make appropriate investments in advertising to increase profit. The top $1\%$ most informative reviews suggest that some consumers pay much attention to company websites.
+
+# 9 Strengths and Weaknesses
+
+# 9.1 Strengths
+
+1. Novelty. To the best of our knowledge, we are the first to propose a CE-VADER hybrid model for review text-based sentiment evaluations on an online marketplace.
+2. Accuracy. The maximum RMSE of the 50th-order AR model is 0.031 on the validation time domain. The text-based measures correlate well with the rating-based measures.
+
+3. Generalization. Our proposed framework can be freely applied to any data set, e.g., reviews and star ratings of any product from any online platform.
+4. Robustness. Our model shows great robustness to most of the parameters.
+
+# 9.2 Weaknesses
+
+1. Time-consuming manual annotation. Manually annotating the seed words generated from reviews for the CE-VADER model is time-consuming.
+2. The CE-VADER model cannot identify different forms of the same word with irregular variation rules. It cannot recognize past forms of verbs, plurals of nouns, or comparative forms of adjectives with irregular variation, e.g., is (vs. was), children (vs. child), and better (vs. good).
+3. Missing other potentially relevant factors. We do not take marketing strategies such as Amazon-like sales promotions into consideration when analyzing specific ratings and descriptors.
+
+# 10 Conclusion
+
+To crack the secret of Amazon's ratings and reviews, we proposed a series of novel models to address the sub-issues from selecting the most informative reviews to identifying reviews' quality descriptors. The proposed model achieves high accuracy and robustness.
+
+1. The Information Evaluation Model combines the text-based measure with the rating-based measure, where we propose a novel CE-VADER hybrid model for sentiment analysis as the text-based measure. With this model we can rank how informative each review and rating is, and the informativeness rate correlates with the number of helpful votes. To be more specific, the more helpful votes a review has, the more five-star ratings it owns, and the longer its body is, the more likely it is to be evaluated as informative. However, note that moderately-rated reviews with high information entropy, which contain both positive and negative comments marked by words like "however" and "but", also have great reference value.
+2. We employ the Difference Equation Model to construct a "reputation rate" quantifying the reputation of the three products, namely the baby pacifier, microwave, and hair dryer. The baby pacifier has a positive reputation, the hair dryer a weakly negative one, and the microwave the worst. With a modified AR algorithm, we predicted the future reputation tendencies of these three products.
+3. In analyzing the distribution of star ratings and specific words, we identified special review descriptors by examining runs of consecutive extreme ratings and a set of special words. Consecutive extreme ratings visibly affect the total sales volume, and the appearance of special words predicts the rating of a review with high probability.
+
+# 11 A Letter to the Marketing Director of Sunshine Company
+
+Dear Marketing Director of Sunshine Company,
+
+According to your requirements, we analyzed the ratings and reviews of competitive products on Amazon for the baby pacifier, microwave, and hair dryer to be introduced and sold by your company. We built three models: the Informative Evaluation Model, to assess the amount of information in each review; the Difference Equation Based Model, to formulate the change in a product's reputation; and the Time Series Forecasting Based Evaluation Model, to predict the potential success or failure of a product. We obtained some meaningful results, which can help attract consumers and develop adaptable online sales strategies.
+
+First of all, we construct the Informative Evaluation Model to help you track informative reviews and ratings conveniently. This model assesses how informative each review is based on its star rating, review text, and helpful votes, then ranks the reviews by their amount of information. We believe this evaluation is critical when you handle a large volume of review information, because informative reviews often provide more constructive input on design features and are therefore more useful in reputation analysis.
+
+We obtain consumers' preferences and concerns by sifting through the top $1\%$ most informative ratings and reviews of the baby pacifier, microwave, and hair dryer. The popular design features identified by our analysis are as follows:
+
+- Baby pacifier: Various alternatives to size and safe materials are necessary. Besides, cute patterns are popular with babies.
+- Microwaves: Multifunctional microwaves with rotating grills, accurate timers, multiple usage patterns, etc. are welcomed by consumers.
+- Hair dryer: Consumers prefer small, high-power hair dryers (on the premise of safety). Some additional designs, like a folding handle, help attract consumers.
+
+Then we establish time-based measures to predict increases or decreases in product reputation in the online marketplace. Our model can accurately predict the reputation change of a product over a long future period. This precise prediction lets you generate strategies before a reputation decline.
+
+Based on the reputation prediction, we use the degree to which reputation will increase or decrease in the future as the judgment of a product's potential success or failure. Our analysis shows that, overall, baby pacifiers are potentially successful products, while microwaves and hair dryers are at greater risk of failure.
+
+According to our analysis, we formulate reasonable sales strategies for your company: (1) We recommend putting microwaves and hair dryers on the market when their reputations rise; the reputation of baby pacifiers is already on the rise, so it is best to put them on the market now. (2) The more complete the product information is, the less loss will be caused by the information asymmetry between buyers and sellers. (3) We recommend increasing promotional efforts when your products have more five-star ratings, to form positive feedback. (4) When a product's reputation declines, focus on one-star ratings and reviews.
+
+Thank you for taking the time out of your busy schedule to read our letter. We hope our advice can help.
+
+MCM Team # 2002116
+
+# References
+
+[1] Peter C Evans and Annabelle Gawer. The rise of the platform enterprise: a global survey. 2016.
+[2] Seyed Pouyan Eslami, Maryam Ghasemaghaei, and Khaled Hassanein. Which online reviews do consumers find most helpful? a multi-method investigation. Decision Support Systems, 113:32 - 42, 2018.
+[3] Understanding the determinants of online review helpfulness: A meta-analytic investigation. Decision Support Systems, 102:1 - 11, 2017.
+[4] Ya-Han Hu and Kuanchin Chen. Predicting hotel review helpfulness: The impact of review visibility, and interaction between hotel stars and review ratings. International Journal of Information Management, 36(6, Part A):929 - 944, 2016.
+[5] Y. Chen and J. Xie. Online consumer review: Word-of-mouth as a new element of marketing communication mix. Management Science, 54(3):477-491, 2008.
+[6] Liang-Chih Yu, Jheng-Long Wu, Pei-Chann Chang, and Hsuan-Shou Chu. Using a contextual entropy model to expand emotion words and their intensity for the sentiment classification of stock market news. Knowledge-Based Systems, 41:89 - 97, 2013.
+[7] Clayton J Hutto and Eric Gilbert. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In *Eighth International AAAI Conference on Weblogs and Social Media*, 2014.
+[8] Solomon Kullback. Information Theory and Statistics. John Wiley and Sons, Inc., New York, 1959.
+[9] Ira Assent, Marc Wichterich, Ralph Krieger, Hardy Kremer, and Thomas Seidl. Anticipatory dtw for efficient similarity search in time series databases. Proceedings of the VLDB Endowment, 2(1):826-837, 2009.
+[10] Sangwon Park and Juan L. Nicolau. Asymmetric effects of online consumer reviews. Annals of Tourism Research, 50:67 - 83, 2015.
+
+# Appendices
+
+# Appendix A Annotated Seed Words and Frequency
+
+We manually annotated the seed words into five groups, as follows.
+
+1. **Strong Positive:** great (3006), recommend (1174), perfect (761), best (656), favorite (368), perfectly (183), awesome (125), excellent(114).
+
+2. Weak Positive: like (3624), love (2766), loves (2753), easy (2154), good (1424), well (1286), could (1054), cute (1002), loved (876), nice (855), better (821), able (738), fit (731), right (687), likes (647) wish (644), want (585), happy (577), easier (521), fits (395), liked (360), helps (311), glad (229), clean (193), adorable (191), cute (191), fun (181), Love (163), well (157), fine (153), helped (152), pleased (141), Easy (131), wonderful (131), excited (118), comfort (114), warm (114), LOVES (103), cute! (99), enjoy (96), enjoys (86), cool (85).
+3. Moderate: baby (4998), her (3852), she (3678), he (3537), can (2913), his (2658), use (2446), son (2239), daughter (2114), more (1867), him (1184), She (763), babies (626), easily (619), gift (556), worth (513), super (489), toy (448), wanted (437), well (407), won't (400), smaller (302), wouldn't (303), girl (293), attached (293), exactly (282), given (251), might (218), throw (213), heavy (210), friend (205), bigger (189), brands (171), issue (169), larger (167), anyone (166), worry (166), attach (165), clear (160), fast (159), prefers (158), lose (157), care (157), replace (157), stuck (151), granddaughter (150), issues (146), quick (142), cheap (142), daughter's (140), safety (139), tiny (136), deal (136), expecting (131), wants (129), expected (128), prefer (128), clean (123), holes (123), gift (120), easily (118), simple (118), strong (113), cheaper (110), unless (106), original (107), slightly (105), nursing (103), gifts (98), thin (94), nephew (94), save (94), loose (93), adjust (92), better (92), dirty (87).
+4. Weak Negative: doesn't (1283), didn't (1011), can't (780), hard (734), less (354), problem (346), wasn't (297), couldn't (285), however (252), difficult (248), yet (244), cannot (211), however (175), Not (173), problems (149), aren't (143), However (132), hit (126), against (124), harder (118), waste (118), wrong (102), trouble (101), pain (90), though (180).
+5. Strong Negative: but (7128), disappointed (170), NOT (90), No (113).
+
+# Appendix B The Number of Keyword Occurrences in Different Keyword Groups
+
+The number of keyword occurrences of baby pacifier is listed in Table 4 in a total of 18937 reviews.
+
+Table 4: The number of keyword occurrences in different keyword groups of baby pacifier
+
+| Group | The number of keyword occurrences |
| Size | 3402 |
| Appearance | 3399 |
| Convenience | 3138 |
| Safety | 2374 |
| Component | 706 |
+
+The number of keyword occurrences of the microwave is listed in Table 5, in a total of 1616 reviews.
+
+The number of keyword occurrences of the hair dryer is listed in Table 6, in a total of 11470 reviews.
+
+Table 5: The number of keyword occurrences in different keyword groups of microwave
+
+| Group | The number of keyword occurrences |
| Appearance | 2350 |
| Component | 2289 |
| Price | 2234 |
| Setting | 1049 |
| Safety | 617 |
| Power | 594 |
+
+Table 6: The number of keyword occurrences in different keyword groups of hair dryer
+
+| Group | The number of keyword occurrences |
| Power | 6774 |
| Appearance | 4656 |
| Special design | 3973 |
| Hair quality | 2666 |
| Safety | 1544 |
| Working volume | 1542 |
+
+# Appendix C Top 1% Most Informative Ratings and Reviews
+
+We list below the top $1\%$ most informative reviews and their associated ratings, according to the "importance" rate our model predicted on the "Hair dryer" data set.
+
+1. (5-star rating): Better than expected. Great product, excellent delivery.
+2. (5-star rating): I am small with shoulder problems and this is very compact and light weight. It packs a punch with power!
+3. (5-star rating): I have frizzy thick hair and using this product makes it more manageable. Adds shine and softness.. not oily/heavy.. love it.
+4. (5-star rating): I use it a lot, i am a stylist and use this with my really curly hair clients and it never disappoint me. great price, great quality, great buy.
+5. (5-star rating): Love this dryer, works so great and fast too. Bought it for my mom as a gift and she absolutely loves it! She has very thick hair and hair dryer worked great for her.
+6. (5-star rating): I have thick shoulder length hair, that usually takes 35 - 40 minutes to dry. With this dryer, I'm done in 20 minutes and my hair looks smooth and soft. I was skeptical about the whole thing, but it has made a big difference with my hair. The removable
filter was the deciding factor. I've thrown away dryers just because I couldn't clean them. Excellent product!
+7. (5-star rating): being a professional hairystylist i got so tired of expensive blowdryers that dont dry the hair any better than the cheap ones. THEN, i read the reviews for this dryer and again i thought, yeah right. SOOO, i bought it and i must
+
+say, this blow dryer rocks. it dries the hair in half the time and leaves it smooth and shiny!!! FINALLY, i found one that delivers on its promise
+
+8. (5-star rating): great dryer, works well. Lots of heat and volume when I need it, nice cooling button. Dries my hair fast. Amazon got it to me overnight when mine died, so I only went 1 day with bad hair. Not bad, because I don't have time to shop at the brick and mortar.
+9. (5-star rating): This was so cheap but the product is NOT - excellent quality!! LOVE it and use it a lot!
+10. (5-star rating): We've been using this hair dryer for a few weeks now on a constant basis. Overall, this is a powerful blower that gets your hair dry quickly with excellent balance of heat. No hot spots that cheapo dryers can develop. Yes, it is on the heavy and bulky side but that's because of the quality construction. I have no doubt this dryer will continue to last for years to come.
+11. (5-star rating): My wife used one of these at a motel we stayed in. She really wanted one so I bought it. Your old lady will dig it too because it is powerful and has a cord reel. You will hate yourself later when you have to disassemble it to dig her hair out of it because she complains about it not blowing hard like it used to. But women are like that......blah blah blah complain blah......
If you can disassemble this thing, clean it and re-assemble it in working order before she needs it to dry her wet head consider yourself lucky. I think a nuclear bomb would be easier to work on.
+12. (5-star rating): This is my second hair blower of the exact same model. (Conair Ionic Conditioning Pro Style 1875-Watt Hair Dryer)[[ASIN:B00005O0MZ Conair Ionic Conditioning Pro Style 1875-Watt Hair Dryer]] I would by it again.
+13. (5-star rating): It let my hair so nice and without freeze!!! im so sad that i drop mine by mistake after 3 years that i have it with me, but ill defenently buy it again!!!
+14. (5-star rating): I have very fine hair that doesn't like to be styled. Using this dryer, I have;
Volume
Body
Curl
AND, I don't have to use any product to get it that way. I believe the key is not only the dryer, but, the shampoo you use. I use a Keratin infused shampoo with Ion ultra light weight conditioner. Bottom line- this blow dryer is incredible!
+15. (5-star rating): I have VERY thick hair, and this really did cut down on my drying time. I use the cold button when I'm done and it seems to make my hair a little more manageable.
+16. (5-star rating): I will now be saving money by going less to my hairstylist. This blowdryer dried my hair 2x faster than a previous blowdryer I had. I wished I had bought this blowdryer years ago!
+17. (5-star rating): I have one in my guest bathroom and my visitors can use it. You dont have to looking one in the lower of the lavatory.
+
+18. (5-star rating): This dryer is wonderful, the power you have is second to none ..., a little heavy but worth it for the results it leaves. I am very happy.
+19. (5-star rating): It will seriously dim the lights in your house, but man can it dry hair!
+20. (5-star rating): It dries my hair quickly, is small enough to pack in my suitcase, has a retractable cord, and makes my hair incredibly silky and smooth. And you can't beat the price either. I'm never buying another dryer brand/model as long as this one is in production.
+21. (5-star rating): Small but packs some serious heat and power!
+22. (5-star rating): I probably bought and returned five other hair dryers before keeping this one. My kids and I all have LOTS of hair that takes forever to dry. This hair dryer is the best; it dries our hair much faster than our old one did. No worrying about accidentally turning it off or changing the setting, you don't have to hold the button for the cold shot, and if the regular setting isn't quick enough for you, there's even a turbo setting. This saves us so much time in the mornings! I'm thinking about buying a second one to have in the other bathroom so two of us can dry our hair at once. It did seem a bit big at first, but it didn't take long to get used to it. Other dryers had too much air flow and I couldn't control how I was styling my hair, or the high heat setting was too hot, but the settings on this one are just right.
+23. (5-star rating): Exactly what we were looking for.
+24. (5-star rating): I bought this because of the great reviews. I color my hair (to cover gray) and blow drying always takes too long and hair looks dry and have some sticking up all over. I bought this and OMG!!! It took half the time to dry and no fly-away hair!! I have gotten so many compliments. I loved it but never thought anyone would notice. I really do get such nice compliments. Remember I am over the hill and hair seems to get dryer with the years and coloring doesn't help that except cover the gray which I am so happy for that. But if you feel your hair is getting too dry and/or frizzy, fly-away hair — this will take care of it. Its a one-time purchase and so worth it. 3-temperatures and a cool button which I love. What can I say, oh it is not all that heavy, regular weight as other unless you want a little blow and a lot of noise from a much cheaper hair blower. Been there done that... this is the BOMB!!! Don't waste more time or money on frustrating hair blowers, be good to yourself and buy this awesome hair blower. Take care and you will not regret it for one second.
+25. (5-star rating): Very affordable, reviews were accurate. It is quieter than my old one and quick. No need to spend a lot of money on a dryer when you have the quality with Conair.
+26. (5-star rating): Discovered this little gem in a hotel in NYC. Dries hair without leaving frizz or fly-aways. Folding handle makes it convenient for storing and travel.
+
+27. (5-star rating): This hairdryer is the lightest and quietest I've ever used. In fact, my husband now wants one of his own!
+
+28. (5-star rating): I love love love this hair dryer....wash my hair seat for 30 min Nd I'm done...
+
+29. (5-star rating): Every day I used to dread blow drying my hair. It's long and thick and after a half an hour it still wouldn't be dry. This hair dryer gets my hair completely dry in 3-4 minutes max. It has totally changed my life. I have bought one more just to have on hand. You never know how long a hair dryer will last.
+
+30. (5-star rating): I got this dryer to replace my old one and wow does it make a difference. My hair is so soft and shiny. I can even use this to dry my 1 year old daughter's hair and I couldn't to that with my old one because it would get too hot even on the lower setting. It dries fast without burning. Definitely worth every cent!
+
+31. (5-star rating): I already have one of these, but because I travel a lot, I have one at home and one for my travel trailer. Lots of heat & power in a small little dryer.
+
+32. (5-star rating): I LOVEAAAAAAAA IT
+
+33. (5-star rating): Still using this after a year....love the retractable cord
+
+34. (5-star rating): It is faster compared to my old one, the cord doesn't use extra space as it rolls inside, which I find very convenient.
+
+35. (5-star rating): use it every day
+
+36. (5-star rating): Was everything they said it was takes up very little space and does the job
+
+37. (5-star rating): This is a replacement for the one I have had for many years.
+
+38. (5-star rating): Expensive but worth every penny. professional hair dryer with professional results. Cuts time spent in the bathroom drying my hair
+
+39. (5-star rating): I have thick hair and this baby gets my hair dry quickly and WITHOUT frizzies! Love it!
+
+40. (5-star rating): This blow dryer is great for traveling. The fold up handle allows for easy packing. The cord retractor is great
+
+41. (5-star rating): I don't use my hair dryer for much more than drying purposes... Lol... It's simple, in-expensive, and it blows a lot of hot air... ;)
It serves its purpose well...
+
+42. (5-star rating): It dries so fast it is great! I had brought one over from France that my mom had gotten years and years ago but unfortunately I was never able to get the same power due to the voltage difference. I ordered this one and it is the same to the T (except color). Cut down my hair routine in half.
It does fume and heats up A LOT
+
+so be careful!
(I have black relaxed hair)
+
+43. (5-star rating): I considered shelling out some serious cash to buy a high end blow-dryer but then I realized that's stupid...and I'm cheap, so I bought this instead and I honestly could not be happier! I have long hair (like to the middle of my back) and it's pretty thick and this dryer dries my hair in under 10 minutes. My last dryer took FOR.EV.ER so to avoid getting stared down by my husband and watching the hate build in his eyes because I was taking so long to get ready and thus withholding whatever meal we are trying to go eat from him, I would just not bother drying and curling my hair and instead would just rock my natural fro but with this new dryer I can do my hair and still get out of the house quickly (relatively, anyway).
I recommend this dryer highly! No need to waste your hard earned cash on a high end dryer; this one does the job!
+
+44. (5-star rating): Return is no longer needed
+
+45. (5-star rating): We had one of these over 18yrs old and it finally gave out! The new one outdoes the previous model!
+
+46. (5-star rating): This was purchased to replace the one that got stomped (by accident). It is of a very professional quality, without the high price tag.
+
+47. (5-star rating): Fast drying with lots of shine - I was first introduced to this hair dryer at a 5-star hotel I stayed at - it dries hair fast with lots of shine.
+
+48. (5-star rating): Wanted same brand and wattage they have on the wall at our Y.
Perfect. Just what my spouse was looking for.
+
+49. (5-star rating): My hair is so soft with this dryer and it doesn't frizz out. Also, it isn't as loud as my previous cheaper dryer.
+
+50. (5-star rating): This hair dryer has a lot of power. The wall mount keeps it out of the way, but readily accessible when needed. This is a replacement hair dryer for us and we purchased the same one when the other quit working.
+
+51. (5-star rating): I read about the John Frieda Full Volume Hair Dryer in a magazine, and it has more than lived up to its review. Dries quickly, leaves hair soft, WITH NO FRIZZIES, and with lots of body. It's made my wavy, mind-of-its-own hair behave for the first time. I highly recommend this product. Couldn't live without it.
+
+52. (5-star rating): This dryer is exactly what I wanted very lightweight and does the job
+
+53. (5-star rating): Exactly as expected
+
+54. (5-star rating): This hairdryer has great power and different heat settings. The price and quality are great, it is very heavy duty. The automatically recoiling cord is a bonus!
+
+55. (5-star rating): I bought this hair dryer to replace my previous Hot Tools hair dryer that had broken after many years of use. This hair dryer is even better than my old one! It is much lighter and still has the same fast drying power of the last one. I take it everywhere because no other hair dryer can dry my long layered hair as quickly and
+
+easily.
+
+56. (5-star rating): Sets up very easily and quickly. No tools needed. Literally just a few minutes. I put in a bar stool to sit on and a towel over the vent holes. As others have said, the towel over the fresh air hole is needed in order to keep the steam in. I don't know why people complain about the air vent holes. For 2 bucks you can buy a towel for them or for $2000 you can buy a different style of steam sauna. Seriously folks, buy a towel. Add a little aromatherapy salts and it is bliss! Hard to find this kind of bliss for a mere 200 bucks. I get mine almost painfully hot in 30 mins then lower the temp, get in, and reset the timer. The picture displayed is a little off as the door opens from one side and not down the middle as pictured. No worries though. It's nice. I would buy it again.
+57. (5-star rating): awesome!!!!!!!!!!!!!!
+58. (5-star rating): This hair dryer is great-it's lightweight and dries my hair in half the time of all other hair dryers I've used.
+59. (5-star rating): I was really surprised when I received this hair dryer because the quality of the product IT IS GREAT
The dryer is very affordable and has a great look and feel and puts out nice heat. I would definitely recommend this product to friends and am very happy with it.
+60. (5-star rating): I add the conditioner to my routine. My husband found that it really helped when he use both the shampoo and the conditioner and left the conditioner on for a couple of minutes
+61. (5-star rating): I couldn't live without it!
+62. (5-star rating): This is the third wall mount I have purchased. Love the convenience and that I don't have to keep in a drawer.
+63.