Design of 500 kV AC Double-Circuit Transmission Line on the Same Tower Located at 20 mm Heavy Icing Area
ZHANG Haiping, ZHANG Chi, WANG Jiangtao, ZHAO Qingbin, REN Deshun, GOU Jie (Sichuan Electric Power Design & Consulting Co., LTD., Chengdu 610016, Sichuan Province, China)
ABSTRACT: During the construction of the first 500 kV AC double-circuit transmission line on the same tower located in a 20 mm heavy icing area in China, the dynamic response of the line under various ice-shedding cases was analyzed by numerical simulation. Engineering formulas for the ice-shedding jump height and swing distance were obtained, together with the variation of the dynamic load on the tower. Based on these conclusions, the routing, insulation coordination, tower head design, tower load and tower type of the 500 kV AC double-circuit line on the same tower in the 20 mm heavy icing area were studied. The summarized design methods and ideas provide an important reference for future double-circuit transmission lines on the same tower in heavy icing areas.
0 Preface
The western Sichuan, Yunnan and Tibet regions are rich in hydropower resources, but they have a harsh natural environment and a complex social environment. With the growing number of hydropower transmission lines, corridor resources are becoming scarce, so lines in high-altitude and heavy-ice areas, traditionally erected as single circuits, are now also required to be erected as double circuits on the same tower. The Maoxian II 500 kV double-circuit line (referred to as the Maomao 500 kV line) has been put into operation; it is the first double-circuit line on the same tower in a 20 mm heavy ice area in China.
Restricted by the Maoxianping Industrial Development Zone in Mao County, the Baodingshan National Nature Reserve and existing or planned high-voltage line corridors, only one route is available for the section crossing the ridge in the 20 mm heavy ice area, and it must be designed as a double-circuit line on the same tower. There is no precedent for a double-circuit line on the same tower in a heavy ice area, and the main technical bottleneck is the design of the double-circuit tower itself. Because the dynamic response of conductor ice-shedding under different conditions could not be accurately characterized, guiding principles for the tower head dimensions and internal forces of such towers were lacking. In recent years, scholars at home and abroad have carried out extensive research on the dynamic response of conductor ice-shedding through simulation tests and numerical analysis [1-17], and some conclusions of practical engineering value have been obtained. Especially after the ice disaster in southern China in 2008, domestic scholars studied the dynamic response of ice-shedding in greater depth to explore the mechanism of ice disaster accidents: Chen Kequan et al. [14] used ABAQUS to establish a finite element model of a 500 kV transmission line and obtained the influence of different factors on the conductor jump height; Yi Wenyuan [15] used ABAQUS to analyze the ice-shedding dynamic response of a UHV 8-bundle conductor and gave simplified formulas for jump height and swing; Yan Bo et al. [16] analyzed numerical results of the conductor dynamic response under various ice-shedding conditions and obtained a practical simplified formula for the maximum jump height; Yang Fengli et al.
[17] established an ice-shedding jump analysis model for UHV lines in heavy ice areas, focused on the dynamic response of suspension towers, and gave recommended values for the unbalanced tension and vertical loads of suspension towers in heavy ice areas. The above models all adopt a simplified three-degree-of-freedom system that ignores the bending and torsional stiffness of the conductor, and most do not consider the influence of tower constraints; nevertheless, these research methods and ideas lay a good foundation for understanding the dynamic response of conductor ice-shedding. Based on the actual situation of the Maomao 500 kV heavy-ice-area line project, this paper uses Ansys to establish a refined finite element model including the towers, conductors and ground wires, insulator strings, spacers, ice-shedding loads and wind loads. Numerical simulation is used to study the dynamic response of ice-shedding of transmission lines in heavy ice areas, followed by research on the route selection, insulation configuration and tower design of the 500 kV double-circuit line on the same tower in the 20 mm heavy ice area.
1 Ice-shedding dynamic response of the transmission line in the 20 mm heavy ice area
1.1 Transmission line finite element model
The conductor is stranded from a high-strength steel core and outer aluminum wires. It has a certain bending and torsional stiffness and cannot simply be treated as a cable element; the spatial beam element Beam4 is therefore used to simulate its stress state more realistically. The insulator string is also modeled with Beam4 elements; for disc insulators, the articulated connection is represented by releasing the rotational coupling of the nodes every 0.17 m. Spacers are simplified as couplings of the nodal degrees of freedom plus a concentrated force converted from the spacer weight. The tower members are also modeled with Beam4 elements.
The multi-span tower-line coupling model is established with tension towers at both ends and suspension towers in between.
1.1.2 Line load
In view of the lack of mechanical parameters of the ice coating, the mass method is used: the density of the conductor is increased to represent the ice load,
ρci = ρc + ρi Ai / Ac (1)
where ρci is the equivalent density of the conductor after icing; ρc and Ac are the density and cross-sectional area of the conductor; ρi and Ai are the density and cross-sectional area of the ice coating.
The ice-shedding load is simulated by the density-change method: at the moment of shedding, the conductor density is switched from the equivalent iced density back to the actual conductor density, while the shedding range and shedding rate are controlled by selecting the elements whose density is changed.
The horizontal wind load on the conductors and ground wires is calculated according to the Design Code for 110 kV-750 kV Overhead Transmission Lines (the Design Code) [18] and applied as concentrated forces at the element nodes:
F = W Lp (2)
where F is the concentrated force at an element node; W is the wind load per unit length calculated per the Design Code; Lp is the element length.
1.2 De-icing jump height formula
To facilitate engineering design, the engineering community has long sought a simplified formula for the conductor ice-jump height. In [16], an engineering formula is given based on the two static sags Δ of the conductor before and after ice-shedding. For a multi-span transmission line, Δ cannot be calculated directly; only the sag difference δ before and after ice-shedding can be obtained.
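Returning to the mass method of 1.1.2, equation (1) can be sketched in a few lines. The conductor and ice parameters below are illustrative assumptions (the project's exact LGJ-500/45 data are not restated here); only the mass-balance relationship itself comes from the text.

```python
import math

# Mass method, eq. (1): raise the conductor density so that the element mass
# per unit length includes the ice load: rho_ci = rho_c + rho_i * A_i / A_c.
def iced_density(rho_c, area_c, rho_i, area_i):
    """Equivalent density of the iced conductor (conserves mass per unit length)."""
    return rho_c + rho_i * area_i / area_c

# Illustrative values only, NOT the project's exact data:
rho_c = 3349.0               # conductor equivalent density, kg/m^3 (assumed)
area_c = 531.7e-6            # conductor cross-section, m^2 (assumed)
d_c = 0.030                  # conductor diameter, m (assumed)
t = 0.020                    # 20 mm radial ice thickness
area_i = math.pi * t * (d_c + t)   # annular ice cross-section
rho_i = 900.0                # glaze ice density, kg/m^3 (assumed)

rho_ci = iced_density(rho_c, area_c, rho_i, area_i)
# rho_ci * A_c equals rho_c * A_c + rho_i * A_i, i.e. conductor mass plus ice mass.
```

At the moment of shedding, the density-change method simply resets the affected elements from rho_ci back to rho_c.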
A large number of simulations show that the conductor ice-jump height is proportional to the ice thickness, the span and the ice-shedding rate, and inversely proportional to the conductor cross-section. From these results, the δ-based jump-height formula (3) was obtained by fitting the continuous five-span tension section model [19]; in it, δ is the sag difference before and after ice-shedding; ω is the ice-shedding rate; r is the conductor type coefficient, whose value is given in Table 1; and η = [(L − 500)/100] × 0.06 is the span correction coefficient, where L is the span.
Tab. 1 Value of r
│ Wire type  │ Ice thickness/mm │ r     │
│ LGJ-500/45 │ 20               │ -0.08 │
According to Table 2, the error of formula (3) is within 8%, which meets engineering requirements, and the formula scales well with different parameter values.
Tab. 2 Difference of ice-shedding jump height between numerical simulation and formula
│ Ice thickness/mm │ De-icing rate/% │ Sag difference/m │ Span/m │ Simulated value/m │ Formula value/m │ Error/% │
│ 20 │ 100 │ 0.95 │ 400 │ 21.23 │ 20.84 │ 1.84 │
│ 20 │ 100 │ 1.40 │ 500 │ 31.41 │ 30.91 │ 1.59 │
│ 20 │ 100 │ 1.82 │ 600 │ 43.91 │ 40.45 │ 7.88 │
1.3 De-icing jump yaw distance formula
The simulation results show that the ice-shedding yaw distance of the conductor is proportional to the ice thickness, wind speed, horizontal span and ice-shedding rate, inversely proportional to the conductor cross-section, and related to the insulator string type. Formula (4) for the yaw distance at 100% ice-shedding was fitted from the continuous five-span tension section model in the heavy ice zone.
In formula (4), B is the yaw distance of the conductor; v is the wind speed concurrent with ice-shedding; l is the horizontal span; m and t are coefficients related to the conductor and insulator string types, with values given in Table 3.
Tab. 3 Values of m and t
│ Wire type and string type │ m       │ t        │
│ LGJ-500/45 (I string)     │ 0.010 0 │ -1.966 5 │
│ LGJ-500/45 (V string)     │ 0.010 0 │ -1.582 5 │
As seen from Table 4, the error of formula (4) is within 9%, which meets engineering requirements, and the formula scales well with different parameter values.
Tab. 4 Difference of ice-shedding swing distance between numerical simulation and formula
│ Ice thickness/mm │ Wind speed/(m/s) │ Span/m │ String type │ Simulated value/m │ Formula value/m │ Error/% │
│ 20 │ 15 │ 400 │ I string │ 4.51  │ 4.58 │ 1.45 │
│ 20 │ 15 │ 500 │ I string │ 7.41  │ 6.83 │ 7.89 │
│ 20 │ 15 │ 600 │ I string │ 9.96  │ 9.08 │ 8.88 │
│ 20 │ 15 │ 400 │ V string │ 5.43  │ 5.44 │ 0.17 │
│ 20 │ 15 │ 500 │ V string │ 8.06  │ 7.69 │ 4.60 │
│ 20 │ 15 │ 600 │ V string │ 10.40 │ 9.94 │ 4.43 │
1.4 Change law of the dynamic tower load during ice-shedding
The load at the connection point between the insulator string and the tower is decomposed into three directions: FX, FY and FZ represent the forces along the line, in the direction of gravity, and horizontally perpendicular to the line, respectively.
During ice-shedding the dynamic load varies with time, and its time history in each direction can be obtained through the observation window (taking the FX-direction dynamic load as an example). Comparing the dynamic load observations under different conditions with the static standard load or design load gives the following rules:
1) During ice-shedding, the X-direction force on suspension and tension towers within the continuous section is less than the static standard load calculated per the Technical Specification for the Design of Overhead Transmission Lines in Heavy Icing Areas (the Heavy Ice Code) [20], and the tension difference on the two sides of a tower increases as the span difference increases.
2) During ice-shedding, the Y- and Z-direction forces on suspension and tension towers can exceed the static standard loads calculated per the Heavy Ice Code, but remain less than the design load once the dynamic amplification factor is considered.
3) The unbalanced tension on tension towers with a large span difference between the two sides may exceed the static standard load calculated from the maximum used tension percentage in the Heavy Ice Code, and should be checked against the ice-shedding rate specified in the code.
2 Principles of path selection for double-circuit lines on the same tower in heavy ice areas
From the perspective of safety and economy, lines in heavy ice areas should preferably be routed as single circuits. A double-circuit line on the same tower in a heavy ice area is therefore justified only when the path corridor is unique.
Combined with the ice-shedding dynamic response conclusions of the previous section, the following supplements to the Heavy Ice Code are made for route selection in heavy ice areas:
1) Since the ice-jump height and yaw distance are proportional to the number of continuous spans, the number of continuous spans in a heavy ice area should not be excessive, preferably no more than 5, and a tension section should not be too long, preferably no more than 3 km.
2) Since the ice-jump height and yaw distance are proportional to the span (horizontal span), and the unbalanced tension increases with the span difference on the two sides of a tower, spans in heavy ice areas should not be too large, differences between adjacent spans should not be too large, and continuous spans should preferably not exceed 500 m.
3) Since the horizontal and vertical forces are dynamically amplified during ice-shedding, line angles in the heavy ice zone should not be too large, and tower siting conditions must be controlled.
4) The icing boundary is a weak link of the line. Where a 20 mm ice area adjoins a 10 mm ice area, the 20 mm design should extend 1 to 3 spans into the 10 mm area, or a 15 mm ice area should be used as a transition.
5) Attention should be paid to the investigation and identification of micro-topographic and micro-meteorological zones; extreme meteorological conditions should be verified when necessary, and the ground-wire suspension system and tower structure strengthened appropriately according to the results.
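As a quick consistency check of the simplified formulas underpinning these routing rules, the error figures reported in Table 2 of Section 1 can be recomputed from the tabulated values themselves (a sketch; the numbers are copied from Table 2, and η is the span correction coefficient defined in 1.2):

```python
# Relative error between simulated and fitted ice-jump heights (Table 2),
# plus the span correction eta = ((L - 500) / 100) * 0.06 from Section 1.2.
rows = [  # (span L / m, simulated height / m, formula height / m)
    (400, 21.23, 20.84),
    (500, 31.41, 30.91),
    (600, 43.91, 40.45),
]
for span, simulated, fitted in rows:
    eta = (span - 500) / 100 * 0.06
    error = abs(simulated - fitted) / simulated * 100
    print(f"L={span} m  eta={eta:+.2f}  error={error:.2f}%")  # all errors within 8%
```

The largest error (7.88% at the 600 m span) sits just under the 8% bound quoted in the text, which is one reason the routing rules above cap continuous spans near 500 m.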
3 Insulation configuration in the high-altitude heavy ice area
3.1 Pollution zoning and insulator type
Judging from the 500 kV lines already built in Sichuan, heavy ice areas generally coincide with high altitudes, and pollution levels are usually no worse than class c. The Maomao 500 kV line, however, is special: its heavy-ice section crosses an industrial park, where pollution is clearly heavier. The heavy-ice section of this project is therefore insulated according to a class d pollution area. Considering the difficult terrain and inconvenient operation and maintenance of the heavy ice area, high-quality pollution-resistant toughened glass disc insulators are used for both suspension and tension strings.
3.2 Number of insulators
The Maomao heavy ice area combines high altitude, heavy icing and heavy pollution, making its insulator selection process very representative. Clause 9.9.1 of the Heavy Ice Code stipulates: "The insulation coordination of the transmission line shall enable the line to operate safely and reliably under power frequency voltage, switching overvoltage, lightning overvoltage and other conditions; in repeatedly iced areas the power frequency wet withstand strength of the iced insulator string shall also be checked." The insulation configuration of the double-circuit line on the same tower in the heavy ice area is developed accordingly.
1) Selection by power frequency creepage distance. According to clauses 7.0.5 and 7.0.8 of the Design Code, the creepage ratio method (λ > 3.2 cm/kV) is used to select the number of insulators, with high-altitude correction; the numbers selected by power frequency creepage distance are shown in Table 5.
Tab. 5 Number of insulators
│ Insulator type │ Structural height/mm │ Disc diameter/mm │ Creepage distance per disc/mm │ By creepage distance │ By switching overvoltage │ By lightning overvoltage │ By iced withstand voltage │
│ U160BP │ 155 │ 320 │ 550 │ 36 │ 26 │ 35 │ 41 │
│ U300BP │ 195 │ 380 │ 635 │ 31 │ 21 │ 28 │ 33 │
2) Switching overvoltage verification. According to the method of Overvoltage Protection and Insulation Coordination of AC Electrical Installations [21], the 50% discharge voltage of the insulator string under positive switching impulse is verified against the switching overvoltage requirement, with high-altitude correction. At an altitude of 2 200 m, 26 XP-160 standard insulators (structural height 155 mm) meet the switching overvoltage requirement, so switching overvoltage does not control the number of insulators.
3) Lightning overvoltage verification. Operating experience of double-circuit projects on the same tower at home and abroad shows that a reversed phase sequence combined with balanced high insulation significantly reduces the lightning trip-out rate, so this method is adopted. The numbers of insulators selected per the lightning protection level of the towers are shown in Table 5.
4) Iced withstand voltage verification. The insulation strength of an insulator string in a heavy ice area drops significantly when iced, and flashover can occur at power frequency voltage; several 500 kV and 220 kV lines in southwest China have experienced such trip-outs in heavy-ice sections.
Therefore, the number of insulators in a heavy ice area must also be verified against the power frequency iced withstand capability. The calculation formula [20] is
m ≥ Um / (Un H) (5)
where m is the number of insulators per string; Um is the highest operating phase voltage of the system, kV; Un is the iced withstand voltage gradient, kV/m; H is the structural height of each insulator, m.
According to domestic experience with insulation design in heavy ice areas, in areas where pollution is not severe the power frequency withstand voltage gradient of an iced insulator string is checked at 70 kV/m; lines designed by this method, such as the Pusi line, have operated for many years without an insulator-string ice flashover. Considering that in this project heavy icing and heavy pollution coexist, the iced withstand requirement is appropriately tightened and the check value is taken as 60 kV/m.
In addition, the ice flashover voltage decreases with increasing altitude, with the relationship [20]
Uh = Uo (Ph / P)^n (6)
where Uo is the ice flashover voltage at standard atmospheric pressure, kV; Uh is the ice flashover voltage at altitude h (in m), kV; P is standard atmospheric pressure, 101.325 kPa; Ph is the atmospheric pressure at altitude h, kPa; n is a characteristic index taken per the relevant research conclusions of Chongqing University [20].
After altitude correction, the iced withstand voltage check value becomes 50 kV/m, and the resulting numbers of insulators are shown in Table 5. Table 5 shows that the number of insulators is controlled by the iced withstand voltage.
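The iced-withstand check of formula (5) reduces to a one-line calculation. The sketch below uses the Table 5 structural heights and the 50 kV/m altitude-corrected gradient; taking the highest system voltage as 550 kV (so the highest phase voltage is 550/√3 kV) is an assumption consistent with a 500 kV line, and it reproduces the Table 5 column:

```python
import math

# Number of insulators per string from the power-frequency iced withstand
# check, eq. (5): m >= Um / (Un * H), rounded up to a whole insulator.
def insulators_for_icing(um_phase_kv, un_kv_per_m, structural_height_m):
    return math.ceil(um_phase_kv / (un_kv_per_m * structural_height_m))

um_phase = 550 / math.sqrt(3)  # highest operating phase voltage, kV (assumed 550 kV system)
un = 50                        # iced withstand gradient after altitude correction, kV/m

print(insulators_for_icing(um_phase, un, 0.155))  # U160BP, H = 155 mm -> 41
print(insulators_for_icing(um_phase, un, 0.195))  # U300BP, H = 195 mm -> 33
```

Both results match the "By iced withstand voltage" column of Table 5 (41 and 33), confirming that this check, not creepage or overvoltage, controls the string length.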
3.3 Air gaps
For this project's heavy-ice-area line at altitudes of 1 700-2 200 m, the altitude-corrected tower-head and phase-to-phase air gaps are shown in Table 6.
Tab. 6 Air gaps of the 500 kV AC double-circuit transmission line on the same tower in the heavy icing area
│ Item │ I string │ V string │
│ Lightning overvoltage gap/m │ 4.6 │ 4.6 │
│ Switching overvoltage gap/m │ 3.2 │ 3.8 │
│ Power frequency voltage gap/m │ 1.65 │ 1.75 │
│ Live-line maintenance gap/m │ 3.6 │ 4.0 │
│ Phase-to-phase power frequency voltage gap/m │ 2.6 │ │
│ Phase-to-phase switching overvoltage gap/m │ 5.5 │ │
Note: Live-line maintenance should additionally allow 0.5 m for the range of worker movement.
4 Design of the double-circuit tower in the heavy ice area
4.1 Tower head size control conditions
Considering tower safety and economy, a vertically arranged drum-type tower is recommended for the double-circuit tower in the heavy ice area, with the suspension tower adopting the "V-I-V" arrangement; the control conditions for the tower head size are discussed below. Clause 10.0.1 of the Heavy Ice Code stipulates: "To reduce or avoid flashover accidents between conductors, if a non-horizontal arrangement is adopted, there shall be sufficient vertical spacing and horizontal offset on the tower to meet the static and dynamic clearance requirements between conductors, or between conductors and ground wires, shedding ice at different times. The static approach distance should not be less than the switching overvoltage gap; the dynamic approach distance should not be less than the power frequency voltage gap." The gap design of the double-circuit tower in the heavy ice area is based on this.
1) Horizontal offset. According to the project data, the maximum span in the continuous section of the heavy-ice segment is 454 m and the maximum horizontal span is 316 m.
By conventional mechanical calculation, the sag difference before and after icing is 1.19 m, and by jump height formula (3) the maximum ice-jump height is 24.34 m at an 80% ice-shedding rate and 25.49 m at 100%. Since the ice-jump height increases with the span, merely raising the tower to provide the dynamic clearance for the ice jump is uneconomical and also reduces the safety and reliability of the tower. If the lower conductor is instead allowed to jump up toward the upper conductor or a ground wire that has not shed its ice, then to avoid phase-to-phase or phase-to-ground flashover, the power frequency phase-to-phase or phase-to-ground gap must be maintained between conductors, or between conductors and ground wires, under the dynamic conditions of the ice jump. This requirement is mainly met by the horizontal offset between conductors and ground wires. According to the Heavy Ice Code: horizontal offset ≥ working voltage gap + bundle radius + maximum conductor yaw during the ice jump. The working voltage gaps are given in Table 6; the bundle radius is 0.225 m (subconductor spacing 0.45 m); and by yaw formula (4) the maximum conductor yaw at a 316 m horizontal span is 3.55 m. The minimum horizontal offsets between conductors and ground wires for the double-circuit towers of this project are therefore as shown in Table 7.
Tab. 7 Horizontal offset distances
│ Condition │ Item │ Voltage gap/m │ Bundle radius/m │ Max. ice-jump yaw/m │ Calculated min. offset/m │ Recommended value/m │
│ Ice 20 mm │ Conductor to ground wire │ 1.75 │ 0.225 │ 3.55 │ 5.525 │ 6.0 │
│ Wind 15 m/s │ Between conductors │ 2.6 │ 2×0.225 │ 3.55 │ 6.60 │ 7.0 │
2) Vertical spacing. According to clauses 10.0.1 and 10.0.4 of the Heavy Ice Code, for the vertically arranged double-circuit tower in a heavy ice area, the static approach distance between the upper and lower conductors, and between conductors and ground wires, under uneven ice-shedding must be verified; this distance should not be less than the switching overvoltage gap of the line. The code also gives the verification condition: the middle span of the continuous section sheds ice while the remaining spans and the ground wires do not. The ice-shedding rate of the middle span should be determined from operating experience; in the absence of data, for heavy ice areas at 330 kV and above it may be taken as no less than 80% of the design ice load [20].
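The horizontal-offset rule behind Table 7 is simple arithmetic; a minimal sketch using the gap, bundle radius and yaw values quoted above:

```python
# Heavy-ice code rule (Section 4.1): horizontal offset >= voltage gap
# + bundle radius + maximum conductor yaw during the ice jump.
def min_horizontal_offset(gap_m, bundle_radius_m, max_yaw_m):
    return gap_m + bundle_radius_m + max_yaw_m

# Conductor to ground wire: power frequency gap 1.75 m, one bundle radius.
print(round(min_horizontal_offset(1.75, 0.225, 3.55), 3))   # 5.525
# Between conductors: phase-to-phase gap 2.6 m, two bundle radii.
print(round(min_horizontal_offset(2.6, 2 * 0.225, 3.55), 2))  # 6.6
```

The calculated minimums of 5.525 m and 6.60 m are then rounded up to the recommended 6.0 m and 7.0 m of Table 7.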
Static approach may mean an absolutely static state after ice-shedding at different times, or a relatively static state between conductors and ground wires of different layers during ice jumps at different times. To be safe, the horizontal offset is ignored when calculating the vertical spacing, so that:
vertical spacing between conductors (or between conductor and ground wire) = static sag of the upper conductor (ground wire) under uneven ice − static sag of the lower conductor under uneven ice + phase-to-phase (phase-to-ground) switching overvoltage gap.
The phase-to-phase (phase-to-ground) switching overvoltage gaps are given in Table 6. The calculation of the static sag under uneven ice is essentially a calculation of the conductor or ground wire tension under uneven ice and of the suspension insulator string offset, which can be carried out by the method in [22]. The resulting vertical spacings for the towers planned for this project are shown in Table 8.
Tab. 8 Vertical dimensions of different layers of typical towers
│ Tower type │ Vertical spacing between conductors/m │ Ground-wire support height/m │
│ SZVB5202 │ 17.5 │ 7 │
│ SJB5201 │ 17.5 │ 10 │
3) Size control conditions of the double-circuit tower head in the heavy ice area.
From the tower head arrangement results, the following conclusions are drawn: the height of the lower phase layer of the suspension tower, the layer heights of the tension tower and the ground-wire support height are controlled by the vertical spacing, while the heights of the upper and middle layers of the suspension tower are controlled by the clearance circle; the cross-arm length of the tension tower is controlled by the horizontal offset, the upper and lower phase cross-arm lengths of the suspension tower are controlled by the clearance circle, and the ground-wire support length is controlled by the shielding angle.
4.2 Tower load study
The safety level of the double-circuit tower in the heavy ice area is set to level one, with a structural importance factor of 1.1. Combining the relevant provisions on load values and load combinations for single-circuit towers in heavy ice areas and for double-circuit towers in light and medium ice areas, the load combinations for the double-circuit tower in the heavy ice area should not be less severe than those for light and medium ice areas, and the load values should not be lower than those specified for single-circuit lines in heavy ice areas.
1) Tower load values. Domestic single-circuit lines in heavy ice areas have accumulated rich operating experience, which indicates that tower load design values taken per the Heavy Ice Code are safe. The load values for the double-circuit tower in the heavy ice area therefore do not differ from the single-circuit case and are not repeated here.
2) Load combination study.
The double-circuit tower in the heavy ice area should be calculated for normal operation, broken-wire conditions (longitudinal unbalanced tension for bundled conductors), uneven icing conditions and installation conditions, and checked for earthquake and rare ice loads when necessary. The main load combinations are as follows.
(1) Normal operation:
A. Basic wind speed, no ice, no broken wires (including the combination of minimum vertical load and maximum horizontal load).
B. Maximum ice load with the corresponding wind speed and temperature, no broken wires.
C. Minimum temperature, no ice, no wind, no broken wires (for terminal and angle towers).
(2) Broken-wire conditions, calculated at −5 °C with ice and no wind:
Suspension towers: in the same span, any two phases broken for single conductors (longitudinal unbalanced tension in any two phases for bundled conductors); or, in the same span, one ground wire broken plus any one phase broken for single conductors (longitudinal unbalanced tension in any one phase for bundled conductors).
Tension towers: in the same span, any two phase conductors and one ground wire broken; or any one ground wire and any one phase conductor broken in the same span.
(3) Uneven icing, calculated with no broken wires, at −5 °C, uneven ice, and a wind speed of 10 m/s:
A. All conductors and ground wires carry unbalanced tensions in the same direction, so that the tower withstands the maximum bending moment.
B. All conductors and ground wires carry unbalanced tensions in different directions, so that the tower withstands the maximum torque.
4.3 Tower type study
In a heavy ice area, the conductors of a double-circuit suspension tower can be arranged horizontally, triangularly or vertically. A comparison of the three arrangements is shown in Table A1 of the Appendix.
The heavy ice area of this project lies in high mountain terrain with steep slopes. As shown in Appendix Table A1, the vertically arranged tower, being the most compact, is least affected by the terrain while still meeting ground clearance requirements. At the same time, because of uneven icing and ice-shedding in the heavy ice zone, the tower is subjected to large bending moments and torques and develops large longitudinal deformations; to ensure safety, controlling longitudinal deformation becomes an important goal of tower design in heavy ice zones. The double-circuit tower in the heavy ice area therefore gives priority to the vertical arrangement.
In addition, the tower heads of vertically arranged suspension towers can be further divided into "V-I-V" and "3V" arrangements, as shown in Appendix Table A2. According to the calculation results, for the same clearance the calculated weight of the "V-I-V" suspension tower is slightly heavier than that of the "3V" tower, but because its middle cross-arm is shorter than in the "3V" arrangement, the displacement of the conductor attachment points is much smaller. The "V-I-V" vertical arrangement is therefore adopted for the heavy ice area.
4.4 Full-scale tower test
The double-circuit design on the same tower in a heavy ice area is a first in China. To verify whether the overall structure of the tower is reasonable and safe, and whether it is consistent with the theoretical design, a full-scale tower test was required. With the support of the relevant research institutes, the test tower successfully passed the loading and overload tests; during the tests the tower did not develop excessive displacement, and its bearing capacity and stiffness met the load requirements.
The experimental results were basically consistent with the theoretical analysis [23].
5 Engineering applications
The 20 mm heavy icing section of the Maomao line, the first 500 kV double-circuit line on the same tower in China, is 2×7.581 km long. The whole heavy icing section is designed as a double-circuit line on the same tower; the conductors are 4×LGJ-500/45 steel-cored aluminum stranded wire and the ground wires are OPGW-150 optical cable. Four tower types were planned and designed for the heavy icing double-circuit section: two suspension types and two tension types. A total of 28 double-circuit towers were erected in the heavy icing area, of which 10 are suspension towers and 18 are tension towers. The terrain of the heavy icing area consists of hills and high mountain ridges, with elevations between 700 and 2 200 m. The project was put into operation in December 2013 and has remained safe and stable during periods of severe local icing.
6 Conclusion
Based on the Maomao line, the first 500 kV double-circuit line on the same tower in a 20 mm heavy icing area in China, and on an in-depth analysis of the dynamic response of ice shedding on transmission lines in heavy icing areas, the design ideas and methods for such lines were studied. The following conclusions are drawn for reference in the design of subsequent double-circuit lines in heavy icing areas:
1) The ice-shedding jump height and swing distance of the conductors are proportional to the number of continuous spans, the span length, the ice-shedding rate, and the ice thickness, and inversely proportional to the conductor cross-section.
2) The unbalanced tension during ice shedding is less than the static standard value specified in the regulations, and the horizontal and vertical dynamic loads are smaller than the design load values, so the tower loads can be taken according to the heavy icing code.
3) When selecting the route of a double-circuit line on the same tower in a heavy icing area, attention should be paid to factors such as the number of continuous spans, the length of the tension section, the span length, the span difference, tower siting conditions, the boundary of the icing area, and micro-meteorology.
4) The icing withstand voltage may become the controlling condition for selecting the number of insulators on lines in high-altitude heavy icing areas.
5) The horizontal offset distance and the vertical phase spacing are the controlling conditions for the tower head dimensions of double-circuit towers in heavy icing areas.
6) The load calculation standard for double-circuit towers on the same tower in a heavy icing area should not be lower than that for single-circuit lines in heavy icing areas or for double-circuit lines in light and medium icing areas.
7) A vertical conductor arrangement is recommended for double-circuit lines on the same tower in heavy icing areas, with the "V-I-V" arrangement preferred for suspension towers.
8) The rationality and safety of the structural arrangement of the tower were verified by a full-scale tower test.
References
[1] Morgan V T, Swift D A. Jump height of overhead-line conductors after the sudden release of ice loads[J]. Proceedings of IEE, 1964, 111(10): 1736-1746.
[2] Stewart J R. Ice as an influence on compact line phase spacing[C]//Proceedings of IWAIS, Hanover, New Hampshire, 1983: 77-82.
[3] McClure G, Rousselet J, Beauchemin R. Simulation of ice-shedding on electrical transmission lines using ADINA[J]. Computers and Structures, 1993(47): 523-536.
[4] Roshan Fekr M, McClure G. Numerical modeling of the dynamic response of ice-shedding on electric transmission lines[J]. Atmospheric Research, 1998(46): 1-11.
[5] Kalman T, Farzaneh M, McClure G. Numerical analysis of the dynamic effects of shock-load-induced ice shedding on overhead ground wires[J]. Computers & Structures, 2007(85): 375-384.
[6] Liu Heyun. Study on the mechanism of icing and ice shedding of overhead wires[D]. Wuhan: Huazhong University of Science and Technology, 2001.
[7] Yan Zhitao, Li Zhengliang, Wang Zhisong. Simulation of ice-shedding of transmission tower-line system in heavy icing regions[J]. Engineering Mechanics, 2010, 27(1): 209-214 (in Chinese).
[8] Chen Yong, Hu Wei, Wang Liming, et al. Research on ice-shedding characteristic of icing conductor[J]. Proceedings of the CSEE, 2009, 29(28): 115-121 (in Chinese).
[9] Hu Wei, Chen Yong, Cai Wei, et al. Ice-shedding characteristic of 1 000 kV AC double circuit transmission line on the same tower[J]. High Voltage Engineering, 2010, 36(1): 275-280 (in Chinese).
[10] Xia Zhengchun. Research on galloping and ice shedding of UHV transmission lines[D]. Wuhan: Huazhong University of Science and Technology, 2008.
[11] Han Junke, Yang Jingbo, Yang Fengli, et al. Analysis on dynamic responses of ice shedding-caused drastic conductor vibration occurred in EHV/UHV multi-circuit transmission lines on same tower[J]. Power System Technology, 2012, 36(9): 61-66 (in Chinese).
[12] Ye Ziwan, Guo Yong, Shang Kui. Research on ice accretion and shedding of ice-coating on transmission lines located in hillside areas[J]. Power System Technology, 2013, 37(7): 1959-1964 (in Chinese).
[13] Shen Guohui, Xu Liang, Xu Xiaobin, et al. Research on ice-shedding of bundle conductor-spacers system[J]. Power System Technology, 2012, 36(1): 201-206 (in Chinese).
[14] Chen Kequan, Yan Bo, Guo Yueming, et al. Dynamic responses of ultra-high voltage transmission line ice shedding[J]. Journal of Chongqing University, 2009, 32(5): 544-549 (in Chinese).
[15] Yi Wenyuan. Numerical simulation study on the dynamic response of ice shedding of a UHV transmission tower-line system[D]. Chongqing: Chongqing University, 2010.
[16] Yan Bo, Guo Yueming, Chen Kequan, et al. Formula for jump height of overhead transmission lines after ice shedding[J]. Journal of Chongqing University, 2009, 32(11): 1306-1310 (in Chinese).
[17] Yang Fengli, Yang Jingbo. Analysis on loads from ice shedding conductors in heavy icing areas[J]. Journal of Vibration and Shock, 2013, 32(5): 10-15 (in Chinese).
[18] GB 50545-2010. Code for design of 110 kV-750 kV overhead transmission line[S]. Beijing: China Planning Press, 2010.
[19] Zhao Yuzhe, Li Li, Deng Wei. Numerical simulation of dynamic response of ice-shedding on transmission lines[J]. Water Resources and Power, 2013, 31(9): 205-209 (in Chinese).
[20] DL/T 5440-2009. Technical code for designing of overhead transmission line in heavy icing area[S]. Beijing: China Electric Power Press, 2009.
[21] DL/T 620-1997. Overvoltage protection and insulation coordination for AC electrical installations[S]. Beijing: China Electric Power Press, 1997.
[22] Zhang Diansheng. Design manual of high voltage transmission lines for electric power engineering[M]. 2nd ed. Beijing: China Electric Power Press, 2004.
[23] Wang Jiangtao. Analysis of the tower design for a double-circuit line on the same tower in a heavy icing area[J]. Urban Construction, 2013, 4(10): 131-132.
Tab. A1 Comparison of the three conductor arrangements
│ Item │ Horizontal arrangement │ Triangular arrangement │ Vertical arrangement │
│ Tower height │ low │ medium │ high │
│ Tower weight │ heavy │ light │ medium │
│ Corridor width │ wide │ medium │ narrow │
│ Deformation │ large │ medium │ small │
│ Construction perspective │ strongly affected by terrain │ moderately affected by terrain │ slightly affected by terrain │
Tab. A2 Comparison of "V-I-V" and "3V" type towers
│ Item │ 3V arrangement │ V-I-V arrangement │ Ratio │
│ Tower height / m │ 39 │ 41.2 │ 1.06 │
│ Middle crossarm length / m │ 21.46 │ 15.66 │ 0.73 │
│ Calculated tower weight / t │ 64.2 │ 64.8 │ 1.01 │
│ Edge clearance / m │ 31.3 │ 31.3 │ 1.00 │
│ Conductor attachment point displacement / mm │ 333 │ 236 │ 0.71 │
│ Tower displacement / mm │ 31.3 │ 31.3 │ 1.00 │
Educación matemática y crisis social en Chile (Mathematics education and the social crisis in Chile)
Dear readers, I share with the community the following article, published in the online newspaper El Mostrador, Chile.
For a fairer society: Mathematics education and the social crisis in Chile
Hearing a future teacher answer that the importance of teaching statistics in school is that it appears in the ministry's plans and programs is part of what cannot be tolerated in teacher training; although at least it is no longer taken for granted that the plans and programs are unquestionable commands. What is mathematics for? It is a standard question in the debates held in Mathematics Didactics classes with future teachers. From a sociocultural perspective, the answer should be clear: it serves to build a fairer society. For example, to understand a proportionality such as: the salary of a parliamentarian is 32 times the country's minimum wage. 32 times! It means that a person who earns the minimum wage must work 32 months to earn what a parliamentarian earns in one month. And whereas a worker must clock in and out, many politicians, besides arriving late, sometimes do not show up at all.
Judging from the answers given before the analysis of scientific documents in teacher-training classrooms, the school mathematics instruction received by future teachers has undoubtedly been ineffectual. In practice, almost all mathematical instruction is delivered as formulas to memorize, without understanding or direct application, even in university settings. Still, in the training of current teachers there is hope. After analyzing in depth, together with future teachers, the real reasons for learning and teaching mathematics, the question emerges naturally: "so that they do not fool us, teacher". This is what has been done for years to each inhabitant of our community in Chile. They have fooled us! Because there are some who know how to use this knowledge. People believed those who knew how to run the country.
It is clear that many Chilean citizens could not study, a large majority for lack of opportunities. Earning one's bread in Chile has always been difficult. So has studying. There are Chileans still paying off university loans and the solidarity fund, families indebted by their children's or parents' illnesses, and miserable pensions that border on the grotesque. The increase in schooling and the massification of university studies have not had the desired effect: people have studied because they wanted a degree that would bring them income. This is not the case of the brave who study to become mathematics teachers. With tremendous potential, they study out of vocation, when they could have opted for careers such as engineering, where the salary is higher. Their vision was always to teach, and they enrolled in that program as their first choice.
Teacher training in our country has been strengthened by the incorporation of doctoral training in Mathematics Education research. Of course, research funds do not always go toward this group, which would be logical, just as it would be logical to have an education minister who is a teacher. In a country where resources follow surnames and prizes, and where opinions are requested from those who have not lived the profession or do not understand its nature, we are sent to hold a bingo to raise funds, and not only for education. Chile is a country where, with certainty, there is a bingo or a completada (an event where Chilean hot dogs are sold to raise funds) every weekend.
Among other things, research in mathematics education indicates that if what is studied is not given an application, interest in a mathematical topic will be very precarious. A teacher should always have an answer for the student who asks: and what is this for? If, for example, the need to learn percentages were taught in context, we would not find mistakes in advertising or in the news. Ads that say "0% sugar free" are ridiculous and totally wrong.
The person who buys a "0% sugar-free" yogurt with the idea of not getting fat can already go buy clothes a size bigger. There is another issue that is barely dealt with in mathematics education: the relationships between different areas of mathematics. For example, one can understand what the square and the cube of a binomial mean geometrically. Many of us have discussed, with students and at specialized conferences, the literal prayers into which the study of mathematics has been turned in some schools. For example: "a plus b squared equals a squared plus two ab plus b squared". Only the amen is missing at the end. Students already take it as part of their daily prayer that they must study algebra, without even imagining that it has a great deal to do with geometry. And they could finish their twelve games, evoking Los Prisioneros, in which others were taught secrets that they were not. Of course, because it is not a prayer: there is meaning there. Behind that "prayer" is plane geometry and the calculation of areas; and for the cube of a binomial there is solid geometry, with the calculation of volume.
Here are just some of the many reasons to learn mathematics in school, and good reasons to offer to children from those who are trained to teach mathematics:
1) Analyze and interpret graphs. Among the graphs we should frequently read, and hopefully analyze, are the bar charts that come with the electricity, water, and gas bills. These show our monthly consumption, and if we keep the receipts, on paper or digitally, we can compare the increases. In a house where one person lives alone, it is striking to see the bill rise from 9,000 CLP to 15,000 CLP for the same consumption. This is what reading graphs gives us: the ability to challenge charges on mathematical grounds. Another example, perhaps less common, is understanding a chart such as a baby's growth curve. Everyone trusts what the doctor says.
But what if the doctor does not really know how to interpret it and only applies the general rule?
2) Not being lied to with badly made graphs, such as the one about crimes under Piñera. That graph is already history, perhaps badly printed, but it was presented to us on television, errors included. Those who noticed were those with minimal mathematical literacy: they knew how to read a graph and understand it.
3) Understand how to choose the best mortgage loan. Be able to understand what the CAE means. Be able to determine what the banks' real offers are when choosing credit, if you can afford it. Be able to compare interest rates and choose the lowest. Estimate for how many years a house will keep you in debt, and know whether or not you can afford such things on the salary you earn.
4) Determine whether a place is overcrowded. Lately in Chile, spaces of extreme overcrowding have been found. Lack of scruples and xenophobia have seized the desire of some compatriots to make more money: homes have been found with rooms the size of a "smurf's house" holding up to 6 foreigners, on top of the vertical ghettos about which construction firms have much to answer for. One can also ask, from the heart, how many square meters are needed to live with dignity. This is pure mathematics and social awareness; it is looking at the other as a legitimate other.
5) Compare volumes and cube numbers. How many times have we asked ourselves, whatever our social class of origin, whether a piece of furniture, a TV, or anything else would fit within a space? This is a skill that must be worked on from an early age, in preschool. Ask yourself: what is the first thing you see at birth? The 3D space in which you move, not the plane. And yet the first thing taught is the plane.
6) Convert from one measurement system to another, for example when you want to follow a recipe.
With internet access, people read recipes given in ounces; this requires a conversion. The same goes for knowing whether a weight is healthy: if it is explained to you in pounds, you know you must convert it. True, this is easy to do on the internet, but being clear about these elements helps, just as it helps to know that a meter is 100 centimeters.
7) Estimate quantities. For example: how do we know how many people attended the peaceful demonstrations in Santiago de Chile in recent days? This is an interesting problem that relates the number of people to the area they occupy. Or did you think they counted people one by one?
8) Have basic historical notions of the use of statistics in medicine, a compulsory topic in Public Health. John Snow and Florence Nightingale were great pioneers: a doctor and a nurse who played a very interesting role in medicine. In 1854 the most violent cholera outbreak in England occurred in London. With a simple representation on a map, John Snow was able to determine that most of the deaths had occurred in the vicinity of Broad Street, around the water pump that supplied the population. For her part, Florence Nightingale worked as a nurse in the Scutari hospital during the Crimean War (1853-1856), where she discovered that the high death rate of soldiers was due to infectious diseases. She was able to determine this by making a graph based on her observations.
9) Do not play KINO or LOTO (lottery games). Chile could be called a country addicted to these games. The desire to escape poverty is what drives these companies, which know that most people do not know that the probability of winning is very low. It is enough to calculate a few probabilities to see that it is not worth playing.
There could be many more reasons here; it is the subject of a book I want to write (I expect editorial offers).
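The arithmetic behind point 9 is quick to check. A minimal sketch, assuming a hypothetical game in which the player picks 6 numbers out of 41 (the actual rules of KINO and LOTO may differ):

```python
from math import comb

def jackpot_odds(pool: int, picks: int) -> int:
    """Number of equally likely tickets; 1/odds is the jackpot probability."""
    return comb(pool, picks)

# Hypothetical 6-of-41 game: nearly 4.5 million combinations, so one
# ticket's chance of the jackpot is about 1 in 4,496,388.
tickets = jackpot_odds(41, 6)
print(tickets)  # 4496388
```

A few lines of counting are enough to see that the expected return on a ticket is far below its price, which is precisely the point the article makes.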
A mathematics education supported by the work of researchers trained in the field could become one of the best. Funding is still scarce. This country will grow the more we are able to generate our own advanced research in subjects as radically fundamental as education, and especially mathematics education.
Pro-gender equality, mathematician, Mathematics Didactics, singer, guitar player, 2 children, cheerful, hard-working, academic at Usach. Member of CIAEM-IACME and of RedI.
Kg To Moles Calculator - Calculator Wow
Kg To Moles Calculator
In chemistry, the conversion from mass to moles is a fundamental equation that reveals the quantitative relationship between substances. The Kg To Moles Calculator acts as a digital alchemist's assistant, offering a swift and precise means to turn a mass in kilograms into an amount in moles. This article introduces the calculator, sheds light on its importance in chemistry, guides users through its operation, and demystifies the nuances of this conversion through FAQs.
The Significance of Kg To Moles Conversion
1. Chemical Stoichiometry
The conversion from kilograms to moles is essential in chemical stoichiometry, the study of quantitative relationships in chemical reactions. It allows chemists to balance equations, predict the quantities of reactants and products, and understand the principles underlying molecular interactions.
2. Precise Formulation
Chemical reactions involve precise ratios of reactants and products. The Kg To Moles Calculator ensures accuracy in formulating reactions by providing the molar equivalent of a given mass, a crucial parameter in reaction design.
3. Lab Preparations
In laboratory settings, where precision is paramount, converting mass to moles aids in preparing solutions with specific concentrations. Chemists can accurately measure and mix substances to achieve desired molar quantities.
How to Use the Kg To Moles Calculator
Using the Kg To Moles Calculator is straightforward:
1. Input Mass (kg): Enter the mass of the substance in kilograms that you want to convert.
2. Input Molar Mass (g/mol): Specify the molar mass of the substance in grams per mole.
3. Click Calculate: Activate the calculator to perform the conversion.
4. Receive Result: The calculator returns the equivalent amount in moles, computed as moles = (mass in kg × 1000) / molar mass in g/mol.
10 FAQs About Kg To Moles Conversion
1. Why Convert Mass to Moles in Chemistry?
Converting mass to moles is crucial for precise calculations in chemical reactions, helping chemists determine reactant and product quantities.
2. What Is the Role of Molar Mass in the Conversion?
Molar mass is the mass of one mole of a substance. It is integral to the conversion formula, establishing the relationship between mass and moles.
3. Is the Calculator Suitable for Different Substances?
Yes, the Kg To Moles Calculator is versatile and can be used for any substance, from elements to compounds, provided the molar mass is known.
4. Can the Calculator Handle Non-Integer Molar Mass Values?
Absolutely. The calculator accommodates non-integer molar mass values, providing accurate results for substances with complex molecular structures.
5. Why Is the Conversion Important for Reaction Balancing?
Balancing chemical equations requires an understanding of the stoichiometric coefficients, which are based on moles. The conversion facilitates accurate balancing.
6. Can I Use the Calculator for Educational Purposes?
Yes, the Kg To Moles Calculator is an excellent educational tool, aiding students in understanding the quantitative aspects of chemical reactions.
7. Does the Calculator Consider Temperature and Pressure?
No. The conversion depends only on mass and molar mass, so temperature and pressure do not enter the formula.
8. Is the Tool Beneficial for Pharmacists and Drug Formulation?
Yes, pharmacists can use the calculator in drug formulation to determine the precise amounts of chemical compounds needed for pharmaceutical preparations.
9. Can the Calculator Be Applied to Gases and Solutions?
Yes, the calculator applies to gases and solutions alike, since moles are a standardized unit for all states of matter.
10. What Other Units Can I Convert Kg to Using the Calculator?
The calculator specifically converts kilograms to moles. For other units, different conversion tools may be needed.
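The conversion itself is a one-liner: convert kilograms to grams, then divide by the molar mass. A minimal sketch (the function name is ours, not the calculator's):

```python
def kg_to_moles(mass_kg: float, molar_mass_g_per_mol: float) -> float:
    """Convert a mass in kilograms to moles: n = m[g] / M[g/mol]."""
    if molar_mass_g_per_mol <= 0:
        raise ValueError("molar mass must be positive")
    return (mass_kg * 1000.0) / molar_mass_g_per_mol

# 1 kg of water (molar mass ~18.015 g/mol) is about 55.5 mol.
print(round(kg_to_moles(1.0, 18.015), 1))  # 55.5
```

Note that, as FAQ 7 says, nothing here depends on temperature or pressure; the conversion is purely between mass and amount of substance.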
As we delve into the alchemical realm of chemistry, the Kg To Moles Calculator stands as a catalyst, facilitating the transformation of mass into moles with precision and ease. Beyond the calculations, it is a key that unlocks the mysteries of chemical equations, empowering scientists, students, and enthusiasts alike to explore the quantitative essence of molecular interactions. So, whether you’re a chemistry student embarking on a stoichiometric journey or a researcher navigating the intricacies of a laboratory experiment, let the Kg To Moles Calculator be your guide—a digital flask, bubbling with the transformative magic that converts the mundane weight of matter into the mystical realm of moles.
Math Adding And Subtracting Negative Numbers Worksheet
Math adding and subtracting negative numbers worksheets serve as foundational tools in mathematics, providing an organized yet flexible way for students to explore and master mathematical principles. These worksheets offer a structured approach to understanding numbers, building a solid foundation on which mathematical proficiency thrives. From the simplest counting exercises to the intricacies of advanced calculations, they cater to students of varied ages and skill levels.
Introducing the Essence of Math Adding And Subtracting Negative Numbers Worksheet
Related: 7th Grade Math Adding And Subtracting Negative Numbers Worksheet, Math Worksheets Adding And Subtracting Negative Numbers, Adding And Subtracting Negative Numbers Examples, Adding And Subtracting Negative Numbers Rules, Adding And Subtracting Negative Numbers Explained, Adding And Subtracting Negative Number Worksheets
Welcome to our Negative Numbers Worksheets hub page. On this page you will find links to all of our worksheets and resources about negative numbers. Need help practicing adding, subtracting, multiplying, or dividing negative numbers? You've come to the right place.
Here are the rules for adding or subtracting negative numbers: adding a positive number is addition, e.g. 4 + (+2) = 4 + 2 = 6; subtracting a negative number is addition, e.g. 4 − (−2) = 4 + 2 = 6; adding a negative number is subtraction, e.g. 4 + (−2) = 4 − 2 = 2; subtracting a positive number is subtraction, e.g. 4 − (+2) = 4 − 2 = 2.
At their core, these worksheets are vehicles for conceptual understanding. They cover a range of mathematical principles, guiding learners through numbers with a collection of engaging and purposeful exercises.
These worksheets go beyond rote learning, encouraging active involvement and promoting an intuitive understanding of numerical relationships.
Nurturing Number Sense and Reasoning
Negative Numbers Addition And Subtraction Worksheets (Worksheets Master)
The worksheets on this page introduce adding and subtracting negative numbers, as well as multiplying and dividing them. The initial sets deal with small integers before moving on to multi-digit multiplication and long division with negatives.
Example: 6 + (−3) = 3 is really saying "positive 6 plus negative 3 equals positive 3". We could write it as (+6) + (−3) = (+3). The last two examples showed us that taking away balloons (subtracting a positive) or adding weights (adding a negative) both make the basket go down, so they have the same result.
The heart of these worksheets lies in cultivating number sense: a deep comprehension of what numbers mean and how they relate. They encourage exploration, inviting students to study operations, decipher patterns, and unlock the mysteries of sequences. Through thought-provoking challenges and practical problems, these worksheets become gateways to sharpening reasoning skills, supporting the analytical minds of budding mathematicians.
From Theory to Real-World Application
Integers (Senior Block)
Let's extend our addition and subtraction understanding to include negative numbers. Whether we need temperatures below zero, altitudes below sea level, or decreases from any other reference point, negative numbers give us a way to express them. (Khan Academy is a nonprofit with the mission of providing a free, world-class education.)
These worksheets function as channels bridging academic abstractions with the tangible realities of daily life. By infusing practical scenarios into mathematical exercises, learners see the significance of numbers in their surroundings. From budgeting and measurement conversions to understanding statistical data, these worksheets empower students to apply their mathematical prowess beyond the classroom.
Varied Tools and Techniques
Flexibility is inherent in these worksheets, which employ a range of instructional tools to accommodate different learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, these worksheets embrace inclusivity. They cross cultural boundaries, incorporating examples and problems that resonate with learners from varied backgrounds.
By integrating culturally relevant contexts, these worksheets promote an environment where every student feels represented and valued, strengthening their connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
These worksheets chart a course toward mathematical fluency. They instill perseverance, critical reasoning, and problem-solving skills: essential qualities not only in mathematics but in many aspects of life. They equip students to navigate the intricate terrain of numbers, nurturing a profound appreciation for the beauty and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological innovation, these worksheets adapt readily to digital platforms. Interactive interfaces and electronic resources complement conventional learning, offering immersive experiences that transcend spatial and temporal limits. This combination of traditional techniques with technological innovation heralds a promising era in education, fostering a more dynamic and engaging learning atmosphere.
Verdict: Embracing the Magic of Numbers
These worksheets capture the magic inherent in mathematics: a journey of exploration, discovery, and proficiency. They go beyond conventional pedagogy, acting as catalysts for sparking curiosity and inquiry. With them, students begin an odyssey, unlocking the world of numbers one problem, one solution, at a time.
Here are the rules for adding or subtracting negative numbers:

Adding a positive number is addition, e.g. 4 + (+2) = 4 + 2 = 6.
Subtracting a negative number is addition, e.g. 4 − (−2) = 4 + 2 = 6.
Adding a negative number is subtraction, e.g. 4 + (−2) = 4 − 2 = 2.
Subtracting a positive number is subtraction, e.g. 4 − (+2) = 4 − 2 = 2.

Adding and Subtracting Negative Numbers Worksheets provide additional practice of addition and subtraction to students. In these worksheets, students are asked to add or subtract negative numbers. Additionally, these worksheets teach the concept of negative numbers to kids using various types of questions.
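The four rules above can be checked mechanically. A tiny Python sketch (my own illustration, not part of any worksheet) encodes each rule as an assertion:

```python
# The four sign rules above, checked directly in Python.
# Each assertion mirrors one worksheet rule.
assert 4 + 2 == 6      # adding a positive number is addition
assert 4 - (-2) == 6   # subtracting a negative number is addition
assert 4 + (-2) == 2   # adding a negative number is subtraction
assert 4 - (+2) == 2   # subtracting a positive number is subtraction
```

Students can swap in their own integers to check worksheet answers the same way.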
Statistics and Probability - MSP

Course details: Statistics and Probability (MSP), Acad. year 2022/2023, Winter semester, 5 credits.

Summary of elementary concepts from probability theory and mathematical statistics. Limit theorems and their applications. Parameter estimation methods and their properties. Analysis of variance including post hoc analysis. Distribution tests, goodness-of-fit tests, regression analysis, regression model diagnostics, non-parametric methods, categorical data analysis. Markov decision processes and their analysis, randomized algorithms.

Credit+Examination (written) • 26 hrs lectures • 21 hrs exercises • 5 hrs projects • 70 pts final exam (written part) • 20 pts written tests (test part) • 10 pts projects

Subject specific learning outcomes and competences

Students will extend their knowledge of probability and statistics, especially in the following areas: • parameter estimates for a specific distribution • simultaneous testing of multiple parameters • hypothesis testing on distributions • regression analysis including regression modeling • nonparametric methods • construction of parameter estimates • Bayesian statistics • Markov processes • randomized algorithms

Introduction of further concepts, methods and algorithms of probability theory, descriptive and mathematical statistics. Development of probability and statistical topics from previous courses. Formation of a stochastic way of thinking leading to the formulation of mathematical models, with emphasis on the information fields. The development of society drives the expansion of technology, in particular information technology. To control technology, it is necessary to process information, i.e., data. Nowadays there are many devices that collect data automatically, so we have a large amount of data that needs to be processed. Statistical methods are one of the most important means of processing and sorting data, including their analysis.
This allows us to obtain from the data the information needed for evaluation and control.

Prerequisite knowledge and skills

Foundations of differential and integral calculus. Foundations of descriptive statistics, probability theory and mathematical statistics.

• Anděl, Jiří. Základy matematické statistiky. 3rd ed., Praha: Matfyzpress, 2011. ISBN 978-80-7378-001-2.
• Feller, W. An Introduction to Probability Theory and its Applications. J. Wiley, New York, 1957. ISBN 99-00-00147-X.
• Hogg, R.V., McKean, J.W. and Craig, A.T. Introduction to Mathematical Statistics. 7th ed., Macmillan Publishing Co., New York, 2012. ISBN-13: 978-0321795434.
• Zvára, Karel. Regrese. 1st ed., Praha: Matfyzpress, 2008. ISBN 978-80-7378-041-8.
• Meloun, M., Militký, J. Statistické zpracování experimentálních dat. PLUS, 1994.
• Bertsekas, D. P., Tsitsiklis, J. N. Introduction to Probability. Athena Scientific, 2008.

Syllabus of lectures

1. Markov processes and their analysis. 2. Markov decision processes and their basic analysis. 3. Introduction to randomized algorithms and their use (Monte Carlo, Las Vegas, applications). 4. Summary and recall of knowledge and methods used in the IPT subject. An outline of other areas of probability and statistics that will be covered. 5. Extension of hypothesis tests for binomial and normal distributions. 6. Analysis of variance (simple sorting, ANOVA), post hoc analysis. 7. Regression analysis. Creating a regression model. Testing hypotheses about regression model parameters. Comparison of regression models. Diagnostics. 8. Distribution tests. 9. Estimation of parameters using the method of moments and the maximum likelihood method. 10. Bayesian approach and construction of Bayesian estimates. 11. Nonparametric methods of testing statistical hypotheses - part 1. 12. Nonparametric methods of testing statistical hypotheses - part 2. 13. Analysis of categorical data. Contingency tables. Independence test. Four-field tables. Fisher's exact test.
Syllabus of numerical exercises 1. Application and analysis of Markov processes. 2. Basic application and analysis of Markov decision processes. 3. Design and analysis of basic randomized algorithms. 4. Review of examples discussed in the IPT subject. 5. Hypothesis tests for binomial and normal distributions. 6. Project assignment; analysis of variance, post hoc analysis. 7. Regression analysis. 8. Distribution tests, goodness-of-fit tests. 9. The method of moments and the maximum likelihood method. 10. Bayesian estimates. 11. Nonparametric methods of testing statistical hypotheses - part 1. 12. Nonparametric methods of testing statistical hypotheses - part 2. 13. Analysis of categorical data. Contingency tables. Four-field tables.

Syllabus - others, projects and individual work of students

1. Usage of tools for solving statistical problems (data processing and interpretation).

Two tests will be written during the semester, in the 5th and 10th week. The exact term will be specified by the lecturer. The test duration is 90 minutes. Each test is evaluated with 0-10 points. Project evaluation: 0-10 points. Final written exam: 0-70 points. Students have to achieve at least 30 points on the final exam, otherwise the exam is assessed with 0 points. Participation in lectures in this subject is not monitored. Participation in the exercises is compulsory; during the semester two absences are tolerated. Replacement of missed lessons is determined by the exercise instructor. To be awarded credit, students must fulfil the attendance conditions and achieve in total at least 15 points from the tests and the project.

Course inclusion in study plans

• Programme MITAI, field NADE, NBIO, NCPS, NEMB, NEMB up to 2021/22, NGRI, NHPC, NIDE, NISD, NISY, NISY up to 2020/21, NMAL, NMAT, NNET, NSEC, NSEN, NSPE, NVER, NVIZ, 1st year of study, Compulsory
Bayes in the World II: Million Pound Drop Embarrassing update: as pointed out by Vladimir Nesov in the comments, all of my quantitative points below are incorrect. To maximize expected winnings, you should bet on whichever alternative you judge to be most likely. If you have a so-called logarithmic utility function — which already has the property of growing faster for small amounts than large — you should bet proportional to your odds on each answer. In fact, it’s exactly arguments like these that lead many to conclude that the logarithmic utility function is in some sense “correct”. So, in order to be led to betting more on the low-probability choices, one needs a utiltity that changes even faster for small amounts and slower for large amounts. But I disagree that this is “implausible” — if I think that is the best strategy to use, I should adjust my utility function, not change my strategy to match one that has been externally imposed. Just like probabilities, utility functions encode our preferences. Of course, I should endeavor to be consistent, to always use the same utility function, at least in the same circumstances, taking into account what economists call “externalities“. Anyway, all of this goes to show that I shouldn’t write long, technical posts after the office Christmas party…. The original post follows, mistakes included. An even more unlikely place to find Bayesian inspiration was Channel 4’s otherwise insipid game show, “The Million Pound Drop“. In the version I saw, B-list celebs start out with a million pounds (sterling), and are asked a series of multiple-choice questions. For each one, they can bet any fraction of their remaining money on any set of answers; any money bet on wrong answers is lost (we’ll ignore the one caveat, that the contestants must wager no money on at least one answer, which means there’s always the chance that they will lose the entire stake). Is there a best strategy for this game? 
Obviously, the overall goal is to maximize the actual winnings at the end of the series of questions. In the simplest example, let’s say a question is “What year did England last win the football World Cup?” with possible answers “1912”, “1949”, “1966”, and “never”. In this case (assuming you know the answer), the only sensible course is to bet everything on “1966”. Now, let’s say that the question is “When did the Chicago Bulls last win an NBA title?” with possible answers “1953”, “1997”, “1998”, “2009”. The contestants, being fans of Michael Jordan, know that it’s either 1997 or 1998, but aren’t sure which — it’s a complete toss-up between the two. Again in this case, the strategy is clear: bet the same amount on each of the two — the expected winning is half of your stake no matter what. (The answer is 1998.) But now let’s make it a bit more complicated: the question is “Who was the last American to win a gold medal in Olympic Decathlon?” with answers “Bruce Jenner”, “Brian Clay”, “Jim Thorpe”, and “Jesse Owens”. Well, I remember that Jenner won in the 70s, and that Thorpe and Owens predate that by decades, so the only possibilities are Jenner and Clay, whom I’ve never heard of. So I’m pretty sure the answer is Jenner, but I’m by no means certain: let’s say that I’m 99:1 in favor of Jenner over Clay. In order to maximize my expected winnings, I should bet 99 times as much on Jenner as Clay. But there’s a problem here: if it’s Clay, I end up with only one percent of my initial stake, and that one percent — which I have to go on and play more rounds with — is almost too small to be useful.
This means that I don’t really want to maximize my expected winnings, but rather something that economists and statisticians call the “utility function“ (or conversely, to minimize the loss function); these are functions which describe how useful some amount of winnings is to me: a thousand dollars is more than a thousand times as useful as one dollar, but a million dollars is less than twice as useful as half a million dollars, at least in this context. So in this case, a small amount of winnings is less useful than one might naively expect, and the utility function should reflect that by growing faster for small amounts and slower for larger amounts — I should perhaps bet ten percent on Clay. If it’s Jenner, I still get 90% of my stake, but if it’s Clay, I end up with a more-useful 10%. (The answer is Clay, by the way.) This is the branch of statistics and mathematics called decision theory: how we go from probabilities to actions. It comes into play when we don’t want to just report probabilities, but actually act on them: whether to prescribe a drug, perform a surgical procedure, or build a sea-wall against a possible flood. In each of these cases, in addition to knowing the efficacy of the action, we need to understand its utility: if a flood is 1% likely over the next century, and a sea-wall would cost one million pounds but would save one billion in property damage and 100 lives if the flood occurred, we need to compare spending a million now against saving a billion later (taking the “nonlinear” effects above into account), complicated further by the loss from the even more tragic possibilities. One hundred fewer deaths has the same utility as some amount of money saved, but I am glad I’m not on the panel that has to make that assignment. It is important to point out, however, that whatever decision is made, by whatever means, it is equivalent to some particular set of utilities, so we may as well be explicit about it.
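To make the flood arithmetic concrete, here is a minimal sketch using only the illustrative numbers from the text and a purely linear utility (so it deliberately ignores the nonlinear effects and the value of lives, which are flagged above as the hard part):

```python
# Sea-wall decision with the text's illustrative numbers and a linear
# utility: build if the expected damage averted exceeds the cost.
p_flood = 0.01                    # chance of a flood over the century
wall_cost = 1_000_000             # pounds spent now
damage_averted = 1_000_000_000    # pounds saved if the flood occurs

expected_saving = p_flood * damage_averted  # about 10,000,000 in expectation
build_wall = expected_saving > wall_cost    # expected saving exceeds cost
```

With a nonlinear utility, the comparison would instead be between the utilities of the two wealth outcomes, not the raw pound amounts.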
Happily, these sorts of questions tend to arise less in the physical sciences, where probabilistic results are allowed, although the same considerations arise at a higher level: when making funding decisions.

2 responses to “Bayes in the World II: Million Pound Drop”

“In order to maximize my expected winnings, I should bet 99 times as much on Jenner as Clay.” No, to maximize expected money, you should bet everything on Jenner. Even if you were 51% certain it’s Jenner and 49% certain it’s Clay, you’d still need to bet everything on Jenner to maximize expected money. The value of a marginal dollar is proportional to the probability of the option, no matter what the current distribution of the bet, so you pump everything into the highest-probability option. If your utility is logarithmic in the amount of money, then you bet 99 times as much on Jenner as on Clay (see “proper scoring rules“). “So in this case, a small amount of winnings is less useful than one might naively expect, and the utility function should reflect that by growing faster for small amounts and slower for larger amounts — I should perhaps bet ten percent on Clay. If it’s Jenner, I still get 90% of my stake, but if it’s Clay, I end up with a more-useful 10%” In order for the bet of 90:10 to be rational, you have to value money even less than the logarithm of its amount, which seems implausible. The problem with your original bet was the overconfident expression of your state of knowledge (99% certainty), not the method of distributing the bet. “So, in order to be led to betting more on the low-probability choices, one needs a utility that changes even faster for small amounts and slower for large amounts.
But I disagree that this is “implausible” — if I think that is the best strategy to use, I should adjust my utility, not change my strategy to match some externally-imposed utility.” Decision theory would be useless if you always made decisions on intuitive grounds, and then figured out exactly which utility and probability values have to be used in order for the expected utility considerations to endorse the decision you’ve intuitively picked. The whole point of decision theory is to sometimes surprise you, to suggest decisions that don’t coincide with what you would’ve picked based on intuition alone. When a decision theory presents you with a suggested decision that disagrees with your intuition, it places into conflict the assumptions behind the decision theory, the reasons for picking the parameters of the decision problem (probabilities, utilities, maybe causal graphs), and the reasons for your intuitive judgment of the resulting decision. It’s not a given which of these gives. Maybe your intuition about the decision is strong enough to force you to revise the parameters of the decision problem; maybe your intuition behind the assignment of the parameters is strong enough to overrule your intuition about the decision. Maybe your intuitions about both the parameters and the decision overrule the reasons for believing that this particular decision theory can adequately model the situation.
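The disagreement above is easy to verify numerically. The sketch below (my own, not from the post or the comment) grid-searches the fraction of the stake to put on Jenner under the two utilities, using the post's 99:1 probabilities; it recovers "bet everything" for linear utility and "bet proportional to your probabilities" for logarithmic utility:

```python
import numpy as np

p = np.array([0.99, 0.01])                 # Jenner vs Clay, from the post
fracs = np.linspace(0.0, 1.0, 1001)        # fraction of the stake on Jenner

def expected_utility(f, utility):
    # wealth after the round: f if Jenner is right, 1 - f if Clay is
    payoffs = np.array([f, 1.0 - f])
    return float(np.sum(p * utility(payoffs)))

# Linear utility: expected money is maximized by betting everything
linear_best = fracs[np.argmax([expected_utility(f, lambda w: w) for f in fracs])]

# Log utility: search the interior to avoid log(0); the optimum is f = 0.99,
# i.e. betting in proportion to the probabilities
interior = fracs[1:-1]
log_best = interior[np.argmax([expected_utility(f, np.log) for f in interior])]
```

The same grid search with a utility that grows even faster than the logarithm near zero would shift the optimum further toward Clay, which is the behavior the updated post describes.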
Estimation of non-null SNP effect size distributions enables the detection of enriched genes underlying complex traits

Traditional univariate genome-wide association studies generate false positives and negatives due to difficulties distinguishing associated variants from variants with spurious nonzero effects that do not directly influence the trait. Recent efforts have been directed at identifying genes or signaling pathways enriched for mutations in quantitative traits or case-control studies, but these can be computationally costly and hampered by strict model assumptions. Here, we present gene-ε, a new approach for identifying statistical associations between sets of variants and quantitative traits. Our key insight is that enrichment studies on the gene-level are improved when we reformulate the genome-wide SNP-level null hypothesis to identify spurious small-to-intermediate SNP effects and classify them as non-causal. gene-ε efficiently identifies enriched genes under a variety of simulated genetic architectures, achieving greater than a 90% true positive rate at 1% false positive rate for polygenic traits. Lastly, we apply gene-ε to summary statistics derived from six quantitative traits using European-ancestry individuals in the UK Biobank, and identify enriched genes that are in biologically relevant pathways.

Author summary

Enrichment tests augment the standard univariate genome-wide association (GWA) framework by identifying groups of biologically interacting mutations that are enriched for associations with a trait of interest, beyond what is expected by chance. These analyses model local linkage disequilibrium (LD), allow many different mutations to be disease-causing across patients, and generate biologically interpretable hypotheses for disease mechanisms. However, existing enrichment analyses are hampered by high computational costs, and rely on GWA summary statistics despite the high false positive rate of the standard univariate GWA framework.
Here, we present the gene-level association framework gene-ε (pronounced “genie”), an empirical Bayesian approach for identifying statistical associations between sets of mutations and quantitative traits. The central innovation of gene-ε is reformulating the GWA null model to distinguish between (i) mutations that are statistically associated with the disease but are unlikely to directly influence it, and (ii) mutations that are most strongly associated with a disease of interest. We find that, with our reformulated SNP-level null hypothesis, our gene-level enrichment model outperforms existing enrichment methods in simulation studies and scales well for application to emerging biobank datasets. We apply gene-ε to six quantitative traits in the UK Biobank and recover novel and functionally validated gene-level associations. Citation: Cheng W, Ramachandran S, Crawford L (2020) Estimation of non-null SNP effect size distributions enables the detection of enriched genes underlying complex traits. PLoS Genet 16(6): e1008855. https://doi.org/10.1371/journal.pgen.1008855 Editor: Jonathan Marchini, Regeneron Genetics Center, UNITED STATES Received: November 13, 2019; Accepted: May 13, 2020; Published: June 15, 2020 Copyright: © 2020 Cheng et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: Source code implementing gene-ε and tutorials are available online (https://github.com/ramachandran-lab/genee). This research was conducted using the UK Biobank Resource (https:// www.ukbiobank.ac.uk) under Application Number 22419. Data can be accessed by direct application to the UK Biobank. 
Funding: This research was supported in part by US National Institutes of Health (NIH) grant R01 GM118652, National Science Foundation (NSF) CAREER award DBI-1452622, the Erling-Persson Family Foundation, and the Knut and Alice Wallenberg Foundation to S. Ramachandran. This research was also partly supported by grants P20GM109035 (COBRE Center for Computational Biology of Human Disease; PI Rand) and P20GM103645 (COBRE Center for Central Nervous; PI Sanes) from the NIH NIGMS, 2U10CA180794-06 from the NIH NCI and the Dana Farber Cancer Institute (PIs Gray and Gatsonis), as well as by an Alfred P. Sloan Research Fellowship awarded to L. Crawford. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of any of the funders or supporters. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Over the last decade, there has been an evolving debate about the types of insight genome-wide single-nucleotide polymorphism (SNP) genotype data offer into the genetic architecture of complex traits [1–5]. In the traditional genome-wide association (GWA) framework, individual SNPs are tested independently for association with a trait of interest. While this approach can have drawbacks [2, 3, 6], more recent approaches that combine SNPs within a region have gained power to detect biologically relevant genes and pathways enriched for correlations with complex traits [7–14]. Reconciling these two observations is crucial for biomedical genomics. In the traditional GWA model, each SNP is assumed to either (i) directly influence (or perfectly tag a variant that directly influences) the trait of interest; or (ii) have no effect on the trait at all (see Fig 1A).
Throughout this manuscript, for simplicity, we refer to SNPs under the former as “associated” and those under the latter as “non-associated”. These classifications are based on ordinary least squares (OLS) effect size estimates for each SNP in a regression framework, where the null hypothesis assumes that the true effects of non-associated SNPs are zero (H0: β_j = 0). The traditional GWA model is agnostic to trait architecture, and is underpowered with a high false-positive rate for “polygenic” traits, or traits which are generated by many mutations of small effect [5, …].

Fig 1. The effect sizes of “non-associated” (pink), “spurious non-associated” (red), and “associated” (blue) SNPs were drawn from normal distributions with successively larger variances. (A) The traditional GWA model of complex traits simply assumes SNPs are associated or non-associated. Under the corresponding null hypothesis, associated SNPs are likely to emit nonzero effect sizes while non-associated SNPs will have effect sizes of zero. When there are many causal variants, we refer to the traits as polygenic. (B) Under our reformulated GWA model, there are three categories: associated SNPs, non-associated SNPs that emit spurious nonzero effect sizes, and non-associated SNPs with effect sizes of zero. We propose a multi-component framework (see also [18]), in which null SNPs can emit different levels of statistical signals based on (i) different degrees of connectedness (e.g., through linkage disequilibrium), or (ii) whether their regulated gene interacts with an enriched gene. While truly associated SNPs are still more likely to emit large effect sizes than SNPs in the other categories, null SNPs can have intermediate effect sizes. Here, our goal is to treat spurious SNPs with small-to-intermediate nonzero effects as being non-associated with the trait of interest.
Suppose that in truth each SNP in a GWA dataset instead belongs to one of three categories depending on the underlying distribution of their effects on the trait of interest: (i) associated SNPs; (ii) non-associated SNPs that emit spurious nonzero statistical signals; and (iii) non-associated SNPs with zero-effects (Fig 1B) [18]. Associated SNPs may lie in enriched genes that directly influence the trait of interest. The phenomenon of a non-associated SNP emitting nonzero statistical signal can occur for multiple reasons. For example, spurious nonzero SNP effects can be due to some varying degree of linkage disequilibrium (LD) with associated SNPs [19]; or alternatively, non-associated SNPs can have a trans-interaction effect with SNPs located within an enriched gene. In either setting, spurious SNPs can emit small-to-intermediate statistical noise (in some cases, even appearing indistinguishable from truly associated SNPs), thereby confounding traditional GWA tests (Fig 1B). Hereafter, we refer to this noise as “epsilon-genic effects” (denoted in shorthand as “ε-genic effects”). There is a need for a computational framework that has the ability to identify mutations associated with a wide range of traits, regardless of whether narrow-sense heritability is sparsely or uniformly distributed across the genome. Here, we develop a new and scalable quantitative approach for testing aggregated sets of SNP-level GWA summary statistics for enrichment of associated mutations in a given quantitative trait. In practice, our approach can be applied to any user-specified set of genomic regions, such as regulatory elements, intergenic regions, or gene sets. In this study, for simplicity, we refer to our method as a gene-level test (i.e., an annotated collection of SNPs within the boundary of a gene). The key contribution of our approach is that gene-level association tests should treat spurious SNPs with ε-genic effects as non-associated variants.
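The LD mechanism for spurious signals is easy to demonstrate. The toy simulation below (my own illustration; the effect size, sample size, and LD level r = 0.8 are arbitrary choices, not from the paper) shows a SNP with no direct effect acquiring a marginal OLS effect of roughly r times the causal effect, purely through correlation with a causal SNP:

```python
import numpy as np

# A SNP with no direct effect can still show a nonzero marginal (OLS)
# effect purely through LD with a causal SNP.
rng = np.random.default_rng(0)
n, r, beta_causal = 50_000, 0.8, 0.5

g1 = rng.standard_normal(n)                               # causal SNP (standardized)
g2 = r * g1 + np.sqrt(1 - r**2) * rng.standard_normal(n)  # null SNP in LD with g1
y = beta_causal * g1 + rng.standard_normal(n)             # trait: only g1 is causal

def marginal(g):
    # single-SNP OLS slope, as in the traditional GWA framework
    return np.dot(g, y) / np.dot(g, g)

b1, b2 = marginal(g1), marginal(g2)  # b1 near 0.5; b2 near r * 0.5 = 0.4
```

The null SNP's marginal estimate is far from zero even though it has no direct effect, which is exactly the "ε-genic" behavior the text describes.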
Conceptually, this requires assessing whether SNPs explain more than some “epsilon” proportion of the phenotypic variance. In this generalized model, we reformulate the GWA null hypothesis to assume approximately no association for spurious non-associated SNPs, H0: β_j² ≤ ε. Here, ε denotes a “SNP-level null threshold” and represents the maximum proportion of phenotypic variance explained (PVE) that is contributed by spurious non-associated SNPs. This null hypothesis can be equivalently restated as H0: |β_j| ≤ √ε (Fig 1B). Non-enriched genes are then defined as genes that only contain SNPs with ε-genic effects (i.e., β_j² ≤ ε for every j-th SNP within that region). Enriched genes, on the other hand, are genes that contain at least one associated SNP (i.e., β_j² > ε for at least one SNP j within that region). By accounting for the presence of spurious ε-genic effects (i.e., through different values of ε, which the user can subjectively control), our approach flexibly constructs an appropriate GWA SNP-level null hypothesis for a wide range of traits with genetic architectures that land anywhere on the polygenic spectrum (see Materials and methods). We refer to our gene-level association framework as “gene-ε” (pronounced “genie”). gene-ε leverages our modified SNP-level null hypothesis to lower false positive rates and increase power for identifying gene-level enrichment within GWA studies. This happens via two key conceptual insights. First, gene-ε regularizes observed (and inflated) GWA summary statistics so that SNP-level effect size estimates are positively correlated with the assumed generative model of complex traits. Second, it examines the distribution of regularized effect sizes to offer the user choices for an appropriate SNP-level null threshold to distinguish associated SNPs from spurious non-associated SNPs. This makes for an improved and refined hypothesis testing strategy for identifying enriched genes underlying complex traits.
With detailed simulations, we assess the power of gene-ε to identify significant genes under a variety of genetic architectures, and compare its performance against multiple competing approaches [7, 10, 12, 14, 20]. We also apply gene-ε to the SNP-level summary statistics of six quantitative traits assayed in individuals of European ancestry from the UK Biobank.

Overview of gene-ε

The gene-ε framework requires two inputs: GWA SNP-level effect size estimates, and an empirical linkage disequilibrium (LD, or variance-covariance) matrix. The LD matrix can be estimated directly from genotype data, or from an ancestry-matched set of samples if genotype data are not available to the user. We use these inputs to both estimate gene-level contributions to narrow-sense heritability h², and perform gene-level enrichment tests. After preparing the input data, there are three steps implemented in gene-ε, which are detailed below (Fig 2). (A) gene-ε takes SNP-level GWA marginal effect sizes (OLS estimates β̂) and a linkage disequilibrium (LD) matrix (Σ) as input. It is well-known that OLS effect size estimates are inflated due to LD (i.e., correlation structures) among genome-wide genotypes. (B) gene-ε first uses its inputs to derive regularized effect size estimates through shrinkage methods (LASSO, Elastic Net and Ridge Regression; we explore the performance of each solution under a variety of simulated trait architectures in Supporting Information). (C) A unique feature of gene-ε is that it treats SNPs with spurious nonzero effects as non-associated. gene-ε assumes a reformulated null distribution of SNP-level effects, where ε is the SNP-level null threshold and represents the maximum proportion of phenotypic variance explained (PVE) by a spurious or non-associated SNP. This leads to the reformulated SNP-level null hypothesis H0: β_j² ≤ ε. To infer an appropriate ε, gene-ε fits a K-mixture of normal distributions over the regularized effect sizes with successively smaller variances (σ_1² > σ_2² > … > σ_K²; with σ_K² = 0).
In this study (without loss of generality), we assume that associated SNPs will appear in the first set, while spurious and non-associated SNPs appear in the latter sets. By definition, the SNP-level null threshold is then ε = σ_2². (D) Lastly, gene-ε computes gene-level association test statistics using quadratic forms, and corresponding P-values using Imhof’s method. This assumes the common gene-level null H0: Q_g = 0, where the null distribution of Q_g is dependent upon the SNP-level null threshold ε. For more details, see Materials and methods. First, we shrink the observed GWA effect size estimates via regularized regression (Fig 2A and 2B; Eq (4) in Materials and methods). This shrinkage step reduces the inflation of OLS effect sizes for spurious SNPs [22], and increases their correlation with the assumed generative model for the trait of interest (particularly for traits with high heritability; S1 Fig). When assessing the performance of gene-ε in simulations, we considered different types of regularization for the effect size estimates: the Least Absolute Shrinkage and Selection Operator (gene-ε-LASSO) [23], the Elastic Net solution (gene-ε-EN) [24], and Ridge Regression (gene-ε-RR) [25]. We also assessed our framework using the observed ordinary least squares (OLS) estimates without any shrinkage (gene-ε-OLS) to serve as motivation for having regularization as a step in the framework. Second, we fit a K-mixture Gaussian model to all regularized effect sizes genome-wide with the goal of classifying SNPs as associated, non-associated with spurious statistical signal, or non-associated with zero-effects (Figs 1B and 2C; see also [18]). Each successive Gaussian mixture component has distinctly smaller variances (σ_1² > σ_2² > … > σ_K²), with the K-th component fixed at σ_K² = 0. Estimating these variance components helps determine an appropriate k-th category to serve as the cutoff for SNPs with null effects (i.e., choosing some variance component σ_k² to be the null threshold ε).
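The mixture-fitting step can be sketched in a few lines. The following is a minimal EM for a K-mixture of zero-mean normals, not gene-ε's actual implementation: the smallest variance is pinned to a tiny constant as a numerical stand-in for the zero-effect component, and K, the variance floor, and the initialization are all illustrative choices of mine:

```python
import numpy as np

def em_zero_mean_mixture(b, k=3, iters=200, floor=1e-6):
    """Fit a k-component mixture of zero-mean normals to effect sizes b,
    keeping the last (smallest) variance pinned at `floor`."""
    b = np.asarray(b, dtype=float)
    var = np.geomspace(np.var(b), floor, k)   # decreasing initial variances
    w = np.full(k, 1.0 / k)                   # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each effect size
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * b[:, None] ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights; update all variances except the pinned one
        w = resp.mean(axis=0)
        var[:-1] = (resp[:, :-1] * b[:, None] ** 2).sum(axis=0) / resp[:, :-1].sum(axis=0)
    return w, var
```

In the spirit of the text, the variance of the second fitted component, var[1], would then play the role of the null threshold ε separating associated SNPs from the ε-genic background.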
The gene-ε software allows users to determine this cutoff subjectively. Intuitively, enriched genes are likely to contain important variants with relatively larger effects that are categorized in the early-to-middle mixture components. Since the biological interpretation of the middle components may not be consistent across trait architectures, we take a conservative approach in our selection of a cutoff when determining associated SNPs. Without loss of generality, we assume non-null SNPs appear in the first mixture component with the largest variance, while null SNPs appear in the latter components. By this definition, non-associated SNPs with spurious ε-genic or zero-effects then have PVEs that fall at or below the variance of the second component (i.e., β_j² ≤ σ_2² = ε for the j-th SNP). gene-ε allows for flexibility in the number of Gaussians that specify the range of null and non-null SNP effects. To achieve genome-wide scalability, we estimate parameters of the K-mixture model using an expectation-maximization (EM) algorithm. Third, we group the regularized GWA summary statistics according to gene boundaries (or user-specified SNP-sets) and compute a gene-level enrichment statistic based on a commonly used quadratic form (Fig 2D) [7, 12, 20]. In expectation, these test statistics can be naturally interpreted as the contribution of each gene to the narrow-sense heritability. We use Imhof’s method [26] to derive a P-value for assessing evidence in support of an association between a given gene and the trait of interest. Details for each of these steps can be found in Materials and methods, as well as in Supporting Information.

Performance comparisons in simulation studies

To assess the performance of gene-ε, we simulated complex traits under multiple genetic architectures using real genotype data on chromosome 1 from individuals of European ancestry in the UK Biobank (Materials and methods).
Following quality control procedures, our simulations included 36,518 SNPs (Supporting Information). Next, we used the NCBI’s Reference Sequence (RefSeq) database in the UCSC Genome Browser [27] to annotate SNPs with the appropriate genes. Simulations were conducted using two different SNP-to-gene assignments. In the first, we directly used the UCSC annotations which resulted in 1,408 genes to be used in the simulation study. In the second, we augmented the UCSC gene boundaries to include SNPs within ±50kb, which resulted in 1,916 genes in the simulation study. For both cases, we assumed a linear additive model for quantitative traits, while varying the following parameters: sample size (N = 5,000 or 10,000); narrow-sense heritability (h^2 = 0.2 or 0.6); and the percentage of enriched genes (set to 1% or 10%). In each scenario, we considered traits being generated with and without additional population structure. In the latter setting, traits are simulated while also using the top ten principal components of the genotype matrix as covariates to create stratification. Regardless of the setting, GWA summary statistics were computed by fitting a single-SNP univariate linear model (via OLS) without any control for population structure. Comparisons were based on 100 different simulated runs for each parameter combination. We compared the performance of gene-ε against that of five competing gene-level association or enrichment methods: SKAT [20], VEGAS [7], MAGMA [10], PEGASUS [12], and RSS [14] (Supporting Information). As previously noted, we also explored the performance of gene-ε while using various degrees of regularization on effect size estimates, with gene-ε-OLS being treated as a baseline. SKAT, VEGAS, and PEGASUS are frequentist approaches, in which SNP-level GWA P-values are drawn from a correlated chi-squared distribution with covariance estimated using an empirical LD matrix [28]. 
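The correlated chi-squared null shared by these approaches (and by gene-ε's quadratic form) is a weighted sum of chi-squares, with weights given by the eigenvalues of the gene's LD block. SciPy has no Imhof implementation, so the sketch below substitutes a two-moment (Satterthwaite-style) approximation for the exact characteristic-function inversion; the function names are illustrative and not taken from any of the cited packages.

```python
import numpy as np
from scipy import stats

def weighted_chi2_pvalue(q, weights):
    """Approximate P(sum_i w_i * chi2_1 > q) by matching the first two
    moments of the mixture to a scaled chi-square distribution.

    A stand-in for Imhof's exact method, which numerically inverts the
    characteristic function of the weighted sum.
    """
    w = np.asarray(weights, dtype=float)
    w = w[w > 1e-12]
    scale = (w ** 2).sum() / w.sum()     # matches the variance
    df = w.sum() ** 2 / (w ** 2).sum()   # matches the mean
    return stats.chi2.sf(q / scale, df)

def gene_statistic(beta_tilde, ld_block, null_var):
    """Quadratic-form gene-level statistic with its null eigenvalue weights."""
    q = float(beta_tilde @ beta_tilde)
    weights = null_var * np.linalg.eigvalsh(ld_block)
    return q, weighted_chi2_pvalue(q, weights)
```

When the LD block is the identity and the null variance is one, the mixture collapses to an ordinary chi-square with one degree of freedom per SNP, which is a convenient sanity check for the approximation.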
MAGMA is also a frequentist approach, in which gene-level P-values are derived from distributions of SNP-level effect sizes using an F-test [10]. RSS is a Bayesian model-based enrichment method which places a likelihood on the observed SNP-level GWA effect sizes (using their standard errors and LD estimates), and assumes a spike-and-slab shrinkage prior on the true SNP effects [29]. Conceptually, SKAT, MAGMA, VEGAS, and PEGASUS assume null models under the traditional GWA framework, while RSS and gene-ε allow for traits to have architectures with more complex SNP effect size distributions. For all methods, we assess the power and false discovery rates (FDR) for identifying correct genes at a Bonferroni-corrected threshold (P = 0.05/1408 genes = 3.55×10^−5 and P = 0.05/1916 genes = 2.61×10^−5, depending on whether the ±50kb buffer was used) or via the median probability model (posterior enrichment probability > 0.5; see [30]) (S1–S16 Tables). We also compare their ability to rank true positives over false positives via receiver operating characteristic (ROC) and precision-recall curves (Fig 3 and S2–S16 Figs). While we find that gene-ε and RSS have the best tradeoff between true and false positive rates, RSS does not scale well for genome-wide analyses (Table 1). In many settings, gene-ε has similar power to RSS (while maintaining a considerably lower FDR), and generally outperforms RSS in precision-versus-recall. gene-ε also stands out as the best approach in scenarios where the observed OLS summary statistics were produced without first controlling for confounding stratification effects in more heritable traits (i.e., h^2 = 0.6). Computationally, gene-ε gains speed by directly assessing evidence for rejecting the gene-level null hypothesis, whereas RSS must compute the posterior probability of being an enriched gene (which can suffer from convergence issues; Supporting Information).
For context, an analysis of just 1,000 genes takes gene-ε an average of 140 seconds to run on a personal laptop, while RSS takes around 9,400 seconds to complete.

Fig 3 caption: We simulate complex traits under different genetic architectures and GWA study scenarios, varying the following parameters: narrow-sense heritability, proportion of associated genes, and sample size (Supporting Information). Here, the sample size N = 10,000 and the narrow-sense heritability h^2 = 0.6. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of this step (labeled OLS; orange). We further compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% associated genes) and polygenic (10% associated genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-recall curves for each method applied to the simulations. Note that, in the sparse case (1% associated genes), the top-ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates.

Table 1 caption: Methods compared include gene-ε, PEGASUS [12], VEGAS [7], RSS [14], MAGMA [10], and SKAT [20]. Here, we simulated 10 datasets for each pair of parameter values (number of genes analyzed, and number of SNPs within each gene). Each table entry represents the average computation time (in seconds) it takes each approach to analyze a dataset of the size indicated. Run times were measured on a MacBook Pro (processor: 3.1 GHz Intel Core i5; memory: 8 GB 2133 MHz LPDDR3). Only a single core on the machine was used. PEGASUS, SKAT, and MAGMA are score-based methods and, thus, are expected to take the least amount of time to run. Both gene-ε and RSS are regression-based methods, but gene-ε is scalable in both the number of genes and the number of SNPs per gene. The increased computational burden of RSS results from its need to perform Bayesian posterior inference; gene-ε is able to scale because it leverages regularization and point estimation for hypothesis testing.

When using GWA summary statistics to identify genotype-phenotype associations, modeling the appropriate trait architecture is crucial. As expected, all methods we compared in this study have relatively more power for traits with high h^2. However, our simulation studies confirm the expectation that the maximum utility of methods assuming the traditional GWA framework (i.e., SKAT, MAGMA, VEGAS, and PEGASUS) is limited to scenarios where heritability is low, phenotypic variance is dominated by just a few enriched genes with large effects, and summary statistics are not confounded by population structure (S2, S3, S9, and S10 Figs). RSS, gene-ε-EN, and gene-ε-LASSO robustly outperform these methods for the other trait architectures (Fig 3, S4–S8 and S11–S16 Figs). One major reason for this result is that shrinkage and penalized regression methods appropriately correct for inflation in GWA summary statistics (S1 Fig). For example, we find that the regularization used by gene-ε-EN and gene-ε-LASSO is able to recover effect size estimates that are almost perfectly correlated (r^2 > 0.9) with the true effect sizes used to simulate sparse architectures (e.g., simulations with 1% enriched genes). In S17–S24 Figs, we show a direct comparison between gene-ε with and without regularization to show how inflated SNP-level summary statistics directly affect the ability to identify enriched genes across different trait architectures.
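The simulation design described above can be sketched at toy scale. This hedged example uses synthetic genotypes and small dimensions rather than the UK Biobank chromosome 1 data and sample sizes used in the study; the variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, h2 = 1000, 500, 0.6          # toy sizes; the study used N = 5,000/10,000

# Synthetic standardized genotypes (the study used real chromosome 1 data).
maf = rng.uniform(0.05, 0.5, J)
X = rng.binomial(2, maf, (N, J)).astype(float)
X = (X - X.mean(0)) / X.std(0)

# 1% of SNPs are causal under a linear additive model.
beta = np.zeros(J)
causal = rng.choice(J, J // 100, replace=False)
beta[causal] = rng.normal(0.0, 1.0, causal.size)

# Scale genetic and noise components so narrow-sense heritability equals h2.
g = X @ beta
g *= np.sqrt(h2 / g.var())
e = rng.normal(0.0, 1.0, N)
e *= np.sqrt((1 - h2) / e.var())
y = g + e
```

GWA summary statistics for such a trait are then obtained by regressing `y` on each column of `X` separately (OLS), deliberately without any control for population structure, matching the study's stress test of downstream methods.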
Regularization also allows gene-ε to preserve type 1 error when traits are generated under the null hypothesis of no gene enrichment. Importantly, our method is relatively conservative when GWA summary statistics are less precise and derived from studies with smaller sample sizes (e.g., N = 5,000; S17 Table).

Characterizing genetic architecture of quantitative traits in the UK Biobank

We applied gene-ε to 1,070,306 genome-wide SNPs and six quantitative traits—height, body mass index (BMI), mean red blood cell volume (MCV), mean platelet volume (MPV), platelet count (PLC), and waist-hip ratio (WHR)—assayed in 349,414 European-ancestry individuals in the UK Biobank (Supporting Information) [21]. After quality control, we regressed the top ten principal components of the genotype data onto each trait to control for population structure, and then we derived OLS SNP-level effect sizes using the traditional GWA framework. For completeness, we then analyzed these GWA effect size estimates with the four different implementations of gene-ε. In the main text, we highlight results under the Elastic Net solution; detailed findings with the other gene-ε approaches can be found in Supporting Information. While estimating ε-genic effects, gene-ε provides insight into the genetic architecture of a trait (S18 Table). For example, past studies have shown human height to have a high narrow-sense heritability (estimates ranging from 45-80%; [6, 31–39]). Using Elastic Net regularized effect sizes, gene-ε estimated approximately 11% of SNPs in the UK Biobank to be statistically associated with height. This meant approximately 110,000 SNPs had nonzero marginal PVEs (Materials and methods). This number is similar to the 93,000 and 100,000 height-associated variants previously estimated by Goldstein [40] and Boyle et al. [4], respectively.
Additionally, gene-ε identified approximately 2% of SNPs to be "causal" (meaning they had PVEs greater than the SNP-level null threshold σ̃^2); again similar to the Boyle et al. [4] estimate of 3.8% causal SNPs for height using data from the GIANT Consortium [32], and the Lello et al. [41] estimate of 3.1% causal SNPs for height using European-ancestry individuals in the UK Biobank. Compared to body height, narrow-sense heritability estimates for BMI have been considered both high and low (estimates ranging from 25-60%; [31, 33, 34, 36, 37, 39, 42–45]). Such inconsistency is likely due to differences in study design (e.g., twin, family, or population-based studies), many of which have been known to produce different levels of bias [44]. Here, our results suggest BMI to have a lower narrow-sense heritability than height, with a slightly different distribution of null and non-null SNP effects. Specifically, we found BMI to have 13% associated SNPs and 6% causal SNPs. In general, we found our genetic architecture characterizations in the UK Biobank to reflect the same general themes we saw in the simulation study. Less aggressive shrinkage approaches (e.g., OLS and Ridge) are subject to misclassification of associated, spurious, and non-associated SNPs. As a result, these methods struggle to reproduce well-known narrow-sense heritability estimates from the literature across all six traits. This once again highlights the need for computational frameworks that are able to appropriately correct for inflation in summary statistics.

gene-ε identifies a refined list of genetic enrichments

Next, we applied gene-ε to the summary statistics from the UK Biobank and generated genome-wide gene-level association P-values (Fig 4A and 4B, S25A–S29A and S25B–S29B Figs).
As in the simulation study, we conducted two separate analyses using two different SNP-to-gene annotations: (i) we used the RefSeq database gene boundary definitions directly, or (ii) we augmented the gene boundaries by adding SNPs within a ±50 kilobase (kb) buffer to account for possible regulatory elements. A total of 14,322 genes were analyzed when using the UCSC boundaries as defined, and a total of 17,680 genes were analyzed when including the 50kb buffer. The ultimate objective of gene-ε is to identify enriched genes, which we define as containing at least one associated SNP and achieving a gene-level association P-value below a Bonferroni-corrected significance threshold (in our two analyses, P = 0.05/14322 genes = 3.49×10^−6 and P = 0.05/17680 genes = 2.83×10^−6, respectively; S19–S24 Tables). As a validation step, we compared gene-ε P-values to RSS posterior enrichment probabilities for each gene. We also used the gene set enrichment analysis tool Enrichr [46] to identify dbGaP categories with an overrepresentation of significant genes reported by gene-ε (Fig 4C and 4D, S25C–S29C and S25D–S29D Figs). A comparison of gene-level associations and gene set enrichments between the different gene-ε approaches is also listed (S25–S27 Tables).

Fig 4 caption: Body height has been estimated to have a narrow-sense heritability h^2 in the range of 0.45 to 0.80 [6, 31–39], while MPV has been estimated to have h^2 between 0.50 and 0.70 [33, 34, 58]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes for (A) body height and (B) MPV. The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 3.49×10^−6, correcting for the 14,322 autosomal genes analyzed). We color code all significant genes identified by gene-ε in orange, and annotate genes overlapping with the database of Genotypes and Phenotypes (dbGaP). In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε. We highlight categories with Q-values (i.e., false discovery rates) less than 0.05 and annotate corresponding genes in the Manhattan plots in (A) and (B), respectively. For height, the only significant dbGaP category is "Body Height", with nine of the genes identified by gene-ε appearing in this category. For MPV, the two significant dbGaP categories are "Platelet Count" and "Face"—the first of which is directly connected to the trait [57, 60, 61].

Many of the candidate enriched genes we identified by applying gene-ε were not previously annotated as having trait-specific associations in either dbGaP or the GWAS catalog (Fig 4); however, many of these same candidate genes have been identified by past publications as related to the phenotype of interest (Table 2). It is worth noting that multiple genes would not have been identified by standard GWA approaches, since the top SNP in the annotated region had a marginal association below a genome-wide threshold (see Table 2 and highlighted rows in S19–S24 Tables). Additionally, 45% of the genes selected by gene-ε were also selected by RSS. For example, gene-ε reports C1orf150 as having a significant gene-level association with MPV (P = 1 × 10^−20 and RSS posterior enrichment probability of 1); this gene is known to be associated with germinal center signaling and the differentiation of mature B cells that mutually activate platelets [47–49]. Importantly, nearly all of the genes reported by gene-ε had evidence of overrepresentation in gene set categories that were at least related to the trait of interest. As expected, the top categories with Enrichr Q-values smaller than 0.05 for height and MPV were "Body Height" and "Platelet Count", respectively.
Even for the less heritable MCV, the top significant gene sets included hematological categories such as "Transferrin", "Erythrocyte Indices", "Hematocrit", "Narcolepsy", and "Iron"—all of which have verified and clinically relevant connections to the trait [50–57].

Table 2 caption: We call these novel candidate genes because they are not listed as being associated with the trait of interest in either the GWAS catalog or dbGaP, and they have top posterior enrichment probabilities with the trait in the RSS analysis. Each gene is annotated with past functional studies that link it to the trait of interest. We also report each gene's overall trait-specific significance rank (out of 14,322 autosomal genes analyzed for each trait), as well as its heritability estimate from gene-ε using the Elastic Net to regularize GWA SNP-level effect size estimates. The traits are: height; body mass index (BMI); mean corpuscular volume (MCV); mean platelet volume (MPV); platelet count (PLC); and waist-hip ratio (WHR). ♣: Enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67 × 10^−8, correcting for the 1,070,306 SNPs analyzed; see highlighted rows in S19–S24 Tables for the complete list). *: Multiple genes were tied for this ranking.

Lastly, gene-ε also identified genes with rare causal variants. For example, ZNF628 (which is not mapped to height in the GWAS catalog) was detected by gene-ε with a significant P-value of 1 × 10^−20 (and P = 4.58 × 10^−8 when the gene annotation included a 50kb buffer). Previous studies have shown a rare variant rs147110934 within this gene to significantly affect adult height [38]. Rare and low-frequency variants are generally harder to detect under the traditional GWA framework. However, rare variants have been shown to be important for explaining the variation of complex traits [28, 39, 80–83].
With regularization and testing for spurious ε-genic effects, gene-ε is able to distinguish between rare variants that are causal and SNPs with larger effect sizes due to various types of correlations. This only enhances the power of gene-ε to identify potential novel enriched genes.

During the past decade, it has been repeatedly observed that the traditional GWA framework can struggle to accurately differentiate between associated and spurious SNPs (which we define as SNPs that covary with associated SNPs but do not directly influence the trait of interest). As a result, the traditional GWA approach is prone to generating false positives, and detects variant-level associations spread widely across the genome rather than aggregated sets in disease-relevant pathways [4]. While this observation has spurred many interesting lines of inquiry—such as investigating the role of rare variants in generating complex traits [9, 28, 80, 81], comparing the efficacy of tagging causal variants in different ancestries [84, 85], and integrating GWA data with functional -omics data [86–88]—the focus of GWA studies, and of studies integrating GWA data with other -omics data, is still largely based on the role of individual variants acting independently. Here, our objective is to identify biologically significant underpinnings of the genetic architecture of complex traits by modifying the traditional GWA null hypothesis from H[0]: β[j] = 0 (i.e., the j-th SNP has zero statistical association with the trait of interest) to H[0]: β[j] ≈ 0. We accomplish this by testing for ε-genic effects: spurious small-to-intermediate effect sizes emitted by truly non-associated SNPs. We use an empirical Bayesian approach to learn the effect size distributions of null and non-null SNP effects, and then we aggregate (regularized) SNP-level association signals into a gene-level test statistic that represents the gene's contribution to the narrow-sense heritability of the trait of interest.
Together, these two steps reduce false positives and increase power to identify the mutations, genes, and pathways that directly influence a trait's genetic architecture. By considering different thresholds for what constitutes a null SNP effect (i.e., different values of σ̃^2 for spurious non-associated SNPs; Figs 1 and 2), gene-ε offers the flexibility to construct an appropriate null hypothesis for a wide range of traits with genetic architectures that land anywhere on the polygenic spectrum. It is important to stress that while we repeatedly point to our improved ability to distinguish "causal" variants in enriched genes, gene-ε is by no means a causal inference procedure. Instead, it is an association test which highlights genes in enriched pathways that are most likely to be associated with the trait of interest. Through simulations, we showed that the gene-ε framework outperforms other widely used gene-level association methods (particularly for highly heritable traits), while also maintaining scalability for genome-wide analyses (Fig 3, S2–S24 Figs, Table 1, and S1–S17 Tables). Indeed, all the approaches we compared in this study showed improved performance when they used summary statistics derived from studies with larger sample sizes (i.e., simulations with N = 10,000). This is because the quality of summary statistics also improves in these settings (via the asymptotic properties of OLS estimates). Nonetheless, our results suggest that applying gene-ε to summary statistics from previously published studies will increase the return made on investments in GWA studies over the last decade.

Like any aggregated SNP-set association method, gene-ε has its limitations. Perhaps the most obvious limitation is that annotations can bias the interpretation of results and lead to erroneous scientific conclusions (i.e., they might cause us to highlight the "wrong" gene [14, 89, 90]). We observed some instances of this during the UK Biobank analyses.
For example, when studying MPV, CAPN10 only appeared to be a significant gene after its UCSC annotated boundary was augmented by a ±50kb buffer window (P = 1.85 × 10^−1 and P = 1.17 × 10^−7 before and after the buffer was added, respectively; see S22 Table). After further investigation, this result occurred because the augmented definition of CAPN10 included nearly all causal SNPs from the significant neighboring gene RNPEPL1 (P = 1 × 10^−20 and P = 2.07 × 10^−9 before and after the buffer window was added, respectively). While this shows the need for careful biological interpretation of the results, it also highlights the power of gene-ε to effectively prioritize true genetic signal. Another limitation of gene-ε is that it relies on the user to determine an appropriate SNP-level null threshold σ̃^2 to serve as a cutoff between null and non-null SNP effects. In the current study, we use a K-mixture Gaussian model to classify SNPs into different categories and then (without loss of generality) we subjectively assume that associated SNPs only appear in the component with the largest variance (i.e., we choose σ̃^2 = σ[2]^2). Indeed, there can be many scenarios where this particular threshold choice is not optimal. For example, if there is one very strongly associated locus, the current implementation of the algorithm will assign it to its own mixture component, and all other SNPs will be assumed to be not associated with the trait, regardless of the size of their corresponding variances. As previously mentioned, one practical guideline would be to select σ̃^2 based on some a priori knowledge about a trait's architecture. However, a more robust approach would be to select the SNP-level null hypothesis threshold based on the data at hand. One way to do this would be to take a fully Bayesian approach and allow posterior inference on σ̃^2 to be dependent upon how much heritability is explained by SNPs placed in the top few largest components of the normal mixture.
Recently, sparse Bayesian parametric [91] and nonparametric [92] Gaussian mixture models have been proposed for improved polygenic prediction with summary statistics. Combining these modeling strategies with our modified SNP-level null hypothesis could make for a more unified and data-driven implementation of the gene-ε framework. There are several other potential extensions of the gene-ε framework. First, in the current study, we only focused on applying gene-ε to quantitative traits (Fig 4, S25–S29 Figs, Table 2, and S18–S27 Tables). Future studies extending this approach to binary traits (e.g., case-control studies) should explore controlling for additional confounders that can occur within these phenotypes, such as ascertainment [93–95]. Second, we only focus on data consisting of common variants; however, it would be interesting to extend gene-ε to (i) rare variant association testing and (ii) studies that consider the combined effect of rare and common variants. A significant challenge, in either case, would be to adaptively adjust the strength of the regularization penalty on the observed OLS summary statistics for causal rare variants, so as to not misclassify them as spurious non-associated SNPs. Previous approaches with specific re-weighting functions for rare variants may help here [9, 28, 80] (Materials and methods). A final related extension of gene-ε is to include information about standard errors when estimating ε-genic effects. In our analyses using the UK Biobank, some of the newly identified candidate genes contained SNPs that had large effect sizes but insignificant P-values in the original GWA analysis (after Bonferroni correction; Table 2 and S19–S24 Tables). While this could be attributed to the modified SNP-level null distribution assumed by gene-ε, it also motivates a regularization model that accounts for the standard error of effect size estimates from GWA studies [14, 22, 29].
Materials and methods

Traditional association tests using summary statistics

gene-ε requires two inputs: genome-wide association (GWA) marginal effect size estimates β̂, and an empirical linkage disequilibrium (LD) matrix Σ. We assumed the following generative linear model for complex traits:

y = Xβ + e,  e ∼ N(0, τ^2 I),   (1)

where y denotes an N-dimensional vector of phenotypic states for a quantitative trait of interest measured in N individuals; X is an N × J matrix of genotypes, with J denoting the number of single nucleotide polymorphisms (SNPs) encoded as {0, 1, 2} copies of a reference allele at each locus; β is a J-dimensional vector containing the additive effect sizes of an additional copy of the reference allele at each locus on y; e is a normally distributed error term with mean zero and scaled variance τ^2; and I is an N × N identity matrix. For convenience, we assumed that the genotype matrix (column-wise) and the trait of interest have been mean-centered and standardized. We also treat β as a fixed effect. A central step in GWA studies is to infer β for each SNP, given both genotypic and phenotypic measurements for each individual sample. For every SNP j, gene-ε takes in the ordinary least squares (OLS) estimates based on Eq (1),

β̂[j] = (x[j]^T x[j])^−1 x[j]^T y,   (2)

where x[j] is the j-th column of the genotype matrix X, and β̂[j] is the j-th entry of the vector β̂. In traditional GWA studies, the null hypothesis for statistical association tests assumes H[0]: β[j] = 0 for all j = 1, …, J SNPs. It can be shown that two genotypic variants x[j] and x[j′] in linkage disequilibrium (LD) will produce effect size estimates β̂[j] and β̂[j′] (j ≠ j′) that are correlated [29]. This can lead to confounded statistical tests. For the applications considered here, the LD matrix is empirically estimated from external data (e.g., directly from GWA study data, or using an LD map from a population with similar genomic ancestry to that of the samples analyzed in the GWA study).
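Under the standardization assumptions of Eq (1), both required inputs can be computed directly from individual-level data. The sketch below uses a hypothetical helper name (`gwa_summary_stats` is not part of the gene-ε software) and exploits the fact that x[j]^T x[j] = N for standardized genotypes.

```python
import numpy as np

def gwa_summary_stats(X, y):
    """OLS marginal effect sizes (Eq (2)) and the empirical LD matrix Sigma.

    Assumes the columns of X and the trait y are mean-centered and
    standardized, so that x_j' x_j = N for every SNP j.
    """
    N = X.shape[0]
    beta_hat = X.T @ y / N      # (x_j' x_j)^{-1} x_j' y for each SNP j
    ld = X.T @ X / N            # pairwise genotype correlations (LD)
    return beta_hat, ld
```

In practice, as noted above, Σ is often taken instead from an external reference panel with genomic ancestry matched to the GWA study samples.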
Regularized regression for GWA summary statistics

gene-ε uses regularization on the observed GWA summary statistics to reduce inflation of SNP-level effect size estimates and increase their correlation with the assumed generative model of complex traits. For large sample size N, note that the asymptotic relationship between the observed GWA effect size estimates β̂ and the true coefficient values β is [18, 96, 97]

β̂ ≈ Σβ,   (3)

where Σ[jj′] = ρ(x[j], x[j′]) denotes the correlation coefficient between SNPs x[j] and x[j′]. The above mirrors a high-dimensional regression model with the misestimated OLS summary statistics as the response variables and the LD matrix as the design matrix. Theoretically, the resulting output coefficients from this model are the desired true effect size estimates. Due to the multi-collinear structure of GWA data, we cannot reuse the ordinary least squares solution reliably [98]. Thus, we derive the general regularization

β̃ = argmin[β] ‖β̂ − Σβ‖[2]^2 subject to (1 − α)‖β‖[1] + α‖β‖[2]^2 ≤ t,   (4)

where, in addition to previous notation, β̃ denotes the regularized solution of the observed GWA effect sizes β̂; ‖β‖[1] and ‖β‖[2]^2 denote the L[1] and L[2] penalties, respectively. The free regularization parameter t is chosen based on a grid [log t[min], log t[max]] with 100 sequential steps of size 0.01. Here, t[max] is the minimum value such that all summary statistics are shrunk to zero. We then select the t that results in a model with an R^2 within one standard error of the best fitted model. In other words, we choose the t that (i) results in a sparser solution than the best fitted model, but (ii) cannot be distinguished from the best fitted model in terms of overall variance explained. The term α in Eq (4) distinguishes the type of regularization used, and can be chosen to induce various degrees of shrinkage on the effect size estimates.
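This regularized regression of β̂ on Σ can be approximated with an off-the-shelf solver. The sketch below is a simplified stand-in: scikit-learn's `l1_ratio` runs opposite to the paper's α convention (α = 0 is the LASSO here, so `l1_ratio = 1 − α`), its `alpha` argument is a penalty strength rather than the mixing term, and the paper's grid search with a one-standard-error rule is replaced by a single fixed penalty. The helper name is hypothetical.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def regularize_summary_stats(beta_hat, ld, alpha=0.5, penalty=0.01):
    """Sketch of Eq (4): regress OLS effect sizes on the LD matrix.

    `alpha` follows the paper's convention (0 = LASSO, 1 = Ridge),
    so it maps to sklearn's l1_ratio as 1 - alpha. `penalty` stands in
    for the grid-searched regularization parameter.
    """
    model = ElasticNet(alpha=penalty, l1_ratio=1.0 - alpha,
                       fit_intercept=False, max_iter=10000)
    model.fit(ld, beta_hat)
    return model.coef_
```

With an identity LD matrix this reduces to coordinate-wise soft-thresholding plus shrinkage, which is a convenient way to see that small spurious effects are zeroed out while large effects are only mildly shrunk.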
Specifically, α = 0 corresponds to the "Least Absolute Shrinkage and Selection Operator" or LASSO solution [23], α = 1 equates to Ridge Regression [25], while 0 < α < 1 results in the Elastic Net [24]. The LASSO solution forces some inflated coefficients to be exactly zero, while the Ridge shrinks the magnitudes of all coefficients but does not set any of them to be exactly zero. Intuitively, the LASSO will create a regularized set of effect sizes in which associated SNPs have larger effects, non-associated SNPs with spurious small-to-intermediate (or ε-genic) effects have smaller effects, and non-associated SNPs with zero-effects are set to zero. It has been suggested that the L[1]-penalty can suffer from a lack of stability [99]. Therefore, in the main text, we also highlighted gene-ε using the Elastic Net (with α = 0.5). The Elastic Net is a convex combination of the LASSO and Ridge penalties, but still produces distinguishable sets of associated, spurious, and non-associated SNPs. Note that for large GWA studies (e.g., the UK Biobank analysis in the main text), it can be impractical to construct a genome-wide LD matrix; therefore, we regularize OLS effect size estimates based on partitioned chromosome-specific LD matrices. Results comparing each of the gene-ε regularization implementations are given in the main text (Fig 3) and Supporting Information (S2–S24 Figs, S1–S18 and S25–S27 Tables). We will describe how we approximate the null distribution for these regularized GWA summary statistics over the next two sections.

Estimating the SNP-level null threshold

The main innovation of gene-ε is to treat spurious SNPs with ε-genic effects as non-associated. This leads to reformulating the GWA SNP-level null hypothesis to assume that non-associated SNPs can make small-to-intermediate contributions to the phenotypic variance. Formally, we write this as

H[0]: PVE[j] ≤ σ̃^2,   (5)

where σ̃^2 denotes the "SNP-level null threshold" and represents the maximum proportion of phenotypic variance explained (PVE) that is contributed by spurious SNPs.
Based on Eq (5), we equivalently say that a SNP is null whenever its effect size satisfies

β[j]^2 ≤ σ̃^2.   (6)

To estimate the threshold σ̃^2 for null SNP-level effects, we use an empirical Bayesian approach and fit a K-mixture of normal distributions over the (regularized) effect size estimates [18],

β̃[j] | z[j] = k ∼ N(0, σ[k]^2),   (7)

where z[j] ∈ {1, …, K} is a latent variable representing the categorical membership for the j-th SNP. When summing over all components, Eq (7) corresponds to the following marginal distribution

β̃[j] ∼ ∑[k] π[k] N(0, σ[k]^2),   (8)

where π[k] is a mixture weight representing the marginal (unconditional) probability that a randomly selected SNP belongs to the k-th component, with ∑[k] π[k] = 1. The above mixture allows for distinct clusters of nonzero effects through K different variance components (σ[k]^2, k = 1, …, K) [18]. Here, we consider sequential fractions (π[1], …, π[K]) of SNPs to correspond to distinctly smaller effects (σ[1]^2 > σ[2]^2 > ⋯ > σ[K]^2) [18]. The goal of the mixture model is to "bin" each of the (regularized) SNP-level effects and determine an appropriate category k to serve as the cutoff for SNPs with null effects (i.e., choosing the threshold σ̃^2 = σ[k]^2 for some k). Such a threshold can be chosen based on a priori knowledge about the phenotype of interest. It is intuitive to assume that enriched genes will contain non-null SNPs that classify within the early-to-middle mixture components; unfortunately, the biological interpretations of the middle components may not be consistent across trait architectures. Therefore, without loss of generality in this paper, we take a conservative approach in our definition of associated SNPs within enriched genes. Here, we subjectively set the SNP-level null threshold as σ̃^2 = σ[2]^2. Thus, non-null SNPs are assumed to appear in the largest fraction π[1] (i.e., the alternative, with variance σ[1]^2), while null SNPs belong to the latter groups (i.e., the null, with variances at or below σ̃^2). Given Eqs (7) and (8), we write the joint log-likelihood for all J SNPs as the following

ℓ(Θ) = ∑[j] log ∑[k] π[k] N(β̃[j]; 0, σ[k]^2),   (9)

where Θ = {π[1], …, π[K], σ[1]^2, …, σ[K]^2} is the complete set of parameters for the mixture model.
Since there is no closed-form solution for the maximum likelihood estimate (MLE), we use an expectation-maximization (EM) algorithm to estimate the parameters in Θ [100–102]. Derivation of the EM algorithm. To derive an EM solution, we use Eqs (7) and (8) to write the joint distribution of the J regularized SNP-level effect sizes and the J latent random variables z = (z[1], …, z[J]), conditioned on the mixture parameters Θ, (10) where is an indicator function and equates to one if z[j] = k and zero otherwise. Taking the log of this distribution yields the following (11) As opposed to Eq (9), the augmented log-likelihood in Eq (11) is a much simpler function to optimize. The formal steps of the EM algorithm are detailed below: 1. E-Step: Update the probability of fraction assignment. In the E-step of the EM algorithm, we estimate the probability that the j-th SNP belongs to one of the K fraction groups. To begin, we use Bayes’ theorem to find (12) Next, we take the expectation of the complete log-likelihood , with respect to the conditional distribution , under the current value of the mixture parameters . This yields (13) where is referred to as the “responsibility of the k-th mixture component”, and is given as (14) Intuitively, the EM algorithm uses the collection of these responsibility values to assign SNPs to one of the K fraction groups. This key step may be interpreted as determining the category of SNP effects (by identifying the k-th component with the largest responsibility for each j-th SNP). 2. M-Step: Update the component variances and mixture weights. In the M-step of the EM algorithm, we now fix the responsibility values and maximize the expectation in Eq (13), with respect to the parameters in . Namely, we compute the following closed-form solutions: (15) where is the sum of the membership weights for the k-th mixture component and represents the number of SNPs assigned to that component.
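The E- and M-steps above can be sketched as follows. This is a minimal zero-mean Gaussian mixture EM with a fixed K, written as a stand-in for the mclust machinery the software actually uses; the function name, initialization, and simulated effect sizes are all illustrative assumptions.

```python
import numpy as np

def em_zero_mean_mixture(b, K=2, iters=100):
    """EM for a K-component mixture of zero-mean normals over effect sizes b.

    Returns mixture weights pi and component variances sigma2; the fitted
    variances can then be used to choose a SNP-level null threshold."""
    b = np.asarray(b, dtype=float)
    pi = np.full(K, 1.0 / K)
    sigma2 = np.var(b) * np.geomspace(1.0, 1e-2, K)  # spread initial variances
    for _ in range(iters):
        # E-step: responsibility gamma[j, k] of component k for SNP j (Eq 14).
        dens = np.exp(-0.5 * b[:, None] ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        num = pi * dens
        gamma = num / num.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights and variances (Eq 15).
        Nk = gamma.sum(axis=0)
        sigma2 = (gamma * b[:, None] ** 2).sum(axis=0) / Nk
        pi = Nk / len(b)
    return pi, sigma2

rng = np.random.default_rng(1)
b = np.concatenate([rng.normal(0.0, 1.0, 100),    # non-null effects
                    rng.normal(0.0, 0.1, 900)])   # near-null (epsilon-genic) effects
pi_hat, sigma2_hat = em_zero_mean_mixture(b, K=2)
```

After fitting, the component with the largest variance corresponds to non-null SNPs, and the smaller variance components delimit the null groups.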
The estimates are used to set the SNP-level null threshold . The gene-ε software implements the above EM algorithm using the mclust package in R [103], which can fit a Gaussian mixture with up to K = 10 distinct components (see Software Details). The function compares the Bayesian Information Criterion (BIC) approximation to the Bayes factor for each possible K [104] and returns the result for the K value with the largest BIC. Results in the main text and Supporting Information are based on 100 iterations from 10 different parallel chains to ensure convergence. Note that since the EM updates do not involve any large LD matrices, the algorithm can be fit efficiently over all SNPs genome-wide. Regularized GWA summary statistics under the null hypothesis With an estimate of the SNP-level null threshold , we now describe the probabilistic distribution of the regularized GWA summary statistics under the null hypothesis. Without loss of generality, we demonstrate this property using the general regularization approach where we fix α ∈ [0, 1] and have the following (approximate) closed-form solution for the regularized effect size estimates [23–25] (16) with ϑ ≥ 0 being a penalization parameter that has a one-to-one correspondence with t in Eq (4). Here, H is commonly referred to as the “linear shrinkage estimator”, where D is a diagonal weight matrix with nonzero elements dictated by the type of regularization that is being used. For example, D = I while performing ridge regression [25], and while using ridge-based approximations for the elastic net and lasso solutions [23, 24]. From Eq (16), it is clear that may be interpreted as a marginal estimator of SNP-level effects after accounting for LD structure.
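A minimal numpy sketch of the linear shrinkage form in Eq (16) for the ridge case (D = I), under the assumption of standardized genotypes so that the summary-statistic shrinkage (Σ + ϑI)⁻¹β̂ coincides with the joint ridge fit; all variable names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, J = 1000, 20
X = rng.normal(size=(N, J))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized "genotypes"
beta_true = np.zeros(J)
beta_true[0] = 1.0
y = X @ beta_true + rng.normal(size=N)

Sigma = X.T @ X / N            # in-sample LD (correlation) matrix
beta_marg = X.T @ y / N        # marginal OLS summary statistics
theta = 0.1                    # penalization parameter (vartheta in Eq 16)
D = np.eye(J)                  # ridge weighting: D = I

# Linear shrinkage: regularized effects from summary statistics + LD alone.
beta_shrunk = np.linalg.solve(Sigma + theta * D, beta_marg)

# Algebraically equivalent joint ridge fit from individual-level data.
beta_ridge = np.linalg.solve(X.T @ X + N * theta * np.eye(J), X.T @ y)
```

The two solves agree exactly, which is what makes the summary-statistic route practical when only β̂ and an LD reference are available.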
Using Eqs (2) and (3), it is straightforward to show the (approximate) relationship between the regularized effect size estimates and the true coefficient values (17) As described in the main text, the accuracy of this relationship is dependent upon both the sample size and narrow-sense heritability of the trait of interest (S1 Fig). Indeed, if Σ is full rank and regularization is no longer implemented (i.e., ϑ = 0), is simply the ordinary least squares solution for marginal GWA summary statistics with asymptotic variance-covariance under the null model [18, 96, 97]. In the limiting case where the number of observations in a GWA study is large (i.e., N → ∞) and the trait of interest is highly heritable, converges onto β in expectation, and thus is assumed to be independently and normally distributed under the null hypothesis with asymptotic variance (previously discussed in Eq (5)). As empirically demonstrated for synthetic traits in the current study, we are rarely in situations where we expect the regularized effect size estimates to have completely converged onto the true generative SNP-level coefficients (again see S1 Fig). This effectively means that we cannot expect each to be completely independent under the null hypothesis in practice. We accommodate this realization by assuming that under the null model (18) Our reasoning for the formulation above is that, for most quality-controlled studies, SNPs in perfect LD will have been pruned such that ρ(x[j], x[j′]) < ρ(x[j], x[j]) = 1 for all j ≠ j′ variants in the data. Therefore, when traits are generated under the idealized null scenario with large sample sizes and no genetic effects, the estimate of and the off-diagonals of will approach zero more quickly than the diagonal elements, thereby allowing the regularized estimates to converge asymptotically onto the true coefficients β.
When this scenario does not occur, we are able to appropriately deal with the remaining correlation structure (e.g., all the simulation scenarios explored in this work; see Fig 3, S2–S24 Figs, Table 1, and S1–S17 Tables). Using the SNP-level null threshold to detect enriched genes We now formalize the hypothesis test for identifying significantly enriched genes conditioned on the SNP-level null threshold , which we compute using the variance component estimates from the EM algorithm detailed in the previous section. The gene-ε gene-level test statistic is based on a quadratic form of GWA summary statistics, a common approach for generating gene-level test statistics for complex traits. Let gene (or genomic region) g represent a known set of SNPs ; for example, may include SNPs within the boundaries of g and/or within its corresponding regulatory region. Here, we conformably partition the regularized GWA effect size estimates and define the gene-level test statistic (19) where A is an arbitrary symmetric and positive semi-definite weight matrix. We set A = I, the identity matrix, for all analyses in the current study; hence, Q[g] simplifies to a sum of squared SNP effects in the g-th gene. Indeed, similar quadratic forms have been implemented to assess the enrichment of mutations at the gene level [7, 12] and across general SNP-sets [9, 20, 28, 80]. A key feature of the gene-ε framework is to assess the statistic in Eq (19) against a gene-level enrichment null hypothesis H[0]: Q[g] = 0 that is dependent on the SNP-level null threshold .
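With A = I, the statistic in Eq (19) reduces to a sum of squares over the SNPs annotated to the gene. A minimal sketch (the SNP indices and effect sizes below are made up for illustration):

```python
import numpy as np

def gene_stat(beta_reg, snp_index, A=None):
    """Gene-level quadratic form Q_g = b_g' A b_g (Eq 19).

    With A = None (i.e., A = I), this is the sum of squared regularized
    SNP-level effects for the SNPs annotated to gene g."""
    b_g = np.asarray(beta_reg, dtype=float)[list(snp_index)]
    if A is None:
        return float(b_g @ b_g)
    return float(b_g @ A @ b_g)

# Regularized effect sizes for six SNPs; gene g is annotated with three of them.
beta_reg = np.array([0.9, 0.4, 0.01, 0.0, -0.02, 0.5])
Q_g = gene_stat(beta_reg, [0, 1, 5])
```

Re-weighting SNPs (e.g., up-weighting functionally annotated variants) amounts to passing a different symmetric positive semi-definite A.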
Due to the normality assumption for each SNP effect in Eq (5), Q[g] is theoretically assumed to follow a mixture of chi-square distributions, (20) where denotes the cardinality of the set of SNPs ; are standard chi-square random variables with one degree of freedom; and are the eigenvalues of the matrix [105, 106]. In the current study, the variance component is set from the estimates in Eq (15), and Σ[g] denotes a subset of the LD matrix only containing SNPs annotated in the g-th SNP-set. When A = I, the eigenvalues are based on a scaled version of the local gene-specific LD matrix. Several approximate and exact methods have been suggested to obtain P-values under a mixture of chi-square distributions. In this study, we use Imhof’s method [26], where we empirically compute an estimate of the weighted sum in Eq (20) and compare this distribution to the observed test statistic in Eq (19) (see Software Details). It is important to note here that the gene-level null hypothesis is the same for gene-ε and other similar competing enrichment methods [9, 12, 20, 28, 80]; the defining characteristic that sets gene-ε apart is that it assumes a different null distribution for effects at the SNP level. Estimating gene-specific contributions to the PVE. In the main text, we highlight some of the additional features of the gene-ε gene-level association test statistic. First, the expected enrichment for trait-associated mutations in a given gene is equal to the heritability explained by the SNPs contained in said gene. Formally, consider the expansion of Eq (19) derived from the expectation of quadratic forms, (21) where denotes the heritability contributed by gene g.
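The weighted chi-square null in Eq (20) can be illustrated with a Monte Carlo stand-in for Imhof’s method (gene-ε itself uses the exact CompQuadForm implementation; the function, gene size, and input values below are toy assumptions):

```python
import numpy as np

def gene_pvalue_mc(Q_obs, Sigma_g, sigma2_null, draws=200_000, seed=3):
    """Approximate p-value for Q_g under the mixture-of-chi-squares null (Eq 20).

    The eigenvalues of sigma2_null * Sigma_g weight independent 1-df
    chi-square draws; the p-value is the fraction of draws exceeding Q_obs."""
    lam = np.linalg.eigvalsh(sigma2_null * Sigma_g)
    rng = np.random.default_rng(seed)
    null_draws = rng.chisquare(df=1, size=(draws, len(lam))) @ lam
    return float((null_draws >= Q_obs).mean())

Sigma_g = np.array([[1.0, 0.3],
                    [0.3, 1.0]])          # local LD for a toy 2-SNP gene
p_small = gene_pvalue_mc(0.05, Sigma_g, sigma2_null=0.1)  # weak enrichment
p_large = gene_pvalue_mc(5.00, Sigma_g, sigma2_null=0.1)  # strong enrichment
```

Larger observed statistics relative to the LD-scaled null yield smaller p-values, which is the basis of the gene-level test.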
When A = I (as in the current study), the gene-ε hypothesis test for identifying enriched genes is based on the individual SNP contributions to the narrow-sense heritability (i.e., the sum of the expectation of squared SNP effects; see also [34]) (22) Alternatively, one could choose to re-weight these contributions by specifying A otherwise [12, 20, 105, 107, 108]. For example, if SNP j has a small effect size but is known to be functionally associated with the trait of interest, then increasing A[jj] will reflect this knowledge. Specific weight functions have also been suggested for dealing with rarer variants [9, 28, 80]. Simulation studies We used a simulation scheme to generate SNP-level summary statistics for GWA studies. First, we randomly select a set of enriched genes and assume that complex traits (under various genetic architectures) are generated via a linear model (23) where y is an N-dimensional vector containing all the phenotypes; represents the set of causal SNPs contained within the associated genes; x[c] is the genotype for the c-th causal SNP encoded as 0, 1, or 2 copies of a reference allele; β[c] is the additive effect size for the c-th SNP; W is an N×M matrix of covariates representing additional population structure (e.g., the top ten principal components from the genotype matrix) with corresponding fixed effects b; and e is an N-dimensional vector of environmental noise. The phenotypic variance is assumed . The effect sizes of SNPs in enriched genes are randomly drawn from standard normal distributions and then rescaled so that they explain a fixed proportion of the narrow-sense heritability . The covariate coefficients are also drawn from standard normal distributions and then rescaled such that . GWA summary statistics are then computed by fitting a single-SNP univariate linear model via ordinary least squares (OLS): for every SNP in the data, j = 1, …, J.
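The generative scheme in Eq (23) can be sketched as follows; this is a minimal simulation under stated assumptions, with illustrative sample sizes, heritability values, and variable names rather than the exact settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, J, M = 1000, 100, 10
h2, rho = 0.6, 0.1            # narrow-sense heritability; covariate PVE

maf = rng.uniform(0.1, 0.5, size=J)
X = rng.binomial(2, maf, size=(N, J)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized genotypes
W = rng.normal(size=(N, M))                # stand-in for top principal components

causal = rng.choice(J, size=10, replace=False)  # SNPs in "enriched genes"
beta = np.zeros(J)
beta[causal] = rng.normal(size=10)
beta *= np.sqrt(h2 / np.var(X @ beta))     # rescale so Var(X beta) = h2

b = rng.normal(size=M)
b *= np.sqrt(rho / np.var(W @ b))          # rescale so Var(W b) = rho

e = rng.normal(size=N) * np.sqrt(1.0 - h2 - rho)  # environmental noise
y = X @ beta + W @ b + e

# GWA summary statistics: per-SNP univariate OLS on standardized genotypes.
beta_ols = X.T @ y / N
```

Rescaling the genetic and covariate components fixes their contributions to the phenotypic variance, so the resulting summary statistics reflect the intended architecture.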
These effect size estimates, along with an LD matrix Σ computed directly from the full N×J genotype matrix X, are given to gene-ε. We also retain standard errors and P-values for implementation of the competing methods (VEGAS, PEGASUS, RSS, SKAT, and MAGMA). Given different model parameters, we simulate data mirroring a wide range of genetic architectures (Supporting Information). Software details Source code implementing gene-ε, along with tutorials, is freely available at https://github.com/ramachandran-lab/genee; the software is written in R (version 3.3.3). Within this software, regularization of the OLS SNP-level effect sizes is done using the package glmnet (version 2.0-16) [109]. For large datasets, such as the UK Biobank, the software also offers regularization using the biglasso package (version 1.3-6) [110] to help with memory and scalability requirements. Note that selection of the free parameter t is done the same way in both the glmnet and biglasso packages. Both packages also take an α ∈ [0, 1] to specify fitting the Ridge, Elastic Net, or LASSO regularization to the OLS SNP-level effect sizes. The fitting of a K-mixture of Gaussian distributions for the estimation of the SNP-level null threshold is done using the package mclust (version 5.4.3) [103]. Lastly, the package CompQuadForm (version 1.4.3) was used to compute gene-ε gene-level P-values with Imhof’s method [26, 111]. Comparisons in this work were made using software for MAGMA (version 1.07b; https://ctg.cncr.nl/software/magma), PEGASUS (version 1.3.0; https://github.com/ramachandran-lab/PEGASUS), RSS (version 1.0.0; https://github.com/stephenslab/rss), SKAT (version 1.3.2.1; https://www.hsph.harvard.edu/skat), and VEGAS (version 2.0.0; https://vegas2.qimrberghofer.edu.au), all of which are publicly available. See all other relevant URLs below.
gene-ε software, https://github.com/ramachandran-lab/genee; UK Biobank, https://www.ukbiobank.ac.uk; Database of Genotypes and Phenotypes (dbGaP), https://www.ncbi.nlm.nih.gov/gap; NHGRI-EBI GWAS Catalog, https://www.ebi.ac.uk/gwas/; UCSC Genome Browser, https://genome.ucsc.edu/index.html; Enrichr software, http://amp.pharm.mssm.edu/Enrichr/; SNP-set (Sequence) Kernel Association Test (SKAT) software, https://www.hsph.harvard.edu/skat; Multi-marker Analysis of GenoMic Annotation (MAGMA) software, https://ctg.cncr.nl/software/magma; Precise, Efficient Gene Association Score Using SNPs (PEGASUS) software, https://github.com/ramachandran-lab/PEGASUS; Regression with Summary Statistics (RSS) enrichment software, https://github.com/stephenslab/rss; Versatile Gene-based Association Study (VEGAS) version 2, https://vegas2.qimrberghofer.edu.au. Supporting information S1 Fig. Simulation study results showing the Pearson correlation between various degrees of gene-ε regularized SNP-level effect size estimates and the true effect sizes that generated the complex traits. Assessed regularization techniques are the (A) LASSO [23], (B) Elastic Net [24], (C) Ridge Regression [25], and (D) no regularization of ordinary least squares (OLS) effect sizes, which serves as a baseline. Here, we take real genotype data on chromosome 19 from N = 5,000 randomly chosen individuals of European ancestry in the UK Biobank (see S1 Text). We then assumed a simple linear additive model for quantitative traits while varying the narrow-sense heritability (h^2 = {0.01, 0.05, 0.10, 0.15, 0.20, 0.25}). We considered two scenarios where traits are generated with and without additional population structure (colored as pink and blue lines, respectively). In the former setting, phenotypes are simulated while also using the top five principal components (PCs) of the genotype matrix as covariates to create stratification. These PCs contributed 10% of the phenotypic variance.
In both settings, GWA SNP-level effect sizes were derived via OLS without accounting for any additional structure. The y-axis shows Pearson correlation between gene-ε regularized effect sizes and the truth. On the x-axis of each plot, we vary the percentage of causal SNPs for each trait (i.e., {1, 5, 10, 15, 20, 25}%). Results are based on ten replicates (see S1 Text), with the error bars representing standard errors across runs. S2 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations (N = 5,000; h^2 = 0.2). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S3 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations (N = 10,000; h^2 = 0.2).
Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S4 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations (N = 5,000; h^2 = 0.6). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10].
(A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S5 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with population stratification (N = 5,000; h^2 = 0.2). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations.
Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S6 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with population stratification (N = 10,000; h^2 = 0.2). Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S7 Fig.
(A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with population stratification (N = 5,000; h^2 = 0.6). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S8 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with population stratification (N = 10,000; h^2 = 0.6). Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6.
In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S9 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 5,000; h^2 = 0.2). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations.
We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S10 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 10,000; h^2 = 0.2). Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1.
(B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S11 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 5,000; h^2 = 0.6). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S12 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 10,000; h^2 = 0.6).
Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. We compute standard GWA SNP-level effect sizes (estimated using ordinary least squares). Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S13 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 5,000; h^2 = 0.2). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations.
We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text). S14 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 10,000; h^2 = 0.2). Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.2. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange).
We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text).
S15 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 5,000; h^2 = 0.6). Here, the sample size N = 5,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates.
(A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1. (B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text).
S16 Fig. (A, C) Receiver operating characteristic (ROC) and (B, D) precision-recall curves comparing the performance of gene-ε and competing approaches in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 10,000; h^2 = 0.6). Here, the sample size N = 10,000 and the narrow-sense heritability of the simulated quantitative trait is h^2 = 0.6. In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results for gene-ε are shown with LASSO (blue), Elastic Net (EN; red), and Ridge Regression (RR; purple) regularizations. We also show the results of gene-ε without regularization to illustrate the importance of the regularization step (labeled OLS; orange). We compare gene-ε with five existing methods: PEGASUS (brown) [12], VEGAS (teal) [7], the Bayesian approach RSS (black) [14], SKAT (green) [20], and MAGMA (peach) [10]. Note that each method was implemented without using any covariates. (A, C) ROC curves show power versus false positive rate for each approach under sparse (1% enriched genes) and polygenic (10% enriched genes) architectures, respectively. Note that the upper limit of the x-axis has been truncated at 0.1.
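As context for the ROC curves described in these legends, each point on such a curve can be computed from gene-level P-values by sweeping a significance threshold. The sketch below (a hypothetical helper, not part of the gene-ε software) computes one (false positive rate, power) point, treating the truly enriched genes as positives:

```python
def roc_point(p_values, is_enriched, threshold):
    """One (FPR, power) point on an ROC curve: a gene is called significant
    when its gene-level P-value falls below the chosen threshold."""
    tp = sum(p < threshold for p, e in zip(p_values, is_enriched) if e)
    fp = sum(p < threshold for p, e in zip(p_values, is_enriched) if not e)
    n_pos = sum(is_enriched)            # truly enriched genes
    n_neg = len(is_enriched) - n_pos    # non-enriched genes
    return fp / n_neg, tp / n_pos       # (false positive rate, power)

# Toy example: five genes, two of them truly enriched.
fpr, power = roc_point([1e-6, 0.2, 0.5, 1e-4, 0.9],
                       [True, False, False, True, False],
                       threshold=1e-3)
```

Sweeping the threshold from 0 to 1 traces the full curve; truncating the x-axis at 0.1, as in the panels above, zooms into the low-false-positive region that matters in practice.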
(B, D) Precision-Recall curves for each method applied to the simulations. Note that, in the sparse case (1% enriched genes), the top ranked genes are always true positives, and therefore the minimal recall is not 0. All results are based on 100 replicates (see S1 Text).
S17 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations (h^2 = 0.2). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.2 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In each case, standard GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares). Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 3.55×10^−5 corrected for the 1,408 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S18 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations (h^2 = 0.6). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.6 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D).
In each case, standard GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares). Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 3.55×10^−5 corrected for the 1,408 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S19 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with population stratification (h^2 = 0.2). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.2 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis.
The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 3.55×10^−5 corrected for the 1,408 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S20 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with population stratification (h^2 = 0.6). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.6 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 3.55×10^−5 corrected for the 1,408 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches.
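The Bonferroni-corrected thresholds quoted throughout these legends are simply the significance level α = 0.05 divided by the number of genes tested; a quick arithmetic check of the values used here:

```python
# Bonferroni correction: divide the significance level by the number of tests.
alpha = 0.05
t_chr1 = alpha / 1408         # chromosome 1, UCSC gene boundaries (~3.55e-5)
t_chr1_buf = alpha / 1916     # chromosome 1, boundaries + 50 kb buffer (~2.61e-5)
t_genome = alpha / 14322      # all autosomal genes, UCSC boundaries (~3.49e-6)
t_genome_buf = alpha / 17680  # all autosomal genes + 50 kb buffer (~2.83e-6)
```

Widening the gene boundaries by the 50 kb buffer increases the number of genes with at least one assigned SNP, which is why the buffered analyses use a stricter threshold.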
Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S21 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (h^2 = 0.2). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.2 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In each case, standard GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares). Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 2.61×10^−5 corrected for the 1,916 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S22 Fig.
Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (h^2 = 0.6). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.6 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In each case, standard GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares). Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 2.61×10^−5 corrected for the 1,916 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S23 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (h^2 = 0.2). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.2 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates.
GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 2.61×10^−5 corrected for the 1,916 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S24 Fig. Scatter plots assessing how regularization on SNP-level summary statistics affects the ability to identify enriched genes in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (h^2 = 0.6). Here, the narrow-sense heritability of the simulated quantitative traits is h^2 = 0.6 and sample sizes are set to N = 5,000 in (A, B) and N = 10,000 in (C, D). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure.
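To give intuition for why regularizing the OLS effect sizes changes which genes are identified in these scatter plots, recall the standard result that in the idealized case of an orthonormal genotype design the LASSO solution reduces to soft-thresholding of the OLS estimates (this is a textbook illustration, not the full gene-ε fitting procedure): small noisy effects are set exactly to zero while large effects are shrunk.

```python
def soft_threshold(beta_ols, lam):
    """LASSO shrinkage of a single OLS effect size under an orthonormal
    design: effects within lam of zero vanish, larger ones shrink by lam."""
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0

# Noisy marginal effect sizes: the two large effects survive, the rest vanish.
betas = [0.8, -0.05, 0.02, -0.6, 0.1]
shrunk = [soft_threshold(b, lam=0.15) for b in betas]
```

Elastic Net behaves analogously but mixes in Ridge-style smooth shrinkage; Ridge alone never produces exact zeros, which is one reason the three regularizations can rank genes differently.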
Results are shown comparing the -log10 transformed gene-level P-values derived by gene-ε with Elastic Net (EN) regularization on the y-axis and without regularization (labeled as OLS) on the x-axis. The horizontal and vertical dashed lines are marked at the Bonferroni-corrected threshold P = 2.61×10^−5 corrected for the 1,916 genes on chromosome 1 from the UK Biobank genotype data. True positive causal genes used to generate the synthetic phenotypes are colored in red, while non-causal genes are given in grey. Genes in the top right quadrant are selected by both approaches. Genes in the top left and bottom right quadrants are uniquely identified by gene-ε-EN and gene-ε-OLS, respectively. To illustrate the importance of regularization on SNP-level summary statistics, we highlight the true positive genes only identified by gene-ε-EN in blue. Each plot combines results from 100 simulated replicates (see S1 Text).
S25 Fig. Gene-level association results from applying gene-ε to body height (panels A and C) and mean platelet volume (MPV; panels B and D), assayed in European-ancestry individuals in the UK Biobank with UCSC RefSeq gene boundaries augmented by a 50 kilobase (kb) buffer. Body height has been estimated to have a narrow-sense heritability h^2 in the range of 0.45 to 0.80 [6, 31–39], while MPV has been estimated to have h^2 between 0.50 and 0.70 [33, 34, 58]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes for (A) body height and (B) MPV. The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 2.83×10^−6 correcting for 17,680 autosomal genes analyzed). We color code all significant genes identified by gene-ε in orange, and annotate genes overlapping with the database of Genotypes and Phenotypes (dbGaP).
In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε. We highlight categories with Q-values (i.e., false discovery rates) less than 0.05 and annotate corresponding genes in the Manhattan plots in (A) and (B), respectively. For height, the most enriched dbGaP category is “Body Height”, with 5 of the genes identified by gene-ε appearing in this category. For MPV, the four significant dbGaP categories are “Platelet Count”, “Behcet Syndrome”, “Psoriasis”, and “Face”, all of which have been connected to the trait [57, 60, 61, 112, 113].
S26 Fig. Gene-level association results from applying gene-ε to body mass index (BMI), assayed in European-ancestry individuals in the UK Biobank. BMI has been estimated to have a narrow-sense heritability h^2 ranging from 0.25 to 0.60 [31, 33, 34, 36, 37, 39, 42–45]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (A) using UCSC annotations directly, and (B) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 3.49×10^−6 and P = 2.83×10^−6 correcting for the 14,322 and 17,680 autosomal genes analyzed, respectively). We color code all significant genes identified by gene-ε in orange, and annotate genes previously associated with BMI in the database of Genotypes and Phenotypes (dbGaP). In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε in (A) and (B), respectively. While many of the scored categories are biologically related to BMI (e.g., “Body Mass Index”, “Adiposity”, and “Arteries”) [66, 114–116], none of them had Q-values (i.e., false discovery rates) less than 0.05.
S27 Fig.
Gene-level association results from applying gene-ε to mean corpuscular volume (MCV), assayed in European-ancestry individuals in the UK Biobank. MCV has been estimated to have a narrow-sense heritability h^2 in the range of 0.20 to 0.60 [33, 34, 117, 118]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (A) using UCSC annotations directly, and (B) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 3.49×10^−6 and P = 2.83×10^−6 correcting for the 14,322 and 17,680 autosomal genes analyzed, respectively). We color code all significant genes identified by gene-ε in orange, and annotate genes previously associated with MCV in the database of Genotypes and Phenotypes (dbGaP). In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε. We highlight categories with Q-values (i.e., false discovery rates) less than 0.05 and annotate corresponding genes in the Manhattan plots in (A) and (B), respectively. The dbGaP categories significantly enriched for gene-level associations with MCV included “Transferrin”, “Erythrocyte Indices”, “Hematocrit”, “Narcolepsy”, and “Iron”, all of which have been connected to the trait [50–57].
S28 Fig. Gene-level association results from applying gene-ε to platelet count (PLC), assayed in European-ancestry individuals in the UK Biobank. PLC has been estimated to have a narrow-sense heritability h^2 ranging from 0.55 to 0.80 [33, 34, 58]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (A) using UCSC annotations directly, and (B) augmenting the gene boundaries by adding SNPs within a ±50kb buffer.
The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 3.49×10^−6 and P = 2.83×10^−6 correcting for the 14,322 and 17,680 autosomal genes analyzed, respectively). We color code all significant genes identified by gene-ε in orange, and annotate genes previously associated with PLC in the database of Genotypes and Phenotypes (dbGaP). In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε. We highlight categories with Q-values (i.e., false discovery rates) less than 0.05 and annotate corresponding genes in the Manhattan plots in (A) and (B), respectively. The most significant dbGaP category is “Platelet Count” for both SNP-to-gene annotation schemes. The other significant dbGaP category was “Smoking”, which has been previously connected to PLC [61, 119, 120].
S29 Fig. Gene-level association results from applying gene-ε to waist-hip ratio (WHR), assayed in European-ancestry individuals in the UK Biobank. WHR has been estimated to have a narrow-sense heritability h^2 ranging from 0.10 to 0.25 [31, 33, 35, 42, 45, 121]. Manhattan plots of gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (A) using UCSC annotations directly, and (B) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. The purple dashed line indicates a log-transformed Bonferroni-corrected significance threshold (P = 3.49×10^−6 and P = 2.83×10^−6 correcting for the 14,322 and 17,680 autosomal genes analyzed, respectively). We color code all significant genes identified by gene-ε in orange, and annotate genes previously associated with WHR in the database of Genotypes and Phenotypes (dbGaP).
In (C) and (D), we conduct gene set enrichment analysis using Enrichr [46, 59] to identify dbGaP categories enriched for significant gene-level associations reported by gene-ε in (A) and (B), respectively. While many of the scored categories are biologically related to WHR (e.g., “Body Mass Index”, “Adiposity”, and “Inflammatory Bowel Diseases”) [122, 123], none of them had Q-values (i.e., false discovery rates) less than 0.05.
S1 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations (N = 5,000; h^2 = 0.2). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S2 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations (N = 10,000; h^2 = 0.2).
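The Q-values reported in the Enrichr analyses above are false-discovery-rate-adjusted P-values; the Benjamini–Hochberg procedure is the standard way to produce them. The sketch below is a generic illustration of that procedure (Enrichr's own implementation may differ in details):

```python
def bh_qvalues(pvals):
    """Benjamini-Hochberg adjusted P-values (Q-values): scale each P-value
    by m/rank, then enforce monotonicity from the largest rank downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from least to most significant
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        q[i] = running_min
    return q

# Categories with Q < 0.05 would be called significantly enriched.
qs = bh_qvalues([0.001, 0.02, 0.03, 0.5])
```

In this toy example the first three categories pass the Q < 0.05 cutoff used in the legends, while the last does not.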
We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S3 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations (N = 5,000; h^2 = 0.6). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10].
Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S4 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations (N = 10,000; h^2 = 0.6). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S5 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with population stratification (N = 5,000; h^2 = 0.2).
In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S6 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with population stratification (N = 10,000; h^2 = 0.2). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure.
We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. Of these, RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while the method with the lowest FDR is bolded in blue.
S7 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with population stratification (N = 5,000; h^2 = 0.6). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS).
Additionally, we compare the performance gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. The last is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR is bolded in blue. S8 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with population stratification (N = 10,000; h^2 = 0.6). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferonni-corrected threshold P = 3.55×10^−5, corrected for 1,408 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. The last is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates and standard deviations of the estimates across runs are given in the parentheses. 
Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S9 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 5,000; h^2 = 0.2). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S10 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 10,000; h^2 = 0.2). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S11 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 5,000; h^2 = 0.6). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S12 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer (N = 10,000; h^2 = 0.6). We computed standard GWA SNP-level effect sizes (estimated using ordinary least squares) as input to each method listed. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. S13 Table.
Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 5,000; h^2 = 0.2). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S14 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 10,000; h^2 = 0.2). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S15 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 5,000; h^2 = 0.6). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S16 Table. Empirical power and false discovery rates (FDR) for detecting enriched genes (genes containing at least one causal SNP) after correcting for multiple hypothesis testing in simulations with gene boundaries augmented by a 50 kilobase (kb) buffer and with population stratification (N = 10,000; h^2 = 0.6). In this simulation, traits were generated while using the top five principal components (PCs) of the genotype matrix as covariates. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares) without any control for the additional structure. We show the power of gene-ε to identify enriched genes under the Bonferroni-corrected threshold P = 2.61×10^−5, corrected for 1,916 genes simulated using chromosome 1 from the UK Biobank genotype data (see S1 Text). Results for gene-ε are shown with LASSO, Elastic Net (EN), and Ridge Regression (RR) regularizations. We also show the power of gene-ε without regularization to illustrate the importance of this step (OLS). Additionally, we compare the performance of gene-ε with five existing methods: PEGASUS [12], VEGAS [7], RSS [14], SKAT [20], and MAGMA [10]. RSS is a Bayesian method and is evaluated based on the “median probability criterion” (i.e., the posterior enrichment probability of a gene is greater than 0.5). All results are based on 100 replicates, and standard deviations of the estimates across runs are given in parentheses. Approaches with the greatest power are bolded in purple, while methods with the lowest FDR are bolded in blue. S17 Table. Empirical type I error estimates using different gene-ε approaches. Here, quantitative traits are simulated with just noise randomly drawn from standard normal distributions. This represents the scenario in which all SNPs are non-causal and satisfy the conventional null hypothesis H[0]: β[j] = 0. GWA summary statistics were computed by fitting a single-SNP univariate linear model (via ordinary least squares). Each table entry lists the mean type I error rate estimate for the four gene-ε modeling approaches, computed as the proportion of P-values falling under a given significance level α. Empirical size for the analyses used significance levels of α = 0.05, 0.01, 0.001, and 2.61×10^−5 (the Bonferroni-corrected threshold), respectively. Sample sizes of the individual-level data (used to derive the summary statistics) were set to N = 5,000 and 10,000 observations. These results are based on 100 simulated datasets, and the standard errors across replicates are included in parentheses. Overall, gene-ε controls the type I error rate for reasonably sized datasets, and can be slightly conservative when the sample size is small and the GWA summary statistics are less precise/more inflated. S18 Table. Characterization of the genetic architectures of six traits assayed in European-ancestry individuals in the UK Biobank. Here, we report the difference that regularization makes when gene-ε characterizes ε-genic effects in complex traits.
Results are shown for Elastic Net (which is highlighted in the main text). We also show results when no shrinkage is applied to illustrate the importance of this step (denoted by OLS). In the former case, we regress the GWA SNP-level effect size estimates onto chromosome-specific LD matrices to derive a regularized set of summary statistics. gene-ε assumes a reformulated null distribution of SNP-level effects, where is the SNP-level null threshold and represents the maximum proportion of phenotypic variance explained (PVE) by a spurious or non-associated SNP. We used an EM-algorithm with 100 iterations to fit K-mixture Gaussian models over the regularized effect sizes to estimate. Here, each mixture component had distinctively smaller variances (with the K-th component fixed at ), and the number of total mixture components K was chosen based on a grid of values where the best model yielded the highest Bayesian Information Criterion (BIC). We assume associated SNPs appear in the first component, non-associated SNPs appear in the last component, and null SNPs with spurious effects fall in between. Thus, a SNP is considered to have some level of association with a trait if ; while a SNP is considered “causal” if . Column 3 gives the K used for each trait. Columns 4 and 5 detail the percentages of associated and causal SNPs, respectively. The last column gives the mean threshold for ε-genic effects across the chromosomes. S19 Table. Significant genes for body height in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer.
Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S20 Table. Significant genes for body mass index (BMI) in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S21 Table. Significant genes for mean corpuscular volume (MCV) in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S22 Table. Significant genes for mean platelet volume (MPV) in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S23 Table. Significant genes for platelet count (PLC) in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S24 Table. Significant genes for waist-hip ratio (WHR) in the UK Biobank analysis using gene-ε-EN. Here, we analyze 17,680 genes from N = 349,468 individuals of European ancestry. This file gives the gene-ε gene-level association P-values using Elastic Net regularized effect sizes when gene boundaries are defined by (page 1) using UCSC annotations directly, and (page 2) augmenting the gene boundaries by adding SNPs within a ±50kb buffer. Significance was determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/14322 autosomal genes = 3.49×10^−6 and P = 0.05/17680 autosomal genes = 2.83×10^−6, respectively). The columns of the tables on both pages provide: (1) chromosome position; (2) gene name; (3) gene-ε-EN gene P-value; (4) gene-specific heritability estimates; (5) whether or not an association between gene and trait is listed in the GWAS catalog (marked as “yes” or “no”); (6-7) the starting and ending genomic positions of the gene; (8) number of SNPs within a gene that were included in analysis; (9) the most significant SNP according to GWA summary statistics; (10) the P-value of the most significant SNP; and, on the first page, (11) the corresponding gene-level posterior enrichment probability as found by RSS for comparison. Note that an “NA” in column (11) occurs wherever the MCMC for RSS failed to converge. Highlighted rows represent enriched genes whose top SNP is not marginally significant according to a genome-wide Bonferroni-corrected threshold (P = 4.67×10^−8, correcting for 1,070,306 SNPs analyzed). S25 Table. Characterization of the genetic architectures of six traits assayed in European-ancestry individuals in the UK Biobank (using un-imputed genotypes). Here, we report the way different regularizations in gene-ε characterize ε-genic effects in complex traits. Results are shown for Elastic Net (which is highlighted in the main text), as well as for LASSO and Ridge Regression. We also show results when no shrinkage is applied to illustrate the importance of this step (denoted by OLS). In the three former cases, we regress the GWA SNP-level effect size estimates onto chromosome-specific LD matrices to derive a regularized set of summary statistics. gene-ε assumes a reformulated null distribution of SNP-level effects, where is the SNP-level null threshold and represents the maximum proportion of phenotypic variance explained (PVE) by a spurious or non-associated SNP. We used an EM-algorithm with 100 iterations to fit K-mixture Gaussian models over the regularized effect sizes to estimate.
Here, each mixture component had distinctively smaller variances (with the K-th component fixed at ), and the number of total mixture components K was chosen based on a grid of values where the best model yielded the highest Bayesian Information Criterion (BIC). We assume associated SNPs appear in the first component, non-associated SNPs appear in the last component, and null SNPs with spurious effects fall in between. Thus, a SNP is considered to have some level of association with a trait if ; while a SNP is considered “causal” if . Column 3 gives the K used for each trait. Columns 4 and 5 detail the percentages of associated and causal SNPs, respectively. The last column gives the mean threshold for ε-genic effects across the chromosomes. S26 Table. Comparison of the different gene-ε approaches on the six quantitative traits assayed in European-ancestry individuals from the UK Biobank un-imputed genotype data. Traits include: height; body mass index (BMI); mean corpuscular volume (MCV); mean platelet volume (MPV); platelet count (PLC); and waist-hip ratio (WHR). Here, we list the number of significant genes found when using gene-ε with various regularization strategies, as well as the number of dbGaP categories enriched for significant genes identified by gene-ε. We also assess how well these results overlap with the gene-ε-EN findings that were reported in the main text. Significant genes were determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/13029 autosomal genes = 3.84×10^−6). Enriched dbGaP categories were those with Enrichr Q-values (i.e., false discovery rates) less than 0.05. S27 Table. Comparison of the different gene-ε approaches on the six quantitative traits assayed in European-ancestry individuals from the UK Biobank un-imputed genotype data with gene boundaries augmented by a 50 kilobase (kb) buffer.
Traits include: height; body mass index (BMI); mean corpuscular volume (MCV); mean platelet volume (MPV); platelet count (PLC); and waist-hip ratio (WHR). Here, we list the number of significant genes found when using gene-ε with various regularization strategies, as well as the number of dbGaP categories enriched for significant genes identified by gene-ε. We also assess how well these results overlap with the gene-ε-EN findings that were reported in the main text. Significant genes were determined by using a Bonferroni-corrected P-value threshold (in our analyses, P = 0.05/17680 autosomal genes = 2.83×10^−6). Enriched dbGaP categories were those with Enrichr Q-values (i.e., false discovery rates) less than 0.05. S1 Text. Supplementary and background information for results mentioned in the main text. Specifically, we give a description of the data quality control procedures, the simulation setup and scenarios, a review of other competing gene-level association methods, and additional results for the traits analyzed from the UK Biobank. The authors would like to thank the Editor, Associate Editor, Doug Speed, and the other two anonymous referees for their constructive comments. The authors also thank Xiang Zhu (Stanford University) for help with the implementation of RSS, as well as Sam Smith (Brown University) for help with the management of the UK Biobank data. This research was conducted using the UK Biobank Resource under Application Number 22419, and part of this research was conducted using computational resources and services at the Center for Computation and Visualization (CCV), Brown University. S. Ramachandran also acknowledges support from a Natural Sciences Fellowship at the Swedish Collegium for Advanced Study (Spring 2019), and by the Erling-Persson Family Foundation and the Knut and Alice Wallenberg Foundation.
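The table legends above repeatedly reference empirical power and FDR estimated under a Bonferroni-corrected significance threshold (e.g., P = 0.05/1,408 genes ≈ 3.55×10^−5). A minimal sketch of that bookkeeping is below; the function and variable names are illustrative, not taken from the gene-ε software.

```python
import numpy as np

def power_and_fdr(p_values, is_enriched, alpha=0.05, n_tests=None):
    """Empirical power and FDR at a Bonferroni-corrected threshold.

    p_values    : gene-level association P-values
    is_enriched : True for genes containing at least one causal SNP
    n_tests     : number of tests for the Bonferroni correction
                  (defaults to the number of genes supplied)
    """
    p_values = np.asarray(p_values, dtype=float)
    is_enriched = np.asarray(is_enriched, dtype=bool)
    if n_tests is None:
        n_tests = len(p_values)
    threshold = alpha / n_tests  # e.g. 0.05 / 1408 ≈ 3.55e-5
    called = p_values < threshold
    true_pos = np.sum(called & is_enriched)
    false_pos = np.sum(called & ~is_enriched)
    # Power: fraction of truly enriched genes that were detected.
    power = true_pos / max(np.sum(is_enriched), 1)
    # FDR: fraction of detected genes that are not truly enriched.
    fdr = false_pos / max(np.sum(called), 1)
    return power, fdr
```

Averaging these two quantities over the 100 simulation replicates (and taking standard deviations across runs) would yield entries of the form reported in the tables.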
Wind Turbine Technology

From energypedia

A wind turbine is a device that converts kinetic energy from the wind into electrical power^[1]. The role of a wind turbine is therefore to extract energy from the wind and convert it into electrical energy. This extraction is subject to a fundamental limitation, Betz's limit, which gives the maximum fraction of the wind's kinetic energy that can be converted into mechanical energy without any losses^[2].

Basic Theory

An ideal wind turbine has a maximum power coefficient of 16/27. This theoretical limit cannot be exceeded, and real turbines fall below it because of aerodynamic losses due to the conversion of angular momentum, tip losses, and drag. Turbine power output can be expressed as follows:

P = ½ · ρ · A · v³ · Cp

where ρ is the density of air, A is the area swept by the rotor, v is the wind velocity, and Cp is the power coefficient. The extracted mechanical power is then converted into electrical power, defined as P = V · I, where P is the electrical power in watt, V is the voltage in volt, and I is the current in ampere. According to Betz's theory, the maximum possible Cp = 16/27 ≈ 0.59, meaning that roughly 59% efficiency is the best a conventional wind turbine can achieve in extracting power from the wind.

Stall control:
• Passive stall: the power of the wind turbine is limited by the aerodynamic characteristics of the turbine.
• Active stall: the power of the wind turbine is additionally limited by decreasing the pitch angle (increasing the inflow angle α).

Pitch control:
• The power of the wind turbine is limited by increasing the pitch angle (decreasing the inflow angle α).

Wind Turbine Operation

Operation of a fixed-speed wind turbine (passive stall):
• Start-up (with open breaker) if wind speed > cut-in wind speed
• Close breaker
• Operation at a constant blade angle over the whole wind speed range
• In case of large wind speeds: power limited by the aerodynamic profile

Operation of variable-speed wind turbines:
• Start-up (with open breaker) if wind speed > cut-in wind speed
• Close breaker
• Below rated wind speed: operation at maximum power coefficient (maximum power tracking); speed limitation where necessary
• Above rated wind speed: P = P_rated (limited by the power electronics converter) and pitching
• Advantages of variable-speed operation: lower cut-in wind speeds; higher efficiency, especially at low wind speeds; lower power variations (compared to fixed-speed turbines)
• Disadvantage: more expensive

Generator Concepts^[4]

Fixed-Speed Induction Generator
• Only fixed-speed operation possible (stall control required)
• Reactive power compensation required
• No reactive power control capability; additional devices required: TSCs (thyristor switched capacitors) or STATCOMs
• Risk of dynamic voltage collapse: typically, wind generators based on induction generators are asked to disconnect in case of voltage dips (GTZ Expert Workshop 2010: Grid and System Integration of Wind Energy, 22/23.11.2010, Berlin/Germany)

Induction Generator with Variable Rotor Resistance
• Simple concept for variable-speed operation
• Reactive power compensation required
• No reactive power control capability; additional devices required: TSCs (thyristor switched capacitors) or STATCOMs
• Limited LVRT capability; dynamic voltage collapse problems have to be mitigated by a fast increase of the rotor resistance during faults and additional reactive power compensation devices (typically TSCs)

Doubly-Fed Induction Generator

Generator with Fully Rated Converter

Generator with Fully Rated Converter and Direct Drive

Directly Coupled Synchronous Generator with Variable Gear Box

Further Information

1. ↑ http://en.wikipedia.org/wiki/Wind_turbine
2. ↑ J. F. Manwell, J. G. McGowan and A. L. Rogers. Wind Energy: Theory, Design and Application. United Kingdom: John Wiley & Sons Ltd, 2009
3. ↑ Design and Power Characterization of a Small Wind Turbine Model in Partial Load Region by Abdulkarim Abdulrazek
4.
↑ ^4.00 ^4.01 ^4.02 ^4.03 ^4.04 ^4.05 ^4.06 ^4.07 ^4.08 ^4.09 ^4.10 ^4.11 Weigel S., Poeller M. (2010) Wind Turbine Generators (WTGs) Physical Principals and Generator Concepts, Presentation prepared by DigSILENT GmbH for the Wind Energy and Development Dialogue 2010, retrieved 27.8.2011 [[1]]
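The power relations in the Basic Theory section above can be checked numerically. The sketch below is not part of the energypedia article: it uses the standard actuator-disc expression Cp(a) = 4a(1 − a)², which peaks at a = 1/3 with the Betz value 16/27, and the rotor radius and wind speed are illustrative assumptions.

```python
import math

# Betz's power coefficient as a function of the axial induction factor a:
# Cp(a) = 4a(1 - a)^2. It is maximized at a = 1/3, giving the Betz limit 16/27.
def cp(a: float) -> float:
    return 4 * a * (1 - a) ** 2

cp_max = cp(1 / 3)
assert abs(cp_max - 16 / 27) < 1e-12  # about 0.593, i.e. ~59% at best

# Turbine power output P = 1/2 * rho * A * v^3 * Cp for illustrative values.
rho = 1.225          # air density in kg/m^3 (sea level, 15 degC)
rotor_radius = 40.0  # rotor radius in m (assumed for illustration)
v = 10.0             # wind speed in m/s (assumed for illustration)
A = math.pi * rotor_radius ** 2          # swept area
P = 0.5 * rho * A * v ** 3 * cp_max      # power at the Betz limit, in W
print(f"Betz limit Cp = {cp_max:.4f}, P = {P / 1e6:.2f} MW")
```

Because power scales with v³, a modest change in wind speed moves the output strongly, which is why cut-in and rated wind speeds matter so much in the operating regimes described above.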
Drawing from their 2022 research article, titled ‘Hidden Figures, Hidden Messages: The Construction of Mathematical Identities with Children’s Picturebooks’ (published in the journal For the Learning of Mathematics), the authors – Drs. Olga O. Fellus, David E. Low, Lynette D. Guzmán, Alex Kasman, and Ralph T. Mason – share a short blog post for educators and parents. This post discusses four recurring ‘hidden messages’ we have found in children’s books about what mathematics is and what doing mathematics entails. Given the prevalence of picturebooks in children’s everyday school and home experiences, we decided to investigate how picturebooks about mathematics and mathematicians present identities for young readers to adopt or refuse. Many children begin to identify as “good at math,” “bad at math,” or “not a math person” at a young age. We wondered how children’s literature might contribute to those self-classifications. What images and stories about mathematics are children exposed to? How might the stories shape and even limit how children understand what mathematics is? Many of the picturebooks we read are well-intentioned in rejecting the stereotype that mathematics is primarily a domain for boys and people of European descent. We applaud the number of books that reject the trope of white male dominance in STEM fields! However, it is still possible for children to absorb other subtextual messages that present limited and limiting views of mathematics. In this blog post, we share insights from our examination of 24 picturebooks and discuss four patterns (or hidden messages) we identified within and across the texts. If you are interested in reading the full study, we encourage you to do so and let us know what you think!
Hidden Message 1: Mathematical ability is a gift

Books written to inspire readers belonging to identity groups that have been underrepresented in the narrative of mathematical ability may nevertheless mischaracterize what it means to be mathematically able. In ‘Hidden Figures: The True Story of Four Black Women and the Space Race,’ Katherine Johnson, Dorothy Vaughan, Mary Jackson, and Christine Darden do not struggle with mathematical concepts or make mistakes. These women are paragons, though not ones that readers can realistically emulate, for their brilliance is preternatural, their mathematical ability pure magic. In ‘Hidden Figures,’ each of the four protagonists is referenced as being “good at math. Really good.” In fact, that phrase is repeated nine times throughout the book, leaving it to the reader to understand what it means to be ‘really good’ at math. Other picturebooks use similarly opaque wording. Paul Erdös “was the best. He loved being at the top in math” (Heiligman & LeUyen, 2013, p. 15). Albert Einstein was “a genius” (Berne & Radunsky, 2013, opening 14). Raye Montague was “a smarty” (Mosca & Reiley, 2018, opening 13). Senefer had “intelligence and abilities” (Lumpkin & Nickens, 1992, p. 10). Eratosthenes was “a real whiz in math” (Lasky & Hawkes, 1994, p. 13). Garth, a character in ‘Math Man,’ “had a way with numbers” (opening 5). Harley, the central character of ‘The Great Math Tattle Battle’, was “the best math student in second grade” (opening 1). Across these examples, readers encounter the archetype of the mathematical doer who owes their ability to some innate, ineffable gift. The gist of this hidden message is that mathematical ability is an innate talent that one either possesses or doesn’t. Further implied is that those who possess ‘the gift’ are highly intelligent and may be geniuses. For them, mathematics requires little or no visible effort. Their work is intuitive and characterized by eureka moments.
When innovation is reduced to eureka moments, and math is ‘a piece of cake,’ it obscures the perseverance associated with making sense of mathematical concepts. Children might misidentify themselves as mathematically incapable if it doesn’t come easily to them.

Hidden Message 2: Mathematical ability is like having a magic eye or knowing a secret language

In another picturebook about Katherine Johnson - ‘A Computer Called Katherine’ - the protagonist sees numbers up among the stars. Similarly, in ‘Nothing Stopped Sophie,’ a young Sophie Germain sees numbers vibrantly appearing out of thin air, superimposed over scenes of the French Revolution (opening 4). Although this might be a reasonable way to represent the thought processes of someone doing math in a picturebook, it might leave readers with the mistaken impression that talented mathematicians are people who literally see formulae floating around them. For other picturebook personae, mathematics functions like a decoder ring for translating or decrypting the language of the universe. For Einstein, numbers “were a secret language for figuring things out” (Berne & Radunsky, 2013, opening 9). The idea that math is akin to language is not problematic on its own. The issue is with the exclusive nature of languages if one does not speak them and believes one cannot learn them. For a child struggling with math anxiety, it could be frustrating to see that mathematics is a language understood easily by its conversant speakers but is incomprehensible to outsiders. Although the language of math is openly taught in classrooms, it is a language many learners struggle to use fluently.
We can easily imagine children feeling inspired by Katherine Johnson, but also saying “I never see numbers floating in the air, so I guess I’m not a math person.”

Hidden Message 3: Mathematics is about doing calculations quickly

Twelfth-century mathematician Fibonacci tells us that when he was just a boy, his teacher “wrote out a math problem and gave us two minutes to solve it. I solved it in two seconds” (D’Agnese & O’Brien, 2010, opening 2). Early in ‘The Great Math Tattle Battle,’ readers are told that Harley Harrison “could figure out forty-five plus thirty-nine faster than you could spell ‘Mississippi’” (opening 1). Harley’s mathematical experiences render him as mathematically capable because he is fast and does not make mistakes. His spot at the top of the class is only jeopardized when he produces an incorrect answer and becomes vulnerable to Emma Jean encroaching on his turf. For both Harley and Fibonacci, the focus is on the result rather than the process of making sense of a problem at hand. This hidden message reflects the tendency to overemphasize manipulation of numbers over relational thinking about mathematical ideas, which precludes (slower) processes of taking ownership of mathematical ideas through reasoning. In ‘Counting on Katherine: How Katherine Johnson Saved Apollo 13’, readers encounter the message that mathematical ability translates to experiences of speed and correctness. When the Apollo 13 spacecraft was in peril, Katherine “did flight-path calculations, quickly and flawlessly, to get the astronauts home safely” (opening 13). ‘The Girl with a Mind for Math: The Story of Raye Montague’ contains similar messages about mathematical speed and heroism: “Would it take her a month? Maybe weeks for success? Well, it took CALCULATIONS (and tons of caffeine), but Raye finished in HOURS … just over EIGHTEEN!” (opening 22). “‘YOU DID IT!’ they cheered, and her boss had to say that her quick mind for math had in fact saved the day” (opening 24).
Perhaps the goal of reading these and other picturebooks with children is to celebrate and normalize the contributions of Black women mathematicians. That is an admirable goal in and of itself! However, if the objective is to inspire children to view themselves as mathematically capable, it is counterproductive to insinuate that mathematical ability is based solely on innate speed and flawlessness. While speed and accuracy are important skills that need to be developed, having automatization take center stage undermines the development of positive mathematical identities among young learners.

Hidden Message 4: Mathematical ability is associated with social awkwardness

Few children yearn to be ostracized by their peers. However, one common message children receive about mathematics is that when you decide to devote time and energy to its pursuit, you may become a pariah, or at least socially awkward. This message is reinforced by numerous picturebooks. In ‘The Great Math Tattle Battle’, Harley Harrison and Emma Jean are the only two children shown to be interested in math. They are represented as being extremely annoying to their classmates. Eight centuries earlier, Fibonacci also annoyed his classmates, and later the townspeople of Pisa (D’Agnese & O’Brien, 2010). In some picturebooks, characters find themselves with a choice to make: pursue mathematics or have friends. As a child, Albert Einstein “didn’t want to be like the other students. He wanted to discover the hidden mysteries of the world” (Berne & Radunsky, 2013, opening 6). Why is Einstein’s choice presented as mutually exclusive? Did studying mathematics preclude him from being like other kids? In the case of other picturebook personae, such as Ada Byron Lovelace, “numbers were her friends” (Wallmark & Chu, 2015, openings 7 and 16) and for Paul Erdös, “numbers were his best friends” (Heiligman & LeUyen, 2013, p. 11). Later, he made human friends when he met others who loved mathematics (p. 14).
We are concerned these picturebooks could leave young readers thinking, “I don’t want to be good at math because I don’t want to annoy people or be lonely.” In our article, we offer an analysis of overt and hidden messages in picturebooks, and consider how these messages may contribute to the formation of young people’s identities as learners and doers of mathematics. Inviting children to examine messages about what it means to do math, and what constitutes being ‘good’ at math, are steps toward welcoming a greater swath of learner identities into the boundless world of mathematical inquiry.

About the authors

Dr. Olga Fellus is an Assistant Professor of Mathematics Education at Brock University (Canada). Her work focuses on the interface between teaching and learning mathematics, identity making, and educational change. Olga can be reached at [email protected]

Dr. David E. Low is an Associate Professor of Literacy Education at Fresno State University (USA). His research explores how children and youth critically theorize race, gender, power, and identity through multimodal texts such as comics and picturebooks. David can be reached at [email protected]

Dr. Lynette D. Guzmán is a former mathematics teacher educator whose scholarship focused on broadening the ways students explore mathematical ideas in classrooms. She is currently a content creator for her own brand and creates videos that bring forward philosophical perspectives on various media (games, film, books). She can be reached at [email protected]

Dr. Alex Kasman received his PhD in mathematics from Boston University in 1995 and subsequently held postdoctoral positions in Athens, Montreal, and Berkeley. Since 1999, he has been a professor at the College of Charleston. He has published over 30 research papers in mathematics, physics, and biology journals. He also maintains a website that lists, reviews, and categorizes all works of “mathematical fiction”.
The American Mathematical Society published his textbook on soliton theory in 2010, and the Mathematical Association of America published a book of his short stories in 2005. Alex can be contacted at [email protected]

Dr. Ralph T. Mason’s work focuses on mathematics education, curriculum theory, and pedagogy. He can be reached at [email protected]

The goal of this blog post is to introduce you to a new series of mathematical stories in the Italian language by Germano Pettarin, who teaches mathematics at a public high school and at the University of Venice and Padua in Italy. While many of us might not be fortunate enough to be able to read and understand Italian, it can still be interesting to learn about the world of mathematical stories outside the English-speaking world. Can you imagine a city, an island, a world where numbers live together and mathematical operations become stories? This is the basic idea of a new series of mathematical stories in Italian by Germano Pettarin, an Italian professor of mathematics and computer science, and his co-author Jacopo Olivieri. The books are published by the Einaudi Ragazzi publishing house. The idea of books that propose mathematics as a story, using numbers as characters who talk, interact with each other, get angry, and so on, is a novelty in the field of books with educational purposes for children here in Italy. Often the numbers are just elements of the stories, not the protagonists. In these stories, numbers, operations, and geometric figures come to life, and the rules of mathematics and geometry become the pretext for inventing places (e.g., the Pacifric Ocean, the Cifradi Archipelago), characters (e.g., the general Abacone or the unpredictable Pi Greek), or paradoxical situations (e.g., a world without zeros and without circles).
In a word, they tell stories where numbers and polygons all play together and argue, always following the rules of mathematics, in a fun and playful way. Each has its own characteristics: One wants to be the leader, Three the perfect number, Zero unable to follow the rules: it does not divide, does not increase the result of sums, etc. Indeed, in his most recent story the numbers get sick: a pandemic among numbers, solved by a prodigious mathematical vaccine. They are all stories that try to talk about maths to children and teenagers in a fun and enjoyable way.

Inspirations for writing mathematical stories

The idea of writing illustrated books on the intricate world of mathematics stems from a proposal by the publisher who, by the way, has always had a distaste for mathematics. We would like to overcome the common distrust of mathematics with books for children who have not yet taken the knocks that lead to a hatred of mathematics. The publisher asked for books that explained the more particular concepts of mathematics, to attract interest in this subject in a colloquial and fun way. Children's books are good because you can express concepts, even difficult ones, in a simple way: the examples and images with which they are presented are effective not only for the little ones but also for adults, and nothing is treated as rote notions. Mathematics is a difficult subject for many (there is no point denying it), with its round and square brackets, square roots and commas in the most unlikely places, and elevations to surprising powers. It's hard to believe you can have fun talking about it and even studying it, especially if it is presented in a rigid and repetitive way. In these novels it is approached from another point of view: mathematics as fun and discovery.

About the series

La matematica fa schifo! (Math sucks!)
- 2017. In the imaginary world of Cifralia, to fix the disorder created by the government of King Chaos, a leader is finally proclaimed: Generalissimo Abacone, a rigid, precise fanatic of the rules. “I'm the right one in the right place,” he thought. “Who better than me to govern the world of mathematics, governed by strict, formal rules, where there is no room for fantasy and imagination but only repetitive and boring calculations? Here it takes rigor, rigor and rigor!” But things will not turn out like this. Far from docilely following the rules, the numbers do not behave as they do in math books; they do not allow themselves to be subjected to mechanical and monotonous exercises. The numbers all live together, they talk to each other, they face everyday problems, each with its own character, sometimes creating crazy messes. And Abacone, despite himself, will discover that even in the world of numbers there are surprising results: sometimes paradoxical, apparently impossible, and therefore fascinating. Mathematics is not just rigor.

L’isola delle tabelline (The island of multiplication tables) - 2018. Why are we here? What is our purpose? This is the question that torments the 99 numbers that inhabit the island of Tabellandia. There are four numbers Twenty-four, two numbers Twenty-seven, only one Forty-nine. There is no Forty-seven, much less a Thirteen. It seems that they were placed there without any criterion, almost as if they were scrap numbers. On the other islands, by contrast, everything is clear: on the island of even numbers there are even numbers, on the island of squares the numbers that are squares, on the island of Plus Hundred the large numbers, and so on. But perhaps the arrival of Cento will clarify the enigma. With him they would become a hundred numbers. One hundred, like the Greek name of the capital of Tabellandia, Hecaton. But why a hundred? And who is the mysterious character who arrived by plane with Cento? What is the huge table on top of the mountain for?
It's the mystery of the multiplication tables!

Le cose non quadrano… ci vogliono i cerchi! (Things don't add up… circles are needed!) - 2019. Are all numbers equally important? And the polygons? These seem like obvious questions: of course they are. You can't do without a number or a polygon. There is no most beautiful number or polygon: each has its own characteristics, which make it special. But in the world of Matematopia, where numbers and polygons live in strictly separate neighborhoods, it seems that this is not so obvious. There are numbers and polygons that feel better than others. And there are numbers and polygons that are barely tolerated, considered strange, ambiguous, different. Better not to trust them, better to keep them on the sidelines: who knows what they really think… It would be better if they left. And they really do go! And then, yes, the problems come!

La rivincita delle 4 operazioni (The revenge of the 4 operations) - 2020. There is always a lot to do in Typotopia, the land of characters. New textbooks are continually arriving with their text ready to be written in full, from the first capital letter to the last full stop. The characters jump frantically along the lines, page after page, composing the necessary words. In the end, you have the complete book. Most characters are ultra busy. Almost all of them: because there is Crocetta, a cross-shaped character, which is never used. In a text there are vowels, consonants, punctuation marks. Who ever writes a cross? And his friends Colon, Hyphen, and x also work very little compared to the other characters. Despised by the letters of the alphabet, the four friends find themselves wandering the world without a purpose, until they discover that they are actually the most important signs of mathematics!

Il dottore dei numeri (The doctor of numbers) - 2021. In Mondonumero, life flows happily in the village of Borgo Intero Piú.
And it could not be otherwise: its inhabitants, the Positive Numbers, are always cheerful and optimistic types. At least until the day when a frightening contagion hits the village: an infestation of minus signs begins to transform everyone into Negative Numbers, sad and depressed. What happened? Is it possible that the mysterious evil has something to do with the visit of an envious irrational number, the expert in magical arts Pi Greco? In a fight against time to save the village from rampant unhappiness, Doctor Uno tries to find a cure. But not even the faithful Zero and Piccolo Due (a small number who has never grown up) seem able to help him ... And if the solution were precisely the unknown powers of his two friends? And more mathematical stories are coming! Final words … I hope that reading my mathematical stories will help overcome the fear of mathematics. It is not clear why, but at a certain point during their school career, more or less brilliant, boys and girls stumble into a huge and insurmountable fear. A fear that adults usually struggle to understand. Or for which they find themselves feeling enormous empathy since they have experienced it themselves in times gone by. The famous fear of mathematics. Very famous. And the teachers are not to be held responsible for so much terror. Unfortunately, mathematics is often the protagonist of terrible urban legends. Unfortunately, these are always accompanied by the feeling of “not succeeding”, a frustration that tends to increase. But, math shouldn't be scary, on the contrary ... it can also be very fun and stimulating. About the author Germano Pettarin was born in Italy, where he teaches mathematics in the public high school in Pordenone (northeast Italy), and at the University of Venice and Padua. He is also a consultant and computer teacher at companies, public and private schools, training institutions and universities. He holds two degrees in specialized computer science and information science. 
He is the author of numerous manuals and popular publications in the field of mathematics and computer science and books on mathematical and computer games. His LinkedIn profile can be found here, and can be contacted via [email protected]. This blog post is about oral mathematical storytelling and is written by Caroline McGrath, a specialist mathematics teacher in the UK. The blog post is based in part on her doctoral research; on her book, ‘Teaching Mathematics through Story: A creative approach for the early years’, and her chapter, ‘Mathematical Storyteller Kings and Queens: An alternative pedagogical choice to facilitate mathematical thinking and understand children’s mathematical capabilities’, published by Routledge. The medium of oral story allows children flexibility to think playfully about mathematical ideas. In this blog post, I discuss a range of pedagogical benefits of thinking mathematically through storytelling. This piece has three aims: to characterise some of the qualities of oral mathematical storytelling; to consider the satisfaction and surprise the approach can bring; and, to promote this creative pedagogical tool. Story and oral story Story as an oral tradition is a powerful medium for thinking and one which is often neglected as part of young children’s learning experiences (Allison, 1987; Booker, 2004; Bryant, 1947; Egan, 1988; Walker, 1975). Children respond favourably to mathematical ideas contextualised in a meaningful way in story contexts (Schiro, 2004; Van den Heuvel-Panhuizen & Van den Boogaard, 2008). Where story and mathematics connect, there is scope to think mathematically through the story context. Oral story is the art of expressing verbally, real life or fantasy, without a written text. The concept of storying is described by Wells (1987, p.194) as “constructing stories in the mind”, which he positions as one of ‘the most fundamental ways of making meaning’. 
Oral mathematical story is telling a story with the intention of constructing mathematical meaning. Reading a book is interpreting a text in a shared way, whereas telling a story is a personal performance (McGrath, 2015). With oral storytelling, the storyteller is free from text; needs to be spontaneous; has a closer connection with their audience; and has a personal consciousness (Schiro, 2004). To develop skill at oral mathematical storytelling, a three-step model is helpful. The proposed model draws on Corbett’s Talk for Writing (2008) where educators commit to the creative opportunity oral storytelling brings as they develop a personal pedagogic tool (Corbett, 2006; 2007; Palmer & Corbett, 2003). The three steps: imitation, innovation and invention which equip children with prerequisite story-writing skills, can be remodelled for oral mathematical storytelling. Oral story authors can start by retelling a published story without the picture book (imitation); then change something about the original story (innovation); and, create original oral mathematical stories (invention). The storyteller orchestrates words, gestures and simple story related materials as mediational tools (Carlsen, 2013). What if? This question prompts wondering, promotes possibility thinking, and, taps into the remarkable capacity of children to think playfully. What happens to the mathematical idea if we change the story? Or what happens to the story if we change the mathematical idea? For example, in ‘Goldilocks and The Three Bears’, what if there were four bears instead of three? What if there are two similarly aged small bears (twins)? What if Goldilocks is out with a friend from her village, how will they share the porridge? In mathematical oral storytelling, the ‘What if?’ question is key to thinking playfully about mathematics. 
Constructing a story with children

‘Penguin’ is a story constructed between a teacher and a small group of Reception class children using cut-out coloured fish (see Figure 1). First, a summary of the story: Once upon a time there was a little penguin. His mum said to him ‘Go to the magical pond and catch ten fish for our tea.’ He walked a bit, and he walked a bit, and he walked a bit, and he walked a bit, until he got to the magical pond that glistens and shines. ‘Today we have orange and lemon flavoured fish’, the pond says. Penguin fished, and fished and fished until he caught ten delicious fish for tea. But on bringing the catch home, the family eats the fish and is still hungry, and so Penguin has to return to the pond with the lemon and orange flavours and find different ways to catch ten fish …

Figure 1: A Reception class teacher keeps number compositions for ten in view so that children think of other possibilities.

Different storytellings of ‘Penguin’ uncover ideas about: eleven possible number compositions for the number 10; conservation of number with different arrangements of fish on the carpet; a different sized fish feeding the family for longer; the division of a large fish; and tessellation as cut-up pieces are reunited. The commutative property of addition is made visible by using different coloured fish; for example, 10 is represented as 4 strawberry and 6 blueberry, and 6 strawberry and 4 blueberry flavours. An outcome of the action of not placing all the same coloured fish together is thinking about pattern. For example, Adam arranges 7 lemon and 3 orange fish, carefully setting out (in this order) 5 lemon, 2 orange, 2 lemon and 1 orange fish. Equivalence between his 7 lemon flavours, composed of five and two; his 3 orange flavours, composed of two and one; and a previous arrangement of 7 lemon and 3 orange fish, is realised.
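The arithmetic behind ‘Penguin’ is easy to verify. A minimal sketch, not part of the original blog post, enumerating the ordered two-part compositions of 10 that give the story its eleven possibilities:

```python
# Ordered two-part compositions of 10: the eleven ways Penguin can catch
# ten fish as a mix of lemon- and orange-flavoured ones (0+10, 1+9, ..., 10+0).
target = 10
compositions = [(lemon, target - lemon) for lemon in range(target + 1)]
assert len(compositions) == target + 1  # eleven compositions

# Commutativity of addition, made visible with differently coloured fish:
# 4 strawberry + 6 blueberry equals 6 strawberry + 4 blueberry.
assert 4 + 6 == 6 + 4 == target

for lemon, orange in compositions:
    print(f"{lemon} lemon + {orange} orange = {target}")
```

The same one-line change (for example, setting target to 11) mirrors the child's extension of the story from compositions of 10 to compositions of 11.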
Mathematical storytelling is characterised by conversations rich with higher order questioning which consequently allow children’s negotiation and generation of mathematical understanding. A teacher describes this pedagogical approach as: “a living interactive relational experience which is creative, exciting and unknown” (McGrath, 2019, p. 305). Naik (2013) refers to a “space between the known and the unknown where true creativity can thrive”; participants in the oral story project take a creative risk with their mathematical storytelling. Story as a medium provides meaningful, memorable, metaphorical contexts which help children think about and articulate mathematical ideas. After hearing ‘Penguin’, a child, for example, retells the story using the coloured fish with remarkable precision. She creatively adapts the story to fit with different characters, extends the story to try the number composition for 11 rather than 10, offering an imaginative twist at the end (see Figure 2). The transcript of this child telling 'Penguin' can be accessed here, and the accompanying audio file can be accessed under the Support Material section here. Figure 2: A child retells ‘Penguin’ arranging the cut out coloured fish in a way which requires further thinking. Satisfaction and surprise The imitative storytelling activity of children provides surprising insight about their mathematical capabilities. A child considered of lower ability explains a story contextualised idea of counting in multiples and in doing so challenges assumptions about her ability: “When you counted in twos you missed one out, so it’s like a pattern” (McGrath, 2014, p.67). During the oral story project, quiet children surprise their teachers with their mathematical storytelling confidence. (Children as kings and queens of oral mathematical storytelling is documented in McGrath (2015)). 
Oral mathematical story challenges educator perception of themselves and enhances their professionalism: “I have grown in confidence, from a teacher who stuck to traditional tales, to being able to create my own stories based on mathematical concepts, confidently telling them with just a simple story map as a prompt” (McGrath, 2014, p. 140). The impact of children sharing storytelling with their parents at home is noted by a teacher in McGrath (2019, p. 293): “[…] to be able to go home and be a storyteller…I just think that when children take their learning home…it gives you such a positive feeling as a teacher”. The possibilities Oral mathematical story represents a hybrid of pedagogical approaches in that it is a traditional idea combined with modern practice. This alternative pedagogical approach opens up a new discourse which legitimatises a different way of teaching mathematics. As an integrative approach to implementing literacy and mathematics curricula, it is characterised by professional surprise and satisfaction. However, opportunity for oral mathematical story rely on a culture of creative choice. Allison, C. (1987). I’ll tell you a story, I’ll sing you a song: A parents’ guide to the fairy tales, fables, songs, and rhymes of childhood. Dell Publishing. Booker, C. (2004). The seven basic plots: Why we tell stories. Continuum. Bryant, S. C. (1947). How to tell stories to children and some stories to tell. George G. Harrap and co. Ltd. Carlsen, M. (2013). Engaging with mathematics in the kindergarten. Orchestrating a fairy tale through questioning and use of tools. European Early Childhood Education Research Journal, 21(4), Corbett, P. (2006). The bumper book of storytelling into writing Key Stage 1. Clown Publishing. Corbett, P. (2007). Developing creative writing skills Available at: http://www.learning-works.org.uk/index.php?id=566 (Accessed: 30 October 2020). Egan, K. (1988). 
Teaching as storytelling: An alternative approach to teaching and the curriculum. Routledge. McGrath, C. (2014). Teaching mathematics through story: A creative approach for the early years. Routledge. McGrath, C. (2015). Mathematical storyteller kings and queens: An alternative pedagogical choice to facilitate mathematical thinking and understand children’s mathematical capabilities. In S. Chinn (Ed.), The international handbook for mathematical difficulties and dyscalculia (pp. 369-382). Routledge. McGrath, C. (2019). Oral story: A pedagogical tool encouraging children’s mathematical thinking. PhD thesis. University of Plymouth [Online]. Available at: https://pearl.plymouth.ac.uk/bitstream/handle/10026.1/13717/2018McGrath10163476phd_full.pdf?sequence=1&isAllowed=y (Accessed: 30 October 2020). Naik, M. (2013). Mathematics. In R. Jones & D. Wyse (Eds.), Creativity in the primary curriculum (2nd ed., pp. 33-49). Routledge. Palmer, S., & Corbett, P. (2003). Literacy: What works? The golden rules of primary literacy and how you can use them in your classroom. Nelson Thornes. Schiro, M. (2004). Oral storytelling and teaching mathematics: Pedagogical and multicultural perspectives. SAGE. Talk for Writing (2008). Presented by P. Corbett [DVD]. DCSF Publications. van den Heuvel-Panhuizen, M., & van den Boogaard, S. (2008). Picture books as an impetus for kindergartners’ mathematical thinking. Mathematical Thinking and Learning, 10(4), 341-373. Walker, B. (1975). We made a story. Garnet Miller Ltd. Wells, G. (1987). The meaning makers: Children learning language and using language to learn. Hodder and Stoughton. ‘Teaching Mathematics Through Story: A creative approach for the early years’ draws on practical work with children, educators, parents, and professional storytellers. This book considers relationships between story and mathematics in picture books and provides guidance on how to construct oral mathematical storytelling experiences.
(The book has now been translated into Korean). Audio files of mathematical storytelling by children and educators can be found under the support material link here. Audio recordings include: Dinosaur Blue Eggs; Good Night Gorilla; Little Lumpty; One City Two Brothers; Penguin (child); Penguin (educator); The Elves and the Shoe Maker; The Enormous Turnip; The Greedy Triangle; and Two of Everything. They are free to download. The craft of oral mathematical storytelling is made accessible through this text. About the author Caroline McGrath is a specialist mathematics teacher. She studied for an Analytical Science degree at Dublin City University (Ireland). After qualifying with a Post Graduate Certificate in Education from Oxford Brookes University (UK), she worked as an Early Years Teacher and lectured in Early Childhood Studies. She achieved the Postgraduate Diploma in Professional Studies in Education (SpLD/dyslexia) from Kingston University, was awarded the Hornsby Certificate of Professional Practice, and holds Associate Membership of the British Dyslexia Association (AMBDA). She has a master’s degree and a doctorate from Plymouth University (UK). She writes from first-hand experience teaching and researching children’s mathematical development. When you think about math picture books, do you think about counting and shapes? I did. Do you think about diverse characters and rich, satisfying stories? I didn’t. But then Marlene Kliman, a math expert at STEM non-profit TERC, reached out to Charlesbridge, the independent publishing company where I work as an editor. Marlene proposed an alternative perspective on what math picture books can and should be. What if a math picture book could explore important math topics beyond counting and shapes? What if it could feature characters who reflect the diversity of our world? And what if it told a story that young children would want to read again and again?
Such a picture book could have the power to change a child’s life. Naturally Charlesbridge said yes. We partnered with TERC to develop Storytelling Math, a new series of board books and picture books that bring together math, diversity, and the power of story. What types of math? There are many wonderful counting and shapes books out there, but math is more than that. Young children need exposure to a rich array of math topics, such as patterns, categorizing, and spatial reasoning. As Marlene puts it, “Research has shown that facility with math topics like these is critically important for young children’s school success in all subjects.” Storytelling Math introduces important but often overlooked math topics to young children and helps them build a foundation for later understanding. In What Will Fit? by Caldecott and Newbery Honor winner Grace Lin, Olivia visits a farmers’ market and searches for something to fill her basket. As Olivia tries to fit an apple, then a zucchini (courgette), and finally a pumpkin in her basket, she builds her spatial sense. We use spatial sense every day, when we read a map, pack a car trunk, and put our shoes on the correct feet. Spatial sense is also crucial in every subject, including math, science, and reading. Each Storytelling Math book includes hands-on activities for kids and grown-ups to explore math together. After reading What Will Fit?, parents can ask children to help pair socks or mittens and talk through their reasoning: “How can you tell these two are a pair?” Conversations like these can help children become stronger, more confident mathematical thinkers. Why diversity? It’s vitally important for books to reflect the diversity of our world, and for kids to be able to see themselves in books. Diverse books can empower young readers, validate their experiences, and make them feel seen. The same is true for children’s math books, and the need there is dire. 
When Marlene shared data on how few math picture books feature children of color, I was shocked. It was obvious that math literature needed the same sort of transformation that’s happening in the wider world of children’s books: more #ownvoices creators and more BIPOC characters. Children of color need to see themselves as good at math. As Marlene says, “Despite decades of calls for changes, research shows that pernicious ‘deficit’ discourse appears in math education as early as preschool. Many adults, despite the best of intentions, hold unconscious bias.” Storytelling Math seeks to help undo that bias in two ways: “young children of color will build positive math identity as they literally see themselves as mathematical thinkers, and all audiences will see children of color as fully realized mathematical thinkers.” In short, representation in early math books is an issue of equity: building math confidence and aptitude is critical for every child’s success in school and life. In Lia & Luís: Who Has More? by Ana Crespo, twins Lia and Luís love Brazilian snacks, but they argue over who has more. The problem is a universal one that readers will recognize no matter their cultural background. If readers are of Brazilian descent, though, they may very well see themselves in the story: “That’s just like my family!” or “I like biscoito de polvilho, too!” And when Lia comes up with a clever solution to the problem, they’ll see that they, too, can be mathematically empowered. Why storytelling? We expect a good picture book to be emotionally resonant and compelling. We expect the story to make us care. Why not expect the same of math picture books? Beyond the math, beyond the diversity, we want kids to love these stories. Maybe they’ll root for the main character and wait eagerly for each page turn. Maybe they’ll laugh themselves silly. Ideally, they’ll shout, “Again! Again!” In The Animals Would Not Sleep!
by Sara Levine, it’s bedtime for Marco and his stuffed animals, but the animals will have none of it. When Marco tries to put them away, they fly, swim, and slither right out of their bins. Marco tries sorting the animals in different ways, but nothing works and the animals start getting cranky. How can Marco make everyone happy? The math is excellent: an expertly developed introduction to sorting and classifying. Young readers will hopefully absorb the math (perhaps without even realizing it) and build their understanding along with Marco. But first and foremost, this is a great story. I laugh every time the animals start their wild rumpus, and I cheer for Marco as he races against (bed)time. At the end, when Marco saves the day with both math and empathy, my heart just melts. Bringing it all together The first six Storytelling Math books are now available, and more are on their way. I hope we’re succeeding in our mission to develop better math storybooks for children of all backgrounds—and I hope we’re helping change the landscape of math literature for the better. When you think about math picture books, do you think of deep math, rich diversity, and top-notch storytelling? I do now! About the author Alyssa Mito Pusey is an executive editor at Charlesbridge Publishing. Together with TERC senior scientist Marlene Kliman, she edits the Storytelling Math series, which was developed under a grant from the Heising-Simons Foundation. Alyssa’s other titles include Hot Pot Night by Vincent Chen, Mario and the Hole in the Sky by Elizabeth Rusch, Samurai Rising by Pamela S. Turner, A Black Hole Is NOT a Hole by Carolyn Cinami DeCristofano, and the Baby Loves Science series by Ruth Spiro. Alyssa presents regularly about nonfiction, fiction, and Storytelling Math at events for authors, illustrators, and educators. 
Drawing from their research article titled ‘Integrating Mathematics and Children’s Literature for Preschoolers with Disabilities’, published in the Journal of Early Intervention, the authors, Dr. Katherine B. Green (University of West Georgia) and Drs. Peggy A. Gallagher and Lynn C. Hart (both Professor Emerita, Georgia State University), have put together this short and easy-to-read blog post for interested teachers and parents. We hope you will find the information stimulating. School-entry mathematical knowledge, specifically the knowledge and understanding of numbers, is the strongest predictor of later academic achievement (Claessens et al., 2009). Researchers have found that preschoolers are developmentally ready for mathematics (Balfanz et al., 2003) and that there is key foundational content that young children should master before they can understand more complex mathematical content (National Association for the Education of Young Children & National Council of Teachers of Mathematics [NCTM], 2002). One of these broad content areas is that of Number and Operations, the focus of this research. The NCTM (2006) noted that the domain of number and operations for preschoolers includes the development of number sense, understanding whole numbers, concepts of correspondence, counting, cardinality, and comparison. Math and Young Children with Disabilities While there has been a recent increase in research on mathematics and young children, there is a scarcity of research related to young children with disabilities and mathematics. This research is critical since, as noted, early mathematical knowledge is a predictor of later academic achievement (Claessens et al., 2009). In early childhood special education, researchers agree that most children learn best with a combination of explicit instruction and naturalistic learning, particularly children who are at risk or have disabilities (Wolery & Hemmeter, 2011).
One way to intervene using a more naturalistic approach is to integrate mathematics and children’s literature. Integrating Mathematics and Shared Storybook Readings In instruction that integrates mathematics and children’s literature, the literature becomes a context within which to think about the mathematics, and the mathematics can be taught and constructed naturally within that context (Van den Heuvel-Panhuizen & Van den Boogaard, 2008). Using children’s literature as a context for mathematics problems and situations provides opportunities for children to actively construct mathematical ideas and promotes critical thinking by providing a forum to ask questions, elicit discussion, and make personal connections (Anderson et al., 2004; Haury, 2001). Through active construction of mathematical knowledge, children develop new mathematical ideas, structures, and schemas (Elia, Van den Heuvel-Panhuizen, & Georgiou, 2010). Empirical research supports the premise that mathematics can be effectively integrated within children’s literature during shared storybook readings (Hojnoski et al., 2014; Skoumpourdi & Mpakopoulou, 2011; Van den Heuvel-Panhuizen & Iliada, 2011). Shared storybook reading is a common and well-documented research practice in preschool classrooms (Dynia & Justice, 2015). The majority of the research literature for children with disabilities focuses on children with mild to moderate language impairments (e.g., Colmar, 2014; Van Kleeck et al., 2006; Voelmle & Storkel, 2015). Shared storybook reading is effective in improving children’s expressive and receptive language skills, mean length of utterance (MLU), and literal and inferential language skills (Colmar, 2014; Voelmle & Storkel, 2015). It is also effective in improving preliteracy skills, such as alphabet knowledge, concepts of print, alliteration, identification of initial sounds, name writing, and rhyming skills (Justice et al., 2015; Pile et al., 2010).
Our Study The purpose of our study was to examine the effects of an intervention that integrated mathematics instruction within children’s literature on the early numeracy skills of preschoolers with disabilities. The specific research question was: Is there any difference between the math skills of preschoolers with disabilities who received a shared storybook reading intervention with related math activities and those of preschoolers with disabilities who received only a shared storybook reading? We studied 50 children, ages 3 to 5 years, designated as having a developmental delay. The children all participated in preschool special education classrooms in one school district in a southeastern state in the U.S. We randomly divided the 10 preschool classrooms into two groups. Children in the intervention group read one children’s storybook three times per week for two weeks, with related math activities introduced over the two-week period. Three storybooks were read in all, for a total of 18 sessions. For each lesson, this group spent approximately 5 to 10 minutes on the storybook reading and approximately 10 to 15 minutes on math instruction related to the storybook. Books used were The Snowy Day (Keats, 1962), Goldilocks and the Three Bears, and The Very Hungry Caterpillar (Carle, 1987). Children in the comparison group received the same small group storybook sessions, but there were no math questions or elaborations provided during the sessions. We assessed all children before and after the intervention on the Test of Early Mathematics Ability, Third Edition (TEMA-3; Ginsburg & Baroody, 2003) and the Individual Growth & Development Indicators Early Numeracy (IGDIS-EN; Hojnoski & Floyd, 2004).
The TEMA-3 is a norm-referenced instrument designed to measure the mathematical knowledge of children ages 3 to 8 years old, whereas the IGDIS-EN is a curriculum-based measure used to assess young children’s number sense, including quantity comparison, one-to-one correspondence counting, and oral counting. The Results The results showed that the integration of mathematics and children’s literature had positive and significant effects on the intervention group’s total mathematical ability scores on the TEMA-3, and on quantity comparison skills, oral counting, and one-to-one counting correspondence on the IGDIS-EN. The researchers believe that it is important to be intentional and purposeful in selecting the storybooks, and to consider which book(s) might best elicit particular math skills. It is interesting to consider the beginning number sense skills targeted in this intervention. For instance, the children in the intervention group were noted to experience significant gains in the Quantity Comparison task. The skill of recognizing which of two groups has “more” objects is one of the most fundamental and earliest developing numeracy skills (Chu et al., 2013), and, as discovered, a most natural concept to incorporate in math-focused storybook readings. For example, two of the targeted storybooks had several pictures throughout the book that naturally allowed for discussion of which page or object had more. The Snowy Day had some pages with more snowballs or snowflakes than other pages. The caterpillar in The Very Hungry Caterpillar ate more fruits on different days of the week. Quantity comparison was addressed in lesson activities and easily targeted during the storybook readings. Counting skills, too, were intentionally modeled and encouraged throughout the intervention. For example, in The Very Hungry Caterpillar, the researchers pointed to and counted the fruit after reading each page.
The children were then encouraged to count objects by touching each item as opportunities occurred during the instruction. Counting skills proved to be an easily integrated mathematics skill when reading all three target storybooks. Integrating mathematics instruction and children’s literature allowed the researchers to provide shared storybook readings using quality literature not only within a short time frame but also within a limited budget. The materials were specifically designed so that teachers may use the intervention in their own classrooms, with costs kept to a minimum. The manipulatives used were materials that most teachers already have in their classrooms (i.e., counting/sorting bears or snapping cubes), as were the books. Providing mathematics interventions for children with disabilities is of immense importance, as researchers have found that children with disabilities who lag behind peers in math skills may experience less growth and slower gains than peers without disabilities (Lambert et al., 2014). Within a 20-minute intervention per day, three times per week for six weeks, two content areas were targeted: storybook reading and mathematics, with books and materials readily found within the preschool classrooms. Integrating the mathematics instruction and children’s literature provides a shared storybook reading to the children using quality children’s literature while encouraging the construction of early numeracy concepts. Implications for teacher preparation programs include focusing on the strategy of teaching early math skills through children’s literature. Parents could also implement many of the ideas and activities in their homes by reading the storybooks and creating fun and engaging games and mathematics activities with their children. Anderson, A., Anderson, J., & Shapiro, J. (2004). Mathematical discourse in shared storybook reading. Journal for Research in Mathematics Education, 35, 5-33. Balfanz, R., Ginsburg, H.
P., & Greenes, C. (2003). The big math for little kids early childhood mathematics program. Teaching Children Mathematics, 9, 264-268. Carle, E. (1987). The very hungry caterpillar (Rev. ed.) Philomel Books. Chu, F. W., vanMarle, K., & Geary, D. C. (2013). Quantitative deficits of preschool children at risk for mathematical learning disability. Frontiers in Psychology, 4, 1-10. doi:10.3389/ Claessens, A., Duncan, G., & Engel, M. (2009). Kindergarten skills and fifth-grade achievement: Evidence from the ECLS-K. Economics of Education Review, 28, 415-427. Colmar, S. (2014). A parent-based book-reading intervention for disadvantaged children with language difficulties. Child Language Teaching & Therapy, 30, 79-90. Crehan, K. D. (2005). Review of the Test of Early Mathematics Ability–Third Edition. In R. A. Spies & B. S. Plake (Eds.), The sixteenth mental measurements yearbook. Buros Institute of Mental Measurements. Retrieved from http://www.buros.org Dynia, J. M., & Justice, L. M. (2015). Shared-reading volume in early childhood special education classrooms. Reading Psychology, 36, 232-269. Elia, I., Van den Heuvel-Panhuizen, M., & Georgiou, A. (2010). The role of picture books on children’s cognitive engagement with mathematics. European Early Childhood Education Research Journal, 18, Haury, D. (2001). Literature-based mathematics in elementary school. ERIC digest (ED No. 464807). ERIC Clearinghouse for Science, Mathematics, and Environmental Education. Hojnoski, R. L., Columba, H. L., & Polignano, J. (2014). Embedding mathematical dialogue in parent-child shared book reading: A preliminary investigation. Early Education and Development, 25, Hojnoski, R., & Floyd, R. (2004). Individual Growth and Development Indicators of Early Numeracy (IGDIS-EN). Early Learning Labs. Justice, L. M., Logan, J. R., & Damschroder, L. (2015). Designing caregiver-implemented shared-reading interventions to overcome implementation barriers. 
Journal of Speech, Language, and Hearing Research, 58, S1851-S1863. Keats, E. J. (1962). The snowy day. Viking Press. Lambert, R. G., Kim, D. H., & Burts, D. C. (2014). Using teacher ratings to track the growth and development of young children using the Teaching Strategies GOLD® assessment system. Journal of Psychoeducational Assessment, 32, 27-39. National Association for the Education of Young Children & National Council of Teachers of Mathematics. (2002). Early childhood mathematics: Promoting good beginnings. A joint position statement of NAEYC and NCTM. Retrieved from http://www.naeyc.org/files/naeyc/file/positions/psmath.pdf National Council of Teachers of Mathematics. (2006). Curriculum focal points for prekindergarten through grade 8 mathematics: A quest for coherence. NCTM. Pile, E. J., Girolametto, L., Johnson, C. J., Chen, X., & Cleave, P. L. (2010). Shared book reading intervention for children with language impairment: Using parents-as-aides in language intervention. Canadian Journal of Speech-Language Pathology & Audiology, 4, 96-109. Skoumpourdi, C., & Mpakopoulou, I. (2011). The prints: A picture book for pre-formal geometry. Journal of Early Childhood Education, 39, 197-206. Van den Heuvel-Panhuizen, M., & Iliada, E. (2011). Kindergartners’ performance in length measurement and the effect of picture book reading. ZDM Mathematics Education, 43, 621-635. Van den Heuvel-Panhuizen, M., & Van den Boogaard, S. (2008). Picture books as an impetus for kindergartners’ mathematical thinking. Mathematical Thinking and Learning, 10, 341-373. Van Kleeck, A., Woude, J. V., & Hammett, L. (2006). Fostering literal and inferential language skills in Head Start preschoolers with language impairment using scripted book-sharing discussions. American Journal of Speech-Language Pathology, 15, 85-95. Voelmle, K., & Storkel, H. L. (2015). Teaching new words to children with specific language impairment using interactive book reading.
SIG 1 Perspectives on Language Learning and Education, 22, Wolery, M., & Hemmeter, M. L. (2011). Classroom instruction: Background, assumptions, and challenges. Journal of Early Intervention, 33, 371-380. About the authors Dr. Katherine (Katy) Green is an associate professor and program coordinator of Special Education at the University of West Georgia. She graduated from Georgia State University with a Ph.D. in the Education of Students with Exceptionalities, with a focus on children with disabilities ages birth to five. With degrees in Speech-Language Pathology and Special Education, Katy taught young children with disabilities in public schools for eight years. Katy’s passion and expertise include social-emotional, early communication, and academic supports for young children with disabilities and their families. Dr. Peggy Gallagher, Professor Emerita, Early Childhood Special Education, Georgia State University, Atlanta, GA, has over 40 years of experience in Early Childhood Special Education, both as a classroom teacher and as a university faculty member. Her research interests are in families of children with disabilities, inclusion for young children with disabilities, and personnel preparation in special education. Dr. Gallagher is involved in special education at the international level as well. She has been an active member of the European Teacher Education Network, and is the past Director of International Programs for the College of Education at Georgia State University. She has recently presented her research in Turkey, China, Hong Kong, and India and completed a Fulbright Specialist project writing curriculum to train special education teachers in Sri Lanka. In Fall 2018, she completed a Fulbright assignment in Mongolia, training assistant teachers to include children with autism in their classrooms. Dr. Lynn Hart is Professor Emerita from Georgia State University.
Before her retirement in 2019, she served as a department chair in the College of Education and Human Development, where she also served as a professor of mathematics education for over 20 years. In addition, Dr. Hart was an adjunct professor for 10 years at Notre Dame University. Dr. Hart has three edited books and 12 book chapters in addition to numerous scholarly journal papers. Drawing from their research article titled ‘Preferences for tactile and narrative counting books across parents with different education levels’, published in Early Childhood Research Quarterly, Shannon Gaylord (Carolina Outreach), Connor O’Rear (Purdue University), Caroline Hornburg (Virginia Tech) and Nicole McNeil (University of Notre Dame) have put together this short, easy-to-read blog post for interested teachers and parents. We hope you will find this blog interesting and share it on social media. Counting books Counting books are a good tool for teaching children early mathematics concepts, such as one-to-one correspondence, cardinality, and addition. Parents have a lot of options when it comes to choosing which counting books to read with their children. The books available on the market vary in a number of ways, such as how the set sizes are represented in the text, the level of distraction on the page, the complexity of the math talk in the text, and much more (Powell & Nurnberger-Haag, 2015; Ward et al., 2017). How do parents choose among these? Determining the types of books parents prefer is important because parents are gatekeepers for the books that come into children’s homes, and a given counting book can only provide a child with the opportunity to engage in math talk and learn early mathematics concepts if it is read (and re-read). Our study To gain a better understanding of how parents choose counting books for their children, we administered a survey to roughly seven hundred parents of children between two-and-a-half and four years old (Gaylord et al., 2020).
Our initial predictions were that parents of boys may prefer counting books that include tactile features (i.e., books with a touch-and-feel component), whereas parents of girls may prefer books that include a narrative. We also suspected that parents’ education level may influence the books parents choose to read with their children, given previous studies suggesting a link between parents’ education level and the activities parents choose to participate in with their children (Stipek et al., 1992). To determine which features parents preferred, we asked a series of forced-choice and open-ended questions. For the two sets of forced-choice questions, parents saw two different counting books and were asked which one they would prefer to read with their child and why. Unbeknownst to parents, one of the pairs pitted a book with tactile counting features against a book without tactile counting features. The other pair pitted a counting book with a narrative against a counting book without a narrative. For the forced-choice questions, the books that were presented were randomly selected from a larger set of ten possible books for that category (e.g., a given parent saw one of ten possible tactile books pitted against one of ten possible non-tactile books). By randomizing which books parents saw across a larger set of books that shared the same feature, our results are not tied to any one book but are more likely to be representative of the feature more broadly. We also asked parents to rate, in general, how important four different factors were whenever they choose a counting book to read with their child: enjoyment, challenge, tactility, and presence of a narrative. Overall, we did not find evidence of any consistent associations between preferences for certain features and child gender, but we did find an association between parent education level and their choice of counting book.
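The stimulus randomization described above — drawing one book at random from each ten-book feature set for every parent — can be sketched as a few lines of code. This is only an illustration of the sampling logic, not the study's actual materials or software; the book titles below are placeholders.

```python
import random

# Placeholder titles standing in for the study's two ten-book sets.
TACTILE_BOOKS = [f"Tactile book {i}" for i in range(1, 11)]
NON_TACTILE_BOOKS = [f"Non-tactile book {i}" for i in range(1, 11)]

def draw_forced_choice_pair(rng=random):
    """Randomly draw one book from each feature set for one parent.

    Because each parent sees a fresh random draw, preferences measured
    across many parents reflect the feature (tactile vs. non-tactile)
    rather than any single book.
    """
    return rng.choice(TACTILE_BOOKS), rng.choice(NON_TACTILE_BOOKS)

tactile, non_tactile = draw_forced_choice_pair()
print(tactile, "vs", non_tactile)
```

The same draw would be repeated independently for the narrative vs. non-narrative pair.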
When asked to select a counting book either with tactile features or without tactile features, parents with lower education levels were more likely to both pick a book with tactile features and mention the tactile features of the book when justifying their selection. Parents with a graduate-level education rated tactility as less important when considering counting books than parents with a bachelor’s degree or less. When asked to select a counting book either with a narrative or without a narrative, parents with higher education levels were more likely to both pick a narrative book and mention the narrative when justifying their selection. Parents with at least a bachelor’s degree also rated the level of challenge of the counting book as a more important factor than parents with less than a bachelor’s degree. Overall, parents of all education levels reported that enjoyment was the most important factor when selecting which counting book to read. Across these questions, it is clear that parents are picking books they think their children will enjoy, but it is also apparent that parents from different education backgrounds take different approaches to deciding which counting books they want to read with their children. What else do parents consider? A follow-up examination of the open-ended responses further indicated the factors affecting parents’ book choices (O’Rear et al., 2019). In general, parents were more likely to pick a book based on the illustrations of the book (42.8% of parents) than on the mathematical content (24.1% of parents). Of the responses that mentioned mathematical aspects of the book, 77% mentioned whether or not it would be helpful for children’s counting, compared to 29% who mentioned the way the numbers were represented within the book (e.g., “provides various ways of visualizing the same number”). 
Take-home points Overall, these findings suggest that the math content of the book is not a driving factor in parents’ selection of counting books. Instead, parents value the perceived enjoyment of the book and the book’s illustrations. Moreover, parents from different backgrounds differ in what they view as important in selecting counting books. Research has shown that counting book reading can promote children’s conceptual understanding of counting (Gibson et al., 2020; Mix et al., 2012; O’Rear & McNeil, 2019; Petersen et al., 2014). An important next step is to determine how best to design counting books to elicit a focus on the important mathematics concepts within the book in a way that parents and children find appealing. Gaylord, S. M., O’Rear, C. D., Hornburg, C. B., & McNeil, N. M. (2020). Preferences for tactile and narrative counting books across parents with different education levels. Early Childhood Research Quarterly, 50, 29-39. https://doi.org/10.1016/j.ecresq.2018.07.010 Gibson, D. J., Gunderson, E. A., & Levine, S. C. (2020). Causal Effects of Parent Number Talk on Preschoolers’ Number Knowledge. Child Development. https://doi.org/10.1111/cdev.13423 Mix, K. S., Sandhofer, C. M., Moore, J. A., & Russell, C. (2012). Acquisition of the cardinal word principle: The role of input. Early Childhood Research Quarterly, 27(2), 274-283. https://doi.org/ O’Rear, C. D., & McNeil, N. M. (2019). Improved set-size labeling mediates the effect of a counting intervention on children’s understanding of cardinality. Developmental Science, 22, e12819. https:/ O’Rear, C. D., Gaylord, S. M., Hornburg, C. B., & McNeil, N. M. (2019, March). Features that Affect Parents’ Preferences for Different Counting Books [Poster presentation]. Biennial Meeting of the Society for Research in Child Development (SRCD), Baltimore, MD. Petersen, L., McNeil, N., Tollaksen, A., Boehm, A., Hall, C., Carrazza, C., & Devlin, B. (2014). 
Counting practice with pictures, but not objects, improves children’s understanding of cardinality. Proceedings of the 36th Annual Meeting of the Cognitive Science Society, 2633-2638. Quebec City, Canada.
Powell, S. R., & Nurnberger-Haag, J. (2015). Everybody counts, but usually just to 10! A systematic analysis of number representations in children’s books. Early Education and Development, 26, 377-398. https://doi.org/10.1080/10409289.2015.994466
Stipek, D., Milburn, S., Clements, D., & Daniels, D. H. (1992). Parents' beliefs about appropriate education for young children. Journal of Applied Developmental Psychology, 13, 293-310.
Ward, J. M., Mazzocco, M. M., Bock, A. M., & Prokes, N. A. (2017). Are content and structural features of counting books aligned with research on numeracy development? Early Childhood Research Quarterly, 39, 47-63. https://doi.org/10.1016/j.ecresq.2016.10.002

About the authors

Shannon M. Gaylord, LCSWA, LCASA, is a clinical social worker and addiction specialist working as an outpatient therapist at a community mental health agency, Carolina Outreach. She works with diverse and marginalized populations and specializes in working with clients with borderline personality disorder through dialectical behavior therapy (DBT). She completed this research as a student at the University of Notre Dame and continued developing this work after graduation. Connor D. O’Rear is a postdoctoral researcher in Human Development and Family Studies at Purdue University. His research looks at how children develop foundational math skills as well as how parents and teachers structure children’s early learning environment. The ultimate goal of this work is to identify the best ways to bridge research and practice to promote children’s early learning. Caroline Byrd Hornburg is an assistant professor of Human Development and Family Science at Virginia Tech, and director of the Learning and Development Lab.
Her research program focuses on children’s learning, primarily in the domain of mathematics, from PreK to 6th grade. She focuses on skills that are foundational for understanding of later math concepts, as data on early skills can provide insight into identification of children at risk for later difficulties. Her research also provides a framework for optimal design of interventions to improve children’s understanding. Nicole M. McNeil is a professor in the Department of Psychology and a fellow of the Institute of Educational Initiatives at the University of Notre Dame. She also directs the interdisciplinary minor in Education, Schooling, and Society and the Cognition Learning and Development Lab at Notre Dame. She studies learning and cognitive development, with a focus on mathematical cognition, symbolic understanding, quantitative reasoning, and problem solving. She is interested in theoretical issues related to the construction and organization of knowledge, as well as practical issues related to learning and instruction in formal and informal environments.

In this blog post, Professors Pat McGuire and Grant Clayton (University of Colorado – Colorado Springs, USA) provide a brief summary of a recently published ten-week pre-kindergarten micro-curriculum, Booked on Math. This micro-curriculum includes ten strategically selected books (read-alouds) and supplementary lesson plans designed to support the development of pre-kindergarteners’ conceptual understanding in mathematics. This blog post also includes a brief overview of the read-alouds leveraged and provides insight into the instructional design considerations for the supplementary lesson plans. The post concludes with recommendations and next steps for researchers, practitioners, and parents.
A version of this blog post is reported in the authors’ research article, titled ‘Booked on math: Developing math concepts in Pre-K Classrooms using interactive read-alouds’, which was published earlier this year in Early Childhood Education Journal.

Booked on math overview

In Mouse Count by Ellen Stoll Walsh, children follow the adventures of a group of mice and a snake in the meadow. The rhythm of the story and illustrations follows a glissando, giving the story an easy-to-follow musical quality. Children are immediately engaged, and the short story can easily be re-read as children count forwards and backwards along with the adventures of the mice. A natural extension is to act out the countdown as a snake attempts to find the mice. Parents and teachers understand firsthand the power of stories for children’s development of language and early literacy. Fewer make use of stories and read-aloud activities for the development of early numeracy and mathematical thinking, despite the emerging body of work that explores the effects of interactive read-alouds on mathematics teaching and learning (Casey, Erkut, Ceder, & Young, 2008; Clarke, 2002; Jennings, Jennings, Richey, & Dixon-Krauss, 1992; Van den Heuvel-Panhuizen & Elia, 2011; Van den Heuvel-Panhuizen, Elia, & Robitzsch, 2016; Young-Loveridge, 2004). In Mouse Count, preschool-aged children learn to count forwards and backwards to ten. Mouse Count is an example of a story picture book used in the ten-week micro-curriculum, Booked on Math, that our team developed to support early numeracy and mathematical thinking. In this curriculum, students and teachers completed one read-aloud and associated lesson per week (10 total). Each lesson is composed of a story to be read aloud, activities to reinforce the key concepts, and extension activities students and parents can do at home to reinforce learning.
Supplementary lesson plans

Supplementary lessons included in the Booked on Math micro-curriculum follow an inquiry-based “5E” lesson framework developed by Bybee et al. (2006), where students engage with a challenging situation, then explore what the concept means, explain the new knowledge they gained through exploration, elaborate by applying this knowledge in a novel situation, and finally evaluate by reflecting on their learning. We designed easy-to-follow lesson plans that support student learning through the 5E framework with a Get Ready, Engage, Investigate, Discuss, and Extend cycle. There is a Make it Work section at the end that supports students with varied skill development. Click here for the lesson plan and activities for Mouse Count. During the development of the micro-curriculum, we selected books from the National Association for the Education of Young Children's (NAEYC) recommended book list and other research-based early childhood curricula (e.g., MyTeachingPartner Mathematics and Science) that have demonstrated increases in student learning outcomes in pre-kindergarten mathematics and science classrooms through randomized controlled trials (Kinzie et al., 2014). Book readings were also chosen to reflect the four primary domains tested in the Teaching Strategies GOLD (TS GOLD) assessment. Mathematical domains include: (1) number concepts and operations; (2) spatial relationships and shapes; (3) comparison and measurement; and (4) patterns. These domains align with state standards for pre-kindergarten in most states (Neuman & Roskos, 2005). The full Booked on Math curriculum and TS GOLD domain coverage by book reading is listed below:

Our study

We piloted Booked on Math in a pre-kindergarten center in the United States with seventy children (approximately 48 months old on average) across three classes. Teachers found the lesson plans easy to follow and were able to integrate Booked on Math into regular classroom activities.
To test Booked on Math, some classes used the curriculum while the other classes continued with their teachers' usual lessons. To measure the impact of the program, the school tested the children using the TS GOLD assessment (The Center for Educational Measurement and Evaluation, 2011)—a typical preschool assessment of student learning in the United States—before and after Booked on Math. We found statistically significant differences in the areas of (1) quantifies, (2) shapes, and (3) spatial relationships on the TS GOLD assessment after Booked on Math. The three TS GOLD areas that yielded statistically significant results for Booked on Math are both conceptual and visual in nature. Our hypothesis is that the book readings, associated images, and activities may have helped young students develop a stronger conceptual understanding in these constructs (in particular the constructs of spatial relationships and shapes) when compared to the traditional teaching methods. This result is consistent with previous research suggesting that an overreliance on auditory teaching methods (e.g., lecture or listening alone) is problematic for most learners and presents significant barriers for visual-spatial learners.

Recommendations and next steps

We believe Booked on Math will help teachers—especially those who have math anxiety or who spend little time teaching early numeracy—easily integrate mathematical thinking and concepts into their pre-kindergarten classrooms. The recommended extension activities can support parental involvement at home and increase communication between parent and teacher about early numeracy. Given the fact that many parents read to their children at home, Booked on Math presents an opportunity to support students’ early numeracy development through the read-alouds they already do. There is increasing evidence that read-alouds help develop early mathematical thinking in preschool children.
Booked on Math helps to extend the value of these readings by engaging students, teachers, and parents in reinforcing these key concepts in a systematic, fun way. The 5E framework and lesson plan format can be adapted to other books as teachers identify new readings.

Bybee, R. W., Taylor, J. A., Gardner, A., Van Scotter, P., Powell, J. C., Westbrook, A., & Landes, N. (2006). The BSCS 5E instructional model: Origins and effectiveness. Colorado Springs, CO: BSCS, 5, 88-98.
Casey, B., Erkut, S., Ceder, I., & Young, J. M. (2008). Use of a storytelling context to improve girls' and boys' geometry skills in kindergarten. Journal of Applied Developmental Psychology, 29(1),
Jennings, C. M., Jennings, J. E., Richey, J., & Dixon-Krauss, L. (1992). Increasing interest and achievement in mathematics through children’s literature. Early Childhood Research Quarterly, 7,
Kinzie, M. B., Whittaker, J. V., Williford, A. P., DeCoster, J., McGuire, P., Lee, Y., & Kilday, C. R. (2014). MyTeachingPartner-Math/Science pre-kindergarten curricula and teacher supports: Associations with children's mathematics and science learning. Early Childhood Research Quarterly, 29(4), 586-599.
McGuire, P., Himot, B., Clayton, G., Yoo, M., & Logue, M. E. (2020). Booked on math: Developing math concepts in Pre-K Classrooms using interactive read-alouds. Early Childhood Education Journal.
Neuman, S., & Roskos, K. (2005). The state of state’s pre-kindergarten standards. Early Childhood Research Quarterly, 20(2), 125-145.
The Center for Educational Measurement and Evaluation (2011). Teaching Strategies GOLD Assessment System Technical Summary. The University of North Carolina at Charlotte.
Van den Heuvel-Panhuizen, M., & Elia, I. (2011). Kindergartners’ performance in length measurement and the effect of picture book reading. ZDM Mathematics Education, 43, 621-635.
Van den Heuvel-Panhuizen, M., & Van den Boogaard, S. (2008). Picture books as an impetus for kindergartners’ mathematical thinking.
Mathematical Thinking and Learning, 10, 341-373.
Young-Loveridge, J. (2004). Effects on early numeracy of a program using number books and games. Early Childhood Research Quarterly, 19, 82–89.

About the authors

Grant Clayton is an Assistant Professor in the Department of Teaching and Learning in the College of Education at the University of Colorado – Colorado Springs. His research interests include postsecondary readiness, school choice, the teacher labor market, and causal methods. Prior to joining the faculty, he was a secondary teacher and International Baccalaureate Coordinator. Pat McGuire is an Associate Professor in the Department of Teaching and Learning focusing on STEM education. Pat also serves as the College of Education Co-Director of UCCSTeach, an inquiry-based program designed to prepare the next generation of secondary mathematics and science teachers. His research interests lie at the intersection of curriculum, instructional technology and STEM education. Before joining the UCCS faculty in 2010, Pat worked as a high school mathematics teacher in Pittsburgh, PA and as a researcher at Carnegie Mellon University.

Drawing from their research article, titled ‘The Effect of Using Storytelling Strategy on Students’ Performance in Fractions’, which is published in the Journal of Education and Learning, Prof. Charalampos Lemonidis and Ioanna Kaiafa (University of Western Macedonia, Greece) have put together a short and easy-to-read summary below. We hope you will find it interesting to read and that it inspires you to use storytelling to enrich your mathematics teaching.

Using storytelling to teach mathematics

The teaching of mathematics through the use of stories has emerged in recent years as a modern and effective method of teaching. Teachers can use stories to introduce, explain and discuss mathematical concepts in a memorable way.
Integrating storytelling within mathematics teaching develops literacy skills and promotes mathematical language (Wilburne & Napoli, 2008). Storytelling helps teachers to create a dynamic and interactive learning environment that supports students to make sense of mathematical vocabulary (Bintz & Moore, 2002). The use of storytelling in teaching mathematics can spark students’ interest, reduce their anxiety, engage them in the educational process (Zazkis & Liljedahl, 2009) and provide an alternative explanation of a mathematical concept. Storytelling supports memory, provides learning motivation (Van den Heuvel-Panhuizen, Boogaard, & Doig, 2009) and improves analytical skills.

Our study

The main purpose of our study was to investigate the role that the use of storytelling can play in teaching fractions to third grade students. The study sample consisted of 76 third graders (8-9 years old), who attended two primary schools in Greece. This sample was divided into an experimental (n=38) and a control (n=38) group. The same teacher applied the teaching program to both groups. Students of both groups followed the same curriculum for the teaching of fractions (Lemonidis, 2017) and had the same textbook (student book and workbook), which was created by the researchers in accordance with the principles and objectives of the particular curriculum. When introducing a new mathematical concept to the experimental group, the teacher read a teaching story to the students while the pictures accompanying the text were displayed. Then, a brief discussion on the content of the story took place in the classroom, and the students worked on math activities related to the story and the learning objectives. The teacher read the story in an interactive way that fostered communication to accomplish the goal of meaningful student engagement (Courtade, Lingo, Karp, & Whitney, 2013).
Students were encouraged to strategically and purposefully interact with both the teacher and the content of the story. This reading experience requires students to be active participants rather than passive listeners (National Early Literacy Panel, 2008). Subsequently, students explored mathematical concepts through the processing of textbook activities (student book and workbook). When introducing a new mathematical concept of fractions to the control group, the teacher used manipulative materials. The students worked with objects, area models, fraction strips and number lines to explore fraction concepts. Subsequently, the students approached mathematical concepts through the processing of textbook activities (student book & workbook).

The story

The story, written by the researchers, is entitled "Journey to the Land of Fractions" and includes seven parts. Each part was a target-focused story, namely, it was in accordance with the curriculum objectives. The protagonist of the story is “Takis, the little fraction”, a fractional unit (1/8). Takis fails his school exams and his value decreases as teachers add units to his denominator. Takis feels very disappointed and decides to leave the City of Fractional Units and not return, unless he increases his value. Two whole numbers, 3 and 5, help him overcome obstacles and manage to increase his value. Thus, a fascinating adventure begins, through which students observe the properties of fractions, watching the story's plot unfold.

The results

The results showed that the use of storytelling had a positive effect on students’ achievement in fractions, as the experimental group performed significantly better than the control group in the post-test. This test was developed by the researchers and included 10 activities and problems that were in line with the teaching objectives of the intervention.
These activities referred to: a) the part-whole interpretation of fractions, b) placing fractions on a number line diagram, c) creating and manipulating fraction representations, and d) comparing fractions. The students who benefited most from the use of storytelling were the medium- and low-achieving students. The use of storytelling had a positive effect on mathematical skills, such as comparing fractions, finding equivalent fractions, creating and manipulating representations and problem solving. The use of storytelling in teaching mathematics allowed students to approach mathematical concepts in a learning environment with rich stimuli that contributed to conceptual understanding of mathematical concepts (Capraro & Capraro, 2006; Van den Heuvel-Panhuizen, Boogaard, & Doig, 2009). Representations that students created through the use of the stories enriched their mathematical understanding. The students had the opportunity to transform the ideas presented through the stories into their own personal representations. Transforming an idea includes processes such as rethinking, re-creating and reconstructing it in a new form (Whitin & Whitin, 2001). As the development of mathematical concepts took place along with the story's evolution, students had the opportunity to understand what a fraction is, how its value changes, and what they should consider when comparing two fractions with a common numerator or common denominator. Subsequently, students could transfer this knowledge to solving problems with similar content. The results from this study suggest that careful selection of a story and its targeted inclusion in teaching can support students’ understanding of mathematical concepts. Storytelling provides students with a meaningful context in which they can communicate and discuss the mathematical ideas inherent in the text. Students’ learning is more successful when material is presented in a way that is meaningful to them (Price, 2009).
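The comparison facts the story dramatizes can be checked directly with Python's exact-arithmetic fractions module; this is only an illustrative sketch, not part of the study materials:

```python
from fractions import Fraction

# Common denominator: the fraction with the larger numerator is larger.
assert Fraction(3, 8) > Fraction(2, 8)

# Common numerator: the fraction with the smaller denominator is larger,
# because the whole is split into fewer (and therefore bigger) parts.
assert Fraction(3, 5) > Fraction(3, 8)

# The story's premise: Takis (1/8) loses value each time a unit is
# added to his denominator.
assert Fraction(1, 8) > Fraction(1, 9)

print("all comparisons hold")
```

Using exact rationals rather than floating-point division keeps the comparisons true to the arithmetic the students were learning.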
The story context provides students an opportunity to develop meaningful knowledge of concepts and processes through investigation rather than memorization (Capraro & Capraro, 2006). Teachers and parents can use picture books as a way to introduce children to mathematical concepts in a meaningful and applicable way.

Bintz, W. P., & Moore, S. D. (2002). Using literature to support mathematical thinking in middle school. Middle School Journal, 34(2), 25–32.
Capraro, R. M., & Capraro, M. M. (2006). Are you really going to read us a story? Learning geometry through children’s mathematics literature. Reading Psychology, 27(1), 21–36.
Courtade, G. R., Lingo, A. S., Karp, K. S., & Whitney, T. (2013). Shared story reading: Teaching mathematics to students with moderate and severe disabilities. Teaching Exceptional Children, 45,
Lemonidis, Ch. (2017). In the trajectory of the rational. Kyriakidis Publications, Thessaloniki, Greece. [in Greek].
National Early Literacy Panel. (2008). Developing early literacy: Report of the national early literacy panel. Washington, DC: National Institute for Literacy.
Price, R. R. (2009). Using children’s literature to teach mathematics. Quantile.
Van den Heuvel-Panhuizen, M., Boogaard, S., & Doig, B. (2009). Picture books stimulate the learning of mathematics. Australian Journal of Early Childhood, 34(3), 30–39.
Wilburne, J. M., & Napoli, M. (2008). Connecting mathematics and literature: An analysis of pre-service education school teachers’ changing beliefs and knowledge. IUMPST: The Journal, 2, 1–10.
Whitin, P., & Whitin, D. (2001). Using literature to invite mathematical representations. In A. A. Cuoco (Ed.), The roles of representation in school mathematics (2001 yearbook of the National Council of Teachers of Mathematics, pp. 228–237). National Council of Teachers of Mathematics.
Zazkis, R., & Liljedahl, P. (2009). Teaching mathematics as storytelling. Sense Publishers.

About the authors

Prof.
Charalampos Lemonidis is a Professor of Mathematics Education at the University of Western Macedonia, Greece. His research interests include mental calculation and estimation, the use of technology in teaching/learning mathematics, mathematics disabilities, and lifelong learning of mathematics. Ioanna Kaiafa is a Post-Doctoral Researcher at the University of Western Macedonia, Greece. Her research interests include the use of storytelling in teaching mathematics, learning disabilities in mathematics and mathematics teaching to young learners.

Drawing from their research article, titled 'Shared book reading to promote math talk in parent–child dyads in low-income families', which is published in the Topics in Early Childhood Special Education journal, the authors – Dr. Nicole M. Hendrix (Emory University), Prof. Robin L. Hojnoski (Lehigh University) and Dr. Kristen Missall (University of Washington) – have put together this short and easy-to-read blog post for interested teachers and parents. We hope you will find their blog post interesting.

Differences in early mathematical skill and knowledge develop prior to formal schooling, with some children entering kindergarten with greater skills and knowledge than others. Children from lower-income backgrounds, dual language learners, and children with disabilities may be at increased risk for entering school with limited mathematical skills and knowledge (e.g., Hojnoski et al., 2017; National Mathematics Advisory Panel, 2008). Because early mathematical skills and knowledge are related to long-term achievement in both reading and mathematics (Duncan et al., 2007), it is important to provide rich mathematical experiences for young children in their natural environments.

Math in the home

The home mathematical environment includes activities, materials, and interactions that support development of mathematical skills and knowledge.
Different opportunities to learn in the home environment lead to differences in children’s skills and knowledge. The home learning environment varies widely across families (e.g., DeFlorio & Beliakoff, 2015; Levine et al., 2011), which has implications for school-based learning (Berkowitz et al., 2015; Kleemans et al., 2012). Variability may be due, in part, to gaps in understanding about the skills and knowledge young children should develop as well as the activities that can support mathematical development in the home. Given the busy lives of many families, it is useful to consider whether mathematical interactions can occur during more common routines, such as shared book reading. During shared book reading, adults can support children’s learning through rich language experiences that go beyond the words on the pages (Towson et al., 2017). Adults can engage children in talking about key vocabulary and concepts embedded within the book, guiding children’s conceptual understanding and language skills through interactive dialogue. Because interactions during shared book reading are related to the content of the book, books can be strategically selected to promote certain types of interactions and learning.

Our study

In our work, we were interested in whether the use of mathematically-oriented books during shared book reading would lead to more parent and child mathematical interactions than books without mathematical content. To address this, we provided three parents of preschoolers with four to five books each week for four to six weeks to read with their child. Parents received a mix of mathematically-oriented books and books without mathematical content. Mathematically-oriented books discussed topics such as geometric shapes (e.g., Mouse Shapes), measurement (Inch by Inch), counting (e.g., Each Orange Had 8 Slices), and connections between printed numbers and their quantities (e.g., Rooster’s Off to See the World).
Books without mathematical content contained no clearly identifiable mathematical concepts (e.g., Corduroy) and were selected as developmentally appropriate given their balance of text and illustrations. We asked parents to read with their child as they normally would and to audio record their reading sessions. We transcribed the audio recordings and coded parent and child interactions that occurred outside of the story line for mathematical content (e.g., numbers and operations, geometry, measurement, algebra, patterns, and data analysis); we refer to this as “math talk.” Using the coded interactions, we calculated the overall number of utterances, or verbal statements that occurred, and the percentage of utterances that were mathematical for parents and children separately. The data showed clear differences in the occurrence of math talk and the percentage of utterances of math talk between mathematically-oriented books and those without mathematical content. Both parents and children used more math talk while reading mathematically-oriented books, creating more opportunities for children to learn mathematical vocabulary, concepts, and skills. Our results suggested that the content of the book influences what parents and children talk about during shared book reading. Strategically selecting books, then, provides a simple way to increase math talk at home. While the results of the first phase of our work were encouraging, we also were interested in whether parents would benefit from additional support in the form of training in shared book reading strategies and a review of key early mathematical vocabulary, concepts, and skills that they could focus on during shared book reading. To address this, we asked the same parents to participate in brief training on how to use books to engage in mathematical interactions with their children. Following the training, we asked parents to continue with shared book reading using only mathematically-oriented books.
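The tallying step described above (counting each speaker's utterances and the share that were coded as mathematical) is simple to reproduce. The sketch below is a hypothetical illustration only; the function name, category labels, and data are invented, not taken from the study:

```python
def math_talk_summary(coded_utterances):
    """Summarize coded utterances: list of (speaker, is_math) tuples."""
    counts = {}
    for speaker, is_math in coded_utterances:
        total, math = counts.get(speaker, (0, 0))
        counts[speaker] = (total + 1, math + (1 if is_math else 0))
    # Report total utterances, math utterances, and percent math per speaker.
    return {
        speaker: {"utterances": total,
                  "math_utterances": math,
                  "percent_math": round(100 * math / total, 1)}
        for speaker, (total, math) in counts.items()
    }

# Example: a short coded reading session (invented data).
session = [("parent", True), ("parent", False), ("child", True),
           ("parent", True), ("child", False), ("parent", False)]
print(math_talk_summary(session))
```

Keeping the per-speaker totals alongside the percentages mirrors the paper's approach of reporting parents' and children's math talk separately.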
To support parent use of specific strategies, each book included a reader’s guide that contained a one- to two-sentence summary of the book, learning objectives and key concepts (i.e., mathematical knowledge targeted within the book), key mathematical vocabulary, and recommended questions to encourage math talk. As in the first phase of our work, we asked parents to audio record reading sessions, and we then transcribed and coded parent-child dialogue. Following the training, parents and children talked more consistently about math during shared book reading, and the occurrence of math talk was generally higher than before the training. Also, the ways in which families talked about mathematics during shared book reading were qualitatively different following the training. There was more discussion of mathematical concepts, and parents used more open-ended questions than prior to the training; accordingly, these strategies may have allowed for richer interactions related to mathematics. Despite these positive changes, over time, parent and child math talk decreased. Parents may have benefited from a reminder of key strategies, or perhaps, math talk was more difficult with certain books. Study implications Overall, our work suggests the promise of shared book reading to support mathematical development in young children. Strategically selecting books with a mathematical focus provides more opportunities for adults to introduce and explain key mathematical vocabulary and concepts and to support children’s skills. Parents can also benefit from brief trainings that address their own knowledge of early mathematics and strategies to engage children in shared book reading. 
Although our work focused on mathematical shared book reading between parents and children, similar approaches have been demonstrated to be a promising way for teachers to increase their attention to early mathematical vocabulary, concepts, and skill development (e.g., Hojnoski et al., 2016; Purpura et al., 2017). Shared book reading may offer parents and teachers a way of supporting math development that fits within existing daily routines. Berkowitz, T., Schaeffer, M. W., Maloney, E. A., Peterson, L., Gregor, C., Levine, S. C., & Beilock, S. L. (2015). Math at home adds up to achievement in school. Science, 350(6257), 196-198. DeFlorio, L., & Beliakoff, A. (2015). Socioeconomic status and preschoolers' mathematical knowledge: The contribution of home activities and parent beliefs. Early Education and Development, 26(3), Duncan, G. J., Dowsett, C. J., Claessens, A., Magnuson, K., Huston, A. C., Klebanov, P., Pagani, L., Feinstein, L., Engel, M., Brooks-Gunn, J., Sexton, H., Duckworth, K., & Japel, C. (2007). School readiness and later achievement. Developmental Psychology, 43(6), 1428-1464. Hojnoski, R. L., Caskie, G. I., & Miller, R. Y. (2017). Early numeracy trajectories: Baseline performance levels and growth rates in young children by disability status. Topics in Early Childhood Special Education, 37(4), 206-218. Hojnoski, R. L., Columba, H. L., & Polignano, J. (2014). Embedding mathematical dialogue in parent-child shared book reading: A preliminary investigation. Early Education and Development, 25(4), Hojnoski, R., Polignano, J., & Columba, H. L. (2016). Increasing teacher mathematical talk during shared book reading in the preschool classroom: A pilot study. Early Education and Development, 27 (5), 676-691. Kleemans, T., Peeters, M., Segers, E., & Verhoeven, L. (2012). Child and home predictors of early numeracy skills in kindergarten. Early Childhood Research Quarterly, 27(3), 471-477. Levine, S. C., Gunderson, E. A., & Huttenlocher, J. (2011). 
Number development in context: Variations in home and school input during the preschool years. In N. L. Stein, & S. W. Raudenbush (Eds.), Developmental cognitive science goes to school (pp. 189-202). Taylor and Francis: New York, NY. National Mathematics Advisory Panel. (2008). Foundations for success: The final report of the National Mathematics Advisory Panel. Washington, D.C.: U.S. Department of Education. Purpura, D. J., Napoli, A. R., Wehrspann, E. A., & Gold, Z. S. (2017). Causal connections between mathematical language and mathematical knowledge: A dialogic reading intervention. Journal of Research on Educational Effectiveness, 10(1), 116-137. Towson, J. A., Fettig, A., Fleury, V. P., & Abarca, D. L. (2017). Dialogic reading in early childhood settings: A summary of the evidence base. Topics in Early Childhood Special Education, 37(3), About the Authors Dr. Nicole Hendrix is an Assistant Professor in the Department of Pediatrics within the Emory University School of Medicine. In her clinical role, she conducts psychodiagnostic evaluations for children and adolescents and provides parent coaching for families with toddlers with social communication delays. Her research focuses on early academic and social communication interventions for at-risk populations, early identification of autism spectrum disorder, and systemic issues impacting healthcare disparities. Prof. Robin Hojnoski is a Professor of School Psychology and is currently the Associate Dean for Graduate Studies in the College of Education at Lehigh University. Her research interests center on supporting early learning and social development through effective assessment, instruction, and intervention practices across home and school settings. Dr. Kristen Missall is an Associate Professor in the University of Washington's College of Education. 
Her research centers on child growth and development from 3 to 8 years of age in the areas of early academic and social development, data-based decision making (MTSS), and school readiness/transition to school.

Drawing from their article titled 'Mathematics Learning Opportunities in Preschool: Where Does the Classroom Library Fit In?', which has recently been published in the Early Education and Development journal, the authors (Dr. Michele L. Stites, Dr. Susan Sonnenschein, Rebecca Dowling and Brittany Gay, all of the University of Maryland, Baltimore County) have put together this short and easy-to-read blog post for interested teachers and parents. We hope you will find their blog post interesting and feel free to share it on social media.

Math in preschool

It is critical that young children are provided with experiences that foster mathematics skills (Ginsburg et al., 2008). A child’s classroom provides an opportunity to engage in mathematical learning experiences and build foundational mathematics skills (National Council of Teachers of Mathematics, 2013; NAEYC/NCTM, 2010). Young children often learn through direct instruction, observation, and engaging in games and other mathematics-related tasks (LeFevre et al., 2009; Sonnenschein et al., 2016), but they learn best when instruction occurs in situations that they find engaging (NAEYC/NCTM, 2010; Pomerantz & Grolnick, 2017; Sonnenschein et al., 2016; Stites & Brown, 2019). While engaging child-centered opportunities for non-mathematics learning abound in preschool classrooms, mathematics exposure is often more limited (Ginsburg et al., 2008). Children spend an average of only 24 minutes a day with access to mathematics activities versus 77 minutes for literacy (Piasta et al., 2014).

Our study

Given this lack of exposure, and the importance of mathematical opportunities, we need to investigate often overlooked areas for mathematics learning.
Classroom libraries have long been seen as effective ways to promote literacy development, especially when teachers take an active interest in supporting children’s use of them (Neuman, 1999). This research, along with what we know about reading storybooks with mathematical concepts to improve learning (Hassinger-Das et al., 2015), indicates that the classroom library should not be left out when looking for ways to support young children’s mathematics learning. We recently surveyed 150 preschool teachers in the USA about the types of books in their classroom libraries and the availability of mathematics-related storybooks for the children they teach. While we focused our study on preschool classrooms, the information provided likely applies to elementary classrooms as well, given the frequent presence of libraries in those settings. The majority of the teachers who responded to our study indicated they have a classroom library (98%) that they encourage children to use throughout the day during times like free choice and transitions, with children on average spending 10-30 minutes a day exploring books in the library. While most teachers indicated having a well-used library, we found that over 81% of teachers’ classroom libraries contained significantly fewer math books than other types of books. Many teachers reported that mathematics storybooks and materials were kept for use in the math center, which is often not as freely available to students as the library. Despite not necessarily viewing the library as a place for math learning, and despite having relatively few math books there, many teachers did indicate their classrooms were full of hands-on, engaging materials for mathematics. Teachers understand how important mathematical opportunities are for young children (Stites et al., 2021) and work hard to ensure the children they teach are provided mathematical opportunities.
However, they may be missing a simple way to integrate mathematics into their everyday routine: the classroom library. By envisioning the classroom library as a means of fostering mathematics development, teachers can make the most of a frequently used, child-centered part of the day. Teachers can choose mathematics-themed storybooks and incorporate them into their classroom libraries. These books can then be used to talk with children about different mathematical topics and therefore “do” more mathematics. Although teachers’ read-alouds of mathematics books are limited (Pentimonti et al., 2011), when teachers do read mathematics storybooks, children’s mathematical engagement increases (Langford, 1994) and more mathematical conversations occur (Hojnoski et al., 2014).

Finding resources

Resources for using storybooks to teach mathematics are available through different professional organizations. The National Council of Teachers of Mathematics (NCTM; https://www.nctm.org) and Development and Research in Early Math Education (DREME; https://dreme.stanford.edu) provide lists of mathematically relevant texts and suggestions for incorporating them into mathematics instruction. Math storybooks come in two types: (1) explicit mathematics content (often called math storybooks), where the goal of the text is to teach a mathematical topic (e.g. counting); and (2) implicit mathematics content (simply storybooks), where the mathematical topics are secondary to the story (Uscianowski et al., 2018). Both types of books can encourage mathematical thinking. A book like Chicka Chicka 1, 2, 3 (Martin & Sampson, 2005), where the point of the book is counting, is an explicit mathematics book. Conversely, The Doorbell Rang (Hutchins, 1986) is an implicit book because the division of the cookies is secondary to the theme of baking and sharing cookies.
Using these resources, along with the strategies teachers already have in place to support mathematical learning, makes using the library a simple way to increase the amount of time spent “doing mathematics” because it allows time for both child- and teacher-directed exposure to concepts. The more frequently the classroom library is used for mathematics learning, the more commonplace it will become. Storybooks are an easy, low-cost way to foster a child’s excitement about and understanding of mathematics, and classroom libraries are already a part of most classrooms. Mathematics books do not need to be relegated to the math center. Let’s make the most of all the mathematical opportunities in a school day; there are more than we thought!

Ginsburg, H. P., Lee, J. S., & Boyd, J. S. (2008). Mathematics education for young children: What it is and how to promote it. Social Policy Report, 22, 3-22. ISSN 1075-703.

Hassinger-Das, B., Jordan, N. C., & Dyson, N. (2015). Reading stories to learn math. The Elementary School Journal, 116, 242-264. doi:10.1086/683986.

Hojnoski, R. L., Columba, H. L., & Polignano, J. (2014). Embedding mathematical dialogue in parent-child shared book reading: A preliminary investigation. Early Education and Development, 25(4).

Hutchins, P. (1986). The doorbell rang. New York, NY: Mulberry Books.

Langford, V. (1994). The picture books of Anno: A search for a perfect world through a fascination with mathematics. Children’s Literature in Education, 25, 193–202. https://doi.org/10.1007/

LeFevre, J.-A., Skwarchuk, S.-L., Smith-Chant, B. L., Fast, L., Kamawar, D., & Bisanz, J. (2009). Home numeracy experiences and children’s math performance in the early school years. Canadian Journal of Behavioral Science, 41, 55-66. doi:10.1037/a0014532.

Martin, B., & Sampson, M. R. (2005). Chicka Chicka 1, 2, 3. New York, NY: Scholastic.

National Association for the Education of Young Children and National Council of Teachers of Mathematics (NAEYC and NCTM). (2010).
Position statement. Early childhood mathematics: Promoting good beginnings. Retrieved August 20, 2018, from http://www.naeyc.org/positionstatements/mathematics.

National Council of Teachers of Mathematics (2013). Mathematics in early childhood learning: A position of the National Council of Teachers of Mathematics. Retrieved from:

Neuman, S. (1999). Books make a difference: A study of literacy. Reading Research Quarterly, 34, 286-311. doi:10.1598/RRQ.34.3.3.

Pentimonti, J. M., Zucker, T. A., & Justice, L. M. (2011). What are preschool teachers reading in their classrooms? Reading Psychology, 32, 197-236. doi:10.1080/02702711003604484.

Pianta, R. C., Barnett, S. W., Burchinal, M., & Thornburg, K. R. (2009). The effects of preschool education: What we know, how public policy is or is not aligned with the evidence base, and what we need to know. Psychological Science in the Public Interest, 10(2), 49-58. doi:10.1177/1529100610381908.

Piasta, S. B., Pelatti, C. Y., & Miller, H. L. (2014). Mathematics and science learning opportunities in preschool classrooms. Early Education and Development, 25, 445-468. doi:10.1080/

Pomerantz, E. M., & Grolnick, W. S. (2017). The role of parenting in children’s motivation and competence: What underlies facilitative parenting? In A. Elliot, C. S. Dweck, & D. Yeager (Eds.), Handbook of Competence and Motivation, 2nd Edition: Theory and Application (pp. 566-585). New York, NY: Guilford Press.

Sonnenschein, S., Metzger, S. R., & Thompson, J. A. (2016). Low-income parents’ socialization of their preschoolers’ early reading and math skills. Research in Human Development, 13, 207-224.

Stites, M. L., & Brown, E. T. (2019, online). Observing mathematical learning experiences in preschool. Early Child Development and Care. doi:10.1080/03004430.2019.1601089.

Uscianowski, C., Almeda, M. V., & Ginsburg, H. P. (2020). Differences in the complexity of math and literacy questions parents pose during storybook reading. Early Childhood Research Quarterly, 50, 40-50.
doi:10.1016/j.ecresq.2018.07.003.

About the authors

Dr. Michele L. Stites is an Assistant Professor in Early Childhood Education at the University of Maryland, Baltimore County. Her research focuses on early childhood education, Universal Design for Learning (UDL), early childhood mathematics, STEAM in early childhood education, STEAM in inclusive settings, early childhood teacher education, preparing general educators to work with children with special needs, inclusion, and military dependent children.

Dr. Susan Sonnenschein is a Professor of Psychology at the University of Maryland, Baltimore County. Her research addresses ways to promote the academic success of children from diverse (race/ethnicity, SES, linguistic) backgrounds. Although the research considers home and school factors, there is particular interest in how parental beliefs and practices are associated with children’s academic development.

Rebecca Dowling is a doctoral student at the University of Maryland, Baltimore County. Her research interests include associations between early childhood classroom practices, educational technology, the home learning environment, and emergent literacy and numeracy development in diverse populations.

Brittany Gay is a doctoral student at the University of Maryland, Baltimore County. Her research interests include the impact of poverty on educational contexts, the promotion of educational equity, program evaluation, and the translation of research to policy.

Dr. Ann Wheeler and Prof. Winifred Mallam (Texas Woman’s University) put together this blog post to summarize some of the key findings from their research article entitled ‘Examining type and quality of preservice teachers’ lessons based on children’s literature’, which has recently been published in the International Journal on Teaching and Learning Mathematics. We hope you will find their blog post interesting and feel free to share it on social media.
Using strategies of interdisciplinary lessons with university preservice mathematics teachers (UPMTs) can often enliven lessons in ways that UPMTs oftentimes do not expect. In our mathematics education classes, we engage our UPMTs through children’s literature-based mathematics lesson activities. One such lesson activity was having UPMTs who are preparing to teach in the elementary or middle school classroom create mathematics lessons based on popular children’s literature. The literature did not have a mathematics theme, but our UPMTs developed a mathematics activity incorporating the theme, plot, and/or characters in the book. (For complete details on the research-based project, see Wheeler and Mallam (2020).) University faculty can view the activity and replicate it with their UPMTs. Elementary and middle school classroom teachers can view the literature and lessons and modify them to meet the mathematical needs of their K-8 students (5-14 year olds).

How did we design our research?

Sixty UPMTs created 51 lesson activities. Six of the lesson activities were excluded because they did not focus on children’s literature, omitted a middle school Common Core State Standard for Mathematics (CCSSM), or did not include a children’s literature-based mathematics activity. This resulted in 45 lesson activities based on 43 sources of children’s literature. Literature chosen ranged from classics written by Eric Carle and Dr. Seuss to new favorites from Peter H. Reynolds and Mo Willems. The UPMTs were enrolled in an undergraduate mathematics content and pedagogy course at a university in the south central U.S.A. They were to select children’s literature that did not already have an explicit mathematics theme, to demonstrate that any children’s literature could potentially be used for elementary or middle school mathematics instruction.
After submissions were received, we coded UPMTs’ work based on the Common Core State Standards for Mathematics (CCSSM) (2010) addressed, as well as which classification in Stein et al.’s (2000) Task Analysis Guide (TAG) each lesson fit. TAG includes four categories: Doing Mathematics, Procedures with Connections, Procedures without Connections, and Memorization. The first two categories are the highest-level cognitive demand categories. See Table 1 for a list of books and lesson topics.

What did our research find?

Through our research, we found that preservice teachers were able to create lessons in all four TAG (Stein et al., 2000) classifications. Most preservice teachers created Procedures with Connections lessons, and most created geometry-based lessons. Let’s look at a couple of the most common lesson types: Procedures with Connections tasks.

One such lesson used the book Don’t Throw It to Mo! by David A. Adler. In this story, Mo is small but loves American football. He is finally put in during an American football game and wins the game for his team! For the UPMT-created lesson, students would create a paper American football and punt it 10 times, collecting data after each punt. Using the collected data, students would determine the mean, median, and mode. If students are more familiar with soccer, they could create a paper soccer ball, measure the distance the ball travels during a corner kick, and use that data to determine the same statistical measures.

Another example of a Procedures with Connections lesson was the lesson using the book Stars by Mary Lyn Ray. In this story, the reader learns all about different types of stars, whether actual stars in the sky or star-shaped objects.
In the UPMT mathematics lesson, students are to create a book of star transformations, in which they define each of the transformations and then plot given points to create stars that would be dilated, translated, reflected, and/or rotated based on given instructions.

What this means for teacher educators and classroom teachers

Through creating lessons based on popular children’s literature, teacher educators can show their UPMTs that they can create engaging mathematics lessons using stories their students are familiar with in today’s society. In addition, classroom teachers can utilize children’s books in ways they may not have tried in the past. Students can become engaged when they know that their mathematics lesson is tied to one of their favorite books. As a kind of classroom mathematics activity exchange, teachers can even have their students select a book, create a mathematics lesson, and then have another class work through the mathematics! When students are finished, they can share their work with the original class to see what results they obtain.

Adler, D. (2016). Don’t throw it to Mo. Penguin Young Readers.

National Governors Association Center for Best Practices, Council of Chief State School Officers (2010). Common Core State Standards for Mathematics. National Governors Association Center for Best Practices, Council of Chief State School Officers. www.corestandards.org/assets/CCSSI_Math%20Standards.pdf

Ray, M. L. (2011). Stars. Beach Lane Books.

Stein, M. K., Smith, M. S., Henningsen, M., & Silver, E. A. (2000). Implementing standards-based mathematics instruction: A casebook for professional development. Teachers College Press.

Wheeler, A., & Mallam, W. (2020). Examining type and quality of preservice teachers' lessons based on children's literature. International Journal on Teaching and Learning Mathematics, 3(1), 1-11. doi:10.18860/ijtlm.v3i1.9206

About the authors

Dr. Ann Wheeler is an Associate Professor in the Department of Mathematics & Computer Science at Texas Woman’s University. Her research focuses on using technology and children’s literature in the mathematics classroom.

Prof. Winifred Mallam is a Professor in the Department of Mathematics & Computer Science at Texas Woman’s University. Her research interests include teacher efficacy and the development of effective instructional strategies for the teaching and learning of K-16 mathematics, especially incorporating children's literature and problem solving.

In this blog post, Professor Camilla Björklund (University of Gothenburg, Sweden) and Professor Hanna Palmér (Linnaeus University, Sweden) highlight some cautions for teachers and parents when considering using stories like Goldilocks and the Three Bears to support the mathematics learning of their children.

In our collaborations with teachers in projects aiming to develop early childhood mathematics education, we often come across traditional stories used for pedagogical purposes. These stories are used in many creative ways, as role plays, with or without props, keeping the traditional script as well as extending the story to include non-traditional elements. Haven’t you, for example, heard about Goldilocks riding a motorcycle (see Pramling et al., 2019)? In early childhood education, mathematics is preferably situated in familiar and meaningful settings for children, and research supports the use of stories for pedagogical purposes. Narratives make the learning content interesting and appealing (Carlsen, 2013), and frame the content in ways that make it easier for children to learn mathematical concepts and relationships (Björklund & Pramling Samuelsson, 2013; Burton, 2002). However, using stories for pedagogical purposes is no simple matter. For example, to facilitate learning, the mathematical content of the story has to be made explicit and challenged.
This is not easy, as stories often entail a complexity that may hinder the discovery of mathematical content unless the teacher has an advanced understanding of the concepts in question. To highlight this complexity, the narrative of Goldilocks will here be used as an example. (For a more detailed analysis, see Palmér & Björklund, 2020.)

What mathematical challenges does the story impose?

In Goldilocks, the number three (bears, bowls, chairs, and beds) is an evident mathematical content. Another content is seriation (size, warmth, and softness). A third content of a mathematical kind is the relation to Goldilocks herself, as a reference point for what is “just right”. In the story, the idea of series may be challenged through this reference point when Goldilocks finds something that is “just right”. This is because what is considered “just right” is not consistent throughout the story. For example:

She tasted the porridge from the first bowl [Papa bear’s]. “This porridge is too hot!” she exclaimed. She tasted the porridge from the second bowl [Mama bear’s]. “This porridge is too cold,” she said. She tasted the last bowl of porridge [Baby bear’s]. “Ahhh, this porridge is just right.”

The porridge is tasted in accordance with the recurring series Papa bear–Mama bear–Baby bear: first from the largest bowl, then the middle-sized bowl, and last from the smallest bowl. But the temperature of the porridge is first too hot, then too cold and finally just right. Shouldn’t “just right” be a temperature between hot and cold? In the story, the sequence of warmness does not follow the increasing or decreasing structure that the sizes of the bowls indicate. The same ambiguous relation between size and other elements of order occurs also in other parts of the story:

“This chair is too big!” she exclaimed, sitting on the first chair [Papa bear’s]. So she sat in the second chair [Mama bear’s]. “This chair is too big, too!” she whined. So she tried the last chair [Baby bear’s].
“Ahhh, this chair is just right,” she sighed. But just as she settled down into the chair to rest, it broke into pieces!

Obviously, the last chair, which was the smallest one, was not right for Goldilocks, since she broke the chair while sitting in it. And once more, a visible series of sizes is offered for the children to perceive, but it is related not to the size of Goldilocks but to her own judgement:

She lay down in the first bed [Papa bear’s], but it was too hard. Then she lay in the second bed [Mama bear’s], but it was too soft. Then she lay down in the third bed [Baby bear’s] and it was just right.

Thus, the story entails a complexity that may hinder the emergence of mathematical learning if the mathematical meaning and discrepancies found in the story are not brought to the forefront and explored. Despite these challenges, we strongly encourage teachers to use narratives for pedagogical purposes, not least because of the coherence and interrelationship between concepts that are enabled through the narratives. Visual props may be used to illustrate this mathematics (for example, a series of chairs in three different sizes). In our observations, children direct their attention primarily to the visual props, which are often contradictory in meaning in relation to the verbal story, as in Goldilocks. A teacher who has not noticed the illogical structure of the narrative will probably not elaborate on this with the children, resulting in the mathematical concepts becoming difficult for the children to discern. However, being aware of the discrepancies between the visual series and the narrative may open up many learning opportunities and much exploration of mathematical meaning together with children in early childhood education, as the contrasts in the series can be compared and explored and the meaning of series learnt. To summarize, using stories for pedagogical purposes is not a simple matter, but it is a fun and rewarding one.

Björklund, C. & Pramling Samuelsson, I. (2013).
Challenges of teaching mathematics within the frame of a story – a case study. Early Child Development and Care, 183(9), 1339–1354. doi:10.1080/

Burton, L. (2002). Children’s mathematical narratives as learning stories. European Early Childhood Education Research Journal, 10(2), 5–18.

Carlsen, M. (2013). Engaging with mathematics in the kindergarten. Orchestrating a fairy tale through questioning and use of tools. European Early Childhood Education Research Journal, 21(4).

Palmér, H. & Björklund, C. (2020). Framing mathematics teaching with narratives – the ambiguity of Goldilocks. In M. Carlsen, I. Erfjord & P. S. Hundeland (Eds.), Mathematics Education in the Early Years. Results from the POEM4 Conference, 2018 (pp. 249-262). Cham: Springer.

Pramling, N., Wallerstedt, C., Lagerlöf, P., Björklund, C., Kultti, A., Palmér, H., Magnusson, M., Thulin, S., Jonsson, A., & Pramling Samuelsson, I. (2019). Play-responsive teaching in early childhood education. Dordrecht, the Netherlands: Springer. https://link.springer.com/book/10.1007%2F978-3-030-15958-0

About the authors

Prof. Camilla Björklund is a Professor in Education at the University of Gothenburg, Sweden. She is involved in several research projects within the field of mathematics learning and teaching in early childhood education, characterized by practice-oriented research questions and designs. She has frequently published scientific reports and books for teacher students and practicing teachers, particularly within the field of teaching about numbers and arithmetic in the early years.

Prof. Hanna Palmér is a Professor in Mathematics Education at Linnaeus University, Sweden. Her research is focused on mathematics teaching and learning in preschool, preschool class and primary school. Of special interest are problem solving, digital technology and entrepreneurial competences in mathematics education, as well as the professional identity development of mathematics teachers.
With 114 entries from 22 schools across six countries to judge, the Cindy Neuschwander Award* is very competitive this year. In this interview, 15-year-old Ayal Kaffman, the Award's winner, tells us a bit about himself, his inspiration for his winning maths story picture book entry (titled 'The Tale of Daisy Rabbit and the Autumn Festival') and his message for secondary school maths teachers and students globally.

*The Cindy Neuschwander Award (12-15 years old category) and the Stuart J. Murphy Award (8-11 years old category) represent the two age categories of MathsThroughStories.org's Young Mathematical Story Author (YMSA) competition. The competition is the world's first international mathematical story writing competition.

Could you start by telling us a bit about yourself?

I am a student in the tenth grade at Hamden High School in Hamden, Connecticut and, in the afternoon, I attend a visual arts program at the Educational Center for the Arts in New Haven, CT. I have always loved animals and biology, especially evolutionary biology, and I have liked maths in school; I also love the fine arts, especially the illustration process. I am also an avid reader, and I especially like book series with animal protagonists, like the Redwall and Mistmantle series, as well as the Brambly Hedge and Beatrix Potter books.

What inspired you to take part in the Young Mathematical Story Author (YMSA) competition?

My excellent Trigonometry teacher, Mrs. Nolan, gave an assignment to my class in the fall of 2019 to make a children's book about math. My original book for this project was on a smaller scale with fewer pages. Following its completion, I stumbled upon the MathsThroughStories.org website and its YMSA competition, which seemed like a great opportunity. Usually, I create my art for my immediate peers and teachers, but this project was to be shared with people I don't know from around the world.
This felt somewhat new and different, but ultimately it led me to make a book I am truly proud of. Creating this math story picture book was a delight from the very beginning. I love to draw and I love math, and this was the perfect combination of the two. I would certainly encourage as many students (aged 8-15 years old) as I can to create a math story picture book and enter the YMSA competition.

Your winning YMSA entry focused on the Trigonometric functions. What inspired you to choose this topic?

I focused on the Trigonometric functions, such as Cosine, Sine, and Tangent, because that was what I was learning in the fall of 2019. When the project was assigned, the objective was explicitly to explore these topics, and so I was very prepared to write a second book with these concepts in mind. It was good for me to focus on something I was learning simultaneously, instead of something I already knew well, because it challenged me to fully engage with new material. I was highly inspired by the works of Beatrix Potter, author of the popular “Peter Rabbit” books. My storyline is fairly simple, involving Daisy Rabbit moving around to different locations in preparation for a festival and solving different trigonometric problems with the help of her daughter. I found that focusing on the trigonometric functions to make this story helped me to understand the concepts more fully. Thinking of plausible situations for the concepts to be applied both required a good understanding of the uses of such functions, and it illustrated exactly how useful maths can be in the real world. Without knowledge of such functions, I wouldn't know how to solve these problems in the real world, highlighting their significance. Expressing my knowledge through art and writing made learning these math concepts enjoyable and satisfying for me.

Lastly, what would you like to say to secondary school maths teachers and students who are reading this interview?
Creating math story picture books could be an excellent addition to the math curriculum for learners of any age. Any math assignment that encourages developing an in-depth understanding of a concept while being fun and creative is great, and creating a story picture book is a good example of this. Also, story picture books, being aimed at children, come with the expectation that the reader has very little prior knowledge, and thus the writer has to go out of their way to really flesh out the concept. Moreover, creating a story picture book based on a single math concept can deepen your understanding because, through illustration, the concept can be visualised in a way that is very easy to understand. By taking what may seem to be an abstract math concept and turning it into images, the concept is rendered much easier to grasp. This can help not only those reading the story and viewing the visuals, but also the creator of the story, who must work through this process and illustrate the story.

If you want to learn more about our 2021 YMSA competition, click here.

With schools in many countries now being shut indefinitely in an effort to stop the coronavirus outbreak, many parents are increasingly worried about what they can do to support their child’s learning at home over the next few weeks and possibly months. One subject in particular that many parents need help with is maths. While some children may have already been given maths worksheets by their teachers to work on, it will likely be a matter of days before they start to lose their interest in having to routinely solve pages and pages of maths problems. Here at MathsThroughStories.org, we believe that the use of storytelling could serve as an effective maths learning strategy while keeping your child firmly engaged in their maths learning. What’s more, this strategy is not limited to just pre-school children, but is applicable to those in primary (elementary) schools and beyond too.
Which stories and how to access them?

We have previously explained what we mean by maths stories in our earlier blog post, and how storytelling (particularly the picture book format) could help to enrich maths learning in another blog post, so we won’t be repeating them here, though we would encourage you to read these blog posts as well if you have not done so already. The focus of this blog post is more practical in nature, specifically how parents can go about choosing which maths story picture books to use and how to access them. Concerning the former, our non-profit MathsThroughStories.org website contains the world’s largest database of recommendations for maths stories (500+). For your convenience, these recommendations are sorted firstly by maths topics, and then by age groups. You can find our Recommendations page here.

To access these books, you can either purchase them on-line (using platforms like Amazon) or find free videos of people reading many of these books on YouTube. We have partnered up with KidTime StoryTime, a YouTube channel which works hard to get proper permission from publishers to record videos of its team reading various story picture books, including the many titles that are also on our Recommendations database. To access these videos by KidTime StoryTime, click here. There are also several other YouTube videos of people reading maths story picture books. However, as many of them have not sought permission from publishers to record videos of themselves reading these stories in their entirety, we would not be able to officially and publicly recommend them here, but “if you look, you shall find”. If where you live is not under a lockdown, then you may also find some of our recommended maths story picture books in your local bookstores or public library. It is also worth briefly noting here that all the maths story picture books on our Recommendations database are in the English language.
That said, many of these books have also been translated by their publishers into several other languages. If you want to use these maths story picture books in non-English languages, some of them do exist, but you will need to do a bit of detective work on Google to find them or simply contact the relevant publishers directly. An example below is 'A Mousy Mess' (Driscoll, 2014), with a maths focus on sorting and classifying skills. The story has been translated into Chinese and Spanish. (We will talk more about this story later in this blog post.)
How to use stories to enhance your child's maths learning?
Many well-written maths story picture books can almost "do the teaching" to young readers themselves without the support of teachers and parents. However, if your children are too young to read independently, then of course you will need to support them with the reading if a printed copy of the story book is used. If the YouTube video version is used, then your children can listen to the stories being read to them. However, you should not limit your role to being just a reader. You can do more to help your child get the most out of their maths stories, for example, by asking a series of questions to draw their attention to the maths elements in the story or by providing resources to facilitate their maths learning based on the story. • Concerning the former (i.e. asking questions), we would suggest giving your child an opportunity to read the story purely for pleasure the first time, and then going over the story again a second time with the mathematical lens on. The kind of questions you ask your child can be formulated in a way that fosters their mathematical reasoning or extends their mathematical thinking. To illustrate these examples, let's use the 'A Mousy Mess' story again.
This is a story about a young mouse named Albert, his sister Wanda and their friend Leo, who come out to play with a child's toys before Albert accidentally knocks the toys out of their different containers. Panicked, they quickly think of different ways to put these toys back into their containers so the people would not know that they had been there. Initially, some toys are sorted by their colour, while others are sorted by their shapes and sizes. Then, Albert finds a "big blue round roll-y ball" which can go into more than one pile. This prompts the mice to rethink how best the toys should be sorted and organised. To foster your child's mathematical reasoning, you could ask them why the "big blue round roll-y ball" could go into more than one pile or group. To extend their mathematical thinking, you could give them a few other everyday objects to sort into one of the groups (or a combination of the groups). Moreover, you could also ask them to come up with their own sorting criteria (e.g. texture, price, etc.). • Concerning the latter (i.e. providing resources), once your child has read or listened to the story, you could then ensure that your child has access to a wide range of everyday objects to sort into groups; that is, provide opportunities for your child to interact with concrete materials, giving them a solid foundation that could subsequently lead to the development of more advanced abstract thinking about classification (e.g. sorting numbers into odd and even numbers; sorting numbers into prime numbers and square numbers; sorting 2D shapes based on the number of vertices [angles], etc.). The MathsThroughStories.org website has several ideas on what questions could be asked and what resources could be used to maximise maths learning opportunities.
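For older children who are already comfortable with a little programming, the same classification idea can be sketched in a few lines of Python. This is purely an illustrative sketch (the numbers and the odd/even and square criteria are just the examples mentioned above, not anything from the book), and it shows how one item can belong to more than one group, just like the "big blue round roll-y ball":

```python
# Classify numbers by two example criteria: odd/even and square/not square.
def is_square(n):
    return int(n ** 0.5) ** 2 == n

numbers = [1, 2, 3, 4, 5, 8, 9, 16, 21, 25]

odd = [n for n in numbers if n % 2 == 1]
even = [n for n in numbers if n % 2 == 0]
squares = [n for n in numbers if is_square(n)]

# Like the ball that fits more than one pile, some numbers fit two groups:
odd_squares = [n for n in odd if n in squares]

print("odd:", odd)                     # [1, 3, 5, 9, 21, 25]
print("even:", even)                   # [2, 4, 8, 16]
print("squares:", squares)             # [1, 4, 9, 16, 25]
print("odd and square:", odd_squares)  # [1, 9, 25]
```

The overlap between the groups is exactly the discussion point the story sets up: a sensible sorting scheme has to decide what to do with items that satisfy more than one criterion.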
These ideas are drawn from story-based maths lessons taught by experienced teachers in different countries around the world, and you can access them for free here.
Catering for the maths learning needs of older students
Maths story picture books for older students
It is crucial to stress that the use of maths story picture books is not for pre-school children only. There are several well-written maths story picture books that cater to the learning needs of older primary (elementary) school students and secondary school students, for example, most titles in the Sir Cumference series, as well as 'What's Your Angle, Pythagoras?' (Ellis, 2004) and 'One Grain of Rice' (Demi, 1997), which can be used to introduce the Pythagorean theorem and exponential growth respectively. Again, several other age-appropriate titles can be found on our Recommendations page.
Older students creating their own mini maths story picture books
Another way to enhance maths learning through storytelling for older students is by getting them to create their own mini maths story picture books (e.g. just 5-10 pages). It is worth highlighting here that we are not talking about just "writing maths stories", but about them actually "creating maths story picture books", that is, carefully thinking about how to visually illustrate abstract maths concepts via their page illustrations too. By contextualising and visualising abstract maths concepts (i.e. through coming up with a relevant context/storyline and page illustrations respectively), we argue that this could foster students' conceptual understanding of maths concepts. Just because a child knows, for example, that 5 x 4 equals 20 does not necessarily mean that they conceptually understand what the concept of multiplication means.
In one of our on-going research projects, when many 8- to 9-year-old children in the study were asked to come up with a word problem to represent, for example, 5 x 4, they would write something like: "If Jim has 5 sweets, and his mum gives him 4 more sweets, how many sweets does Jim have altogether?" (a word problem that actually represents addition, 5 + 4, rather than multiplication). Learning maths in a way that also helps to develop one's conceptual understanding is thus crucial. A few simple steps to help your child develop their very own mini maths story picture book:
1) decide which maths topic the story will focus on (e.g. multiplication);
2) ask them to think of a problem or a crisis that knowledge of that topic could help to meaningfully solve;
3) think of settings and characters;
4) think of how that maths topic could be visually represented in the page illustrations; and
5) bring all of these ideas together using our suggested maths story template, which is downloadable here.
If you would like to learn more about enhancing maths learning through creating maths story picture books, read another one of our blog posts on this topic here. In 2019, the MathsThroughStories.org project launched the world's first international maths story writing competition for young maths learners, and you can find examples of winning and shortlisted maths story picture books from the 2019 competition here. One particular example of a maths story picture book we like from the 2019 competition is by Harriet, an 11-year-old pupil from the UK, and her story is titled 'Mindfulness through Maths'. You can read her story here. (Details of the 2020 competition can be found here.)
Final words
We hope you find this blog post useful in giving you some ideas of how to enrich your child's maths learning at home. Of course, we do not suggest that what we have recommended should be the only thing your child does to enhance their maths learning.
What we simply argue for here is that, instead of just letting your child routinely solve pages and pages of maths worksheets mindlessly, let's balance their diet by giving them something different, cross-curricular, effective and engaging as well. If you do decide to use storytelling to enhance your child's maths learning at home, please share your experience with us on social media by, for example, tagging us @MathsStories on Twitter and @MathsThroughStories on Facebook and Instagram. To learn more about our MathsThroughStories.org initiative, please take time to explore our website or watch this video. We should be grateful if you could help share this blog post with other parents on social media! About the author Dr. Natthapoj Vincent Trakulphadetkrai is a Lecturer in Primary Mathematics Education at the University of Reading's Institute of Education (UK), and founder of MathsThroughStories.org. His website is Natthapoj.org, and he tweets at @NatthapojVinceT.

In my previous blog posts, I have explained what mathematical story picture books (MSPBs) are, and the key features that could help to enhance pupils' mathematics learning. In this blog post, I will attempt to convince you that pupils should be encouraged not just to read MSPBs, but also to create them. The idea of getting pupils to develop their mathematical understanding through creating their own MSPBs is an innovative mathematics learning strategy that I have been trying to highlight to mathematics teachers (and curriculum developers) in the UK and abroad. Here, I am not talking about asking pupils to create a full-length 30-page MSPB in one lesson.
As a mathematics learning activity, pupils can simply be asked to create their own mini MSPB with just 10 pages (or so), whereby, for example, the first two pages set the scene and the problem to be solved by the characters; the next six pages feature three variations (or attempts) in which the characters try to use their mathematical knowledge to solve the problem; and the story comes to a close on the last two pages.
Why should it matter?
When pupils create their own MSPBs, they need to carefully think about the storyline, which requires them to consider practical and meaningful applications of the mathematical concept in question. In brief, they need to contextualise abstract mathematical concepts. Additionally, as the focus is on presenting the stories in the picture book format, pupils also need to actively think about page illustrations, and how best to communicate abstract mathematical concepts and situations visually to their readers. As previously highlighted in my other blog post, not only could learning mathematics through storytelling benefit pupils mathematically, it could also develop their language and creative writing skills, and it makes possible a great cross-curricular teaching and learning opportunity. Equally important, the approach would allow pupils to see mathematics in a different light – one that is less test-driven, and more fun and imaginative. This is crucial especially if we want to improve pupils' perceptions of the subject. The preliminary findings of my pilot research with 8- to 9-year-old Year 4 (Grade 3) pupils on the effectiveness of this mathematics learning activity are promising. Specifically, the results indicate that pupils in the intervention class (i.e. those who were asked to create MSPBs on multiplication) had a better conceptual understanding of multiplication (as measured through the study's test) than their peers in the comparison class, who learned multiplication the usual way (e.g. worksheets and textbooks).
This pilot study was very small in scale, so I am spending this academic year scaling up the study to include over 1,300 pupils across 24 primary schools in the south east of England. Updates on this study will be posted on the project's webpage here. From a distance, having pupils create their own MSPBs might look like a cute, fun activity. However, when you carefully examine this approach, you see just how pedagogically powerful it can be. I am surprised this approach has not been used more often, because it costs nothing in terms of resources: just a few sheets of A4 paper, a pencil and a splash of imagination! This mathematics learning activity can also save you time! For example, if the concept in focus is multiplication, you could start the day with your maths lesson by getting your pupils to consider in which everyday situations knowledge of multiplication can help solve problems, and how the concept can be represented visually. Later, in the literacy lesson, you could get your pupils to come up with the plot, characters and setting. You could also get them to work on their draft writing, paying attention to things like grammar. After lunch, in the art lesson, you could get them to work on the page illustrations and on putting their MSPBs together. Before home time, the pupils could read their MSPBs, with the help of a visualiser, to their peers. This one activity can be meaningfully integrated across different curricular subjects throughout the day. What's more – you would have just one set of work to mark.
What's next?
If you are inspired by this blog post, I hope you will give this pedagogical approach a go in your future maths lessons! You do not need to ask your pupils to create an MSPB in every single lesson throughout the school year: all I am asking is for you to consider, for example, adopting this approach at the end of each maths topic unit, where pupils can use the opportunity to consolidate their learning of that topic unit.
Moreover, I hope you will encourage your pupils to take part in MathsThroughStories.org's Young Mathematical Story Author (YMSA) competition, an annual international competition set up to encourage young mathematics learners (8-15 years old) from around the world to embed their mathematics learning in a meaningful and engaging context through creating their own MSPBs. More details of this competition can be found here, and you can find winning and shortlisted entries from the 2019 competition here.

The idea of using mathematical story picture books (MSPBs) to enrich mathematics learning is not a new one. In fact, it has been around for almost three decades, particularly in the early years setting. What is less common is using MSPBs to enrich mathematics learning beyond the early years level. I have been arguing, and will continue to argue, that the approach could also benefit the mathematics learning of older pupils. Specifically, I would argue that the use of MSPBs could: foster pupils' conceptual understanding through multi-representation of mathematical concepts, variation of mathematical situations and the use of common misconceptions as a teaching point; develop language skills; and foster engagement with mathematics learning.
Foster conceptual understanding through multi-representation
We can all (hopefully) agree that we do not teach mathematics so that our pupils become human calculators, that is, people who are good at churning out correct mathematical answers but without conceptually understanding the concepts behind them.
As part of one of my research projects, when Jack (pseudonym), a 9-year-old pupil, was asked by me what 20 ÷ 5 equals, he was able to give me the correct answer (4) almost instantly. Then, when he was asked to (contextually) represent 20 ÷ 5 using a word problem, this is what he came up with: "Spanish Yoda had a can of Coke and a bag of bananas and apples and paint. How much did it cost her? Coke: £1.00. Bag of bananas: £2.00. Apples: £8.00. Paint: £9.00. Total £20.00". How Jack's word problem is related to 20 ÷ 5 remains a mystery. What Jack demonstrates here is a classic example of a pupil whose procedural fluency (i.e. the mechanical aspect of mathematical learning) in relation to division is good, but who has yet to fully grasp what the concept means conceptually. As many mathematics education scholars have argued, in order to demonstrate conceptual understanding in mathematics, pupils must be able to represent mathematical concepts in different ways using different representations (e.g. contextualisation, visualisation, etc.). Here, I would argue that key features of MSPBs, such as narrative and page illustrations, make learning mathematics conceptually effective, as pupils get to learn mathematical concepts through these different representations. Take 'Divide and Ride' (Murphy, 1997), for example. This is a story about a group of eleven friends who want to go on carnival rides. Some of these rides have two-people seats, others have three- and four-people seats. As these seats have to be filled up before each ride can begin, the children constantly have to work out how to group themselves. Because 11 is a prime number, there is always at least one person left out (a remainder), and additional children are consequently invited to join the group to fill up the seats for each ride. Through the storyline, children can visually see how division works and what a remainder means in real life. This helps children to contextualise the concept.
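For readers who like to see the arithmetic spelled out, the grouping problem in 'Divide and Ride' is simply division with a remainder. A minimal Python sketch (the names and numbers here are mine, used for illustration, not taken from the book itself):

```python
# 'Divide and Ride': 11 friends must fill rides whose seats come in
# rows of 2, 3 or 4. divmod() gives the full rows and who is left out.
FRIENDS = 11

for seats_per_row in (2, 3, 4):
    full_rows, left_out = divmod(FRIENDS, seats_per_row)
    # As in the story, extra children are invited to fill the last row.
    extras_needed = (seats_per_row - left_out) % seats_per_row
    print(f"Seats of {seats_per_row}: {full_rows} full rows, "
          f"{left_out} left out, invite {extras_needed} more")
```

Because 11 is prime, `left_out` is never zero for these seat sizes, which is exactly the recurring predicament that drives the story.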
Additionally, not only do the illustrations depict division through images of children filling up the seats, they also include a mathematical model at the bottom of each page to represent the division situation in a different way, as well as corresponding numerals to help children connect visual representation with symbolic representation. Theoretically speaking, the more children are able to make meaningful connections between different representations of mathematical concepts, the more conceptual understanding they are demonstrating. Thus, effective mathematical story picture books carefully consider how these different representations can be combined seamlessly throughout the story. 'Divide and Ride' by Stuart J. Murphy. (Illustrations copyright © 1997 by George Ulrich)
Foster conceptual understanding through variation
Another key strength of teaching mathematics using MSPBs is the development of pupils' conceptual understanding in mathematics through what I refer to as the variation of mathematical situations, which is often found in well-written MSPBs. To explain this concept, take 'Bean Thirteen' (McElligott, 2007) as an example. The story follows two crickets, Ralph and Flora, who have collected twelve beans to bring home for dinner. When Flora decides to pick one more bean (i.e. Bean Thirteen), Ralph is convinced it will bring bad luck. No matter how many friends they invite to try to share the 13 beans equally, it is always impossible.
Situation 1: 13 beans to be shared between 2 crickets (Ralph and Flora), resulting in 1 remaining bean (6 beans each)
Situation 2: 13 beans to be shared between 3 crickets (Ralph, Flora and 1 friend), resulting in 1 remaining bean (4 beans each)
Situation 3: 13 beans to be shared between 4 crickets (Ralph, Flora and 2 friends), resulting in 1 remaining bean (3 beans each)
Situation 4: 13 beans to be shared between 5 crickets (Ralph, Flora and 3 friends), resulting in 3 remaining beans (2 beans each)
Situation 5: 13 beans to be shared between 6 crickets (Ralph, Flora and 4 friends), resulting in 1 remaining bean (2 beans each)
In this example, while the number of crickets varies, the number of beans is invariant (kept the same). Through this variation of mathematical situations, rich mathematical investigations are made possible. Pupils can be asked, for example, to continue the pattern to show that 13 cannot be divided evenly by any number other than 1 and 13 itself (hence demonstrating the meaning of prime numbers in the process). I argue that such variation of mathematical situations is crucial to fostering pupils' conceptual understanding in mathematics. 'Bean Thirteen' by Matthew McElligott. (Illustrations copyright © 2007 by Matthew McElligott)
Another example of the variation of mathematical situations can be found in 'Fractions in Disguise' (Einhorn, 2014). This story focuses on the concept of equivalent fractions, and it is about how George Cornelius Factor (who happens to share the same initials, GCF, with – wait for it – the greatest common factor!) invents a machine, called the 'Reducer', to help him find a very sought-after fraction (5/9) that has been stolen from a fraction auction and has been disguised as another fraction by the villainous Dr. Brok. While at Dr. Brok's mansion, GCF uses his Reducer machine to reveal the true form of a range of fractions (e.g.
3/21 is really 1/7; 34/63 is already in its true form; 8/10 is really 4/5, and so on) before he comes across 35/63, which is later revealed to be the 5/9 fraction he has been looking for. 'Fractions in Disguise' by Edward Einhorn. (Illustrations copyright © 2014 by David Clark)
Through such variation of mathematical situations, both 'Bean Thirteen' and 'Fractions in Disguise' make it possible for their readers to take their time digesting the new mathematical concept they are learning, by providing them with several mathematical situations or examples showing what is, and what is not, a prime number or an equivalent fraction. The goal is that once they have seen enough examples, readers will arrive at their own definition (and hence understanding) of these concepts. Authors of well-written mathematical stories think carefully about what kind of variation their story needs to help scaffold students' learning of the mathematical concept in question.
Foster deeper understanding using common misconceptions as a teaching point
Effective mathematical stories incorporate readers' common misconceptions about a particular mathematical topic as a teaching point. A good example is 'Sir Cumference and the Fracton Faire' (Neuschwander, 2017), which follows Sir Cumference and his wife, Lady Di of Ameter, to a local Fracton Faire where local goods are sold and where different shopkeepers show how numerators and denominators can be useful for customers to indicate how much of each product they want to buy (e.g. one-fourth of a roll of fabric, four-eighths of a cheese wheel). The story addresses the common misconception that the bigger the denominator, the larger the parts.
Specifically, in the story, Sir Cumference is surprised to learn that the four-eighths of a cheese wheel that he wants is the same size as the two-fourths of the same cheese wheel that Lady Di has chosen. Authors of effective mathematical stories do research and consult with experienced mathematics educators to identify such common mathematical misconceptions and weave them into their stories. 'Sir Cumference and the Fracton Faire' by Cindy Neuschwander. (Illustrations copyright © 2017 by Wayne Geehan)
Develop language skills
From my earlier research (Trakulphadetkrai, Courtney, Clenton, Treffers-Daller, & Tsakalaki, 2017) and that of others, it has been found that children's mathematical abilities are linked to their language abilities. What is exciting is that recent research (e.g. Hassinger-Das, Jordan, & Dyson, 2015; Purpura, Napoli, Wehrspann, & Gold, 2017) has also found that using stories when teaching mathematical concepts to young children has a positive impact on the development of their language abilities, particularly their vocabulary knowledge. Why not kill two birds with one stone? Why not teach mathematics using MSPBs to support both pupils' mathematical and language development at the same time?
Engagement through emotional investment
Another key advantage of teaching mathematics using MSPBs is that pupils arguably do not see MSPBs in the same way that they see, for example, mathematics textbooks or worksheets with word problem after word problem to be solved. They are more likely to view MSPBs as something that they can be emotionally invested in, and something that they can enjoy interacting with over and over again, either together with the whole class or in their own time at their own pace. Research (e.g.
McAndrew, Morris, & Fennell, 2017) has recently found that the use of stories in mathematics teaching can help to foster children's positive attitudes towards the subject.
Final words
Teaching mathematics using MSPBs should not be confined to Nursery and Reception classes. This creative, cross-curricular and research-informed mathematics teaching and learning approach should also be utilised by teachers at the primary school level and beyond. If you want to explore our 500+ recommendations for MSPBs that can be used to cover 40+ mathematical concepts, please head to our Recommendations here, and if you want to see examples of how MSPBs can be integrated as part of your mathematics teaching, see some of our examples here.
Einhorn, E. (2014). Fractions in Disguise. Watertown, MA: Charlesbridge.
Hassinger-Das, B., Jordan, N. C., & Dyson, N. (2015). Reading stories to learn math: Mathematics vocabulary instruction for children with early numeracy difficulties. Elementary School Journal, 116(2), 242–264.
McAndrew, E. M., Morris, W. L., & Fennell, F. (2017). Geometry-related children's literature improves the geometry achievement and attitudes of second-grade students. School Science and Mathematics, 117(1-2), 34-51.
McElligott, M. (2007). Bean Thirteen. New York, NY: Penguin's Putnam Publishing Group.
Murphy, S. J. (1997). Divide and Ride. New York, NY: HarperCollins.
Purpura, D. J., Napoli, A. R., Wehrspann, E. A., & Gold, Z. S. (2017). Causal connections between mathematical language and mathematical knowledge: A dialogic reading intervention. Journal of Research on Educational Effectiveness, 10(1), 116-137.
Trakulphadetkrai, N. V., Courtney, L., Clenton, J., Treffers-Daller, J., & Tsakalaki, A. (2017).
The contribution of general language ability, reading comprehension and working memory to mathematics achievement among children with English as additional language (EAL): An exploratory study. International Journal of Bilingual Education and Bilingualism. DOI: https://doi.org/10.1080/

The 'story' in mathematical story picture books
Defining stories can be difficult. Haven (2007) attributes this difficulty to the fact that stories are "so interwoven into the fabric of our lives and minds that we can't step far enough away from our storied world to view stories objectively" (p. 10). Nevertheless, attempts to define the concept have been made. For example, Bruner (2002, pp. 16-17), a leading scholar in the field of educational psychology and narrative, argued that a story must involve a cast of characters who have: recognisable expectations about the ordinary state of the world, the story's world [...]. A story begins with some breach in expected state of things [...]. Something goes awry, otherwise there's nothing to tell about. The story concerns efforts to cope [...] with the breach and its consequences. And finally there is an outcome, some sort of resolution. The way stories involve not only characters but also some sort of struggle or problem for the characters to solve lends itself perfectly to mathematics teaching and learning, whereby characters find themselves having to use their mathematical knowledge and skills to solve a problem that they face in the story. An example of a mathematical story, defined in this way, is 'A Mousy Mess' (Driscoll, 2014).
This is a story about a young mouse named Albert, his sister Wanda and their friend Leo, who come out to play with a child's toys before Albert accidentally knocks the toys out of their different containers. Panicked, they quickly think of different ways to put these toys back into their containers so the people would not know that they had been there. Initially, some toys are sorted by their colour, while others are sorted by their shapes and sizes. Then, Albert finds a big blue round roll-y ball which can go into more than one pile. This prompts the mice to rethink how best the toys should be sorted and organised. The story encourages young readers to think of other ways the toys can be sorted to help Albert, Wanda and Leo solve the problem. 'A Mousy Mess' by Laura Driscoll. (Illustrations copyright © 2014 by Deborah Melmon)
However, we know from experience that not every story necessarily has to have a struggle or a problem for characters to solve. Take the much-loved 'Handa's Surprise' (Browne, 2014) as an example. Set in a village of the Luo tribe in south-west Kenya, this story is about a girl, Handa, who wants to surprise her friend, Akeyo, with seven delicious fruits. Handa puts the fruits in a basket, which sits on her head. Along the journey to see Akeyo, seven different mischievous animals take the fruits one by one from the basket without Handa knowing, until there is nothing left. As she walks past a tangerine tree, lots of tangerines fall into the basket on top of Handa's head (presumably very gently and very quietly!). By the time she sees Akeyo, Handa is herself surprised to see lots of tangerines in the basket instead of the seven different fruits she picked for her friend.
In this story, there are certainly characters and a storyline (however simple), but no problem or struggle for Handa to solve, and yet this story provides a meaningful context for young readers to learn a range of mathematical concepts, such as subtraction. 'Handa's Surprise' by Eileen Browne. (Illustrations copyright © 1994/2014 by Eileen Browne)
Stories can thus be defined more liberally as any narrative that simply has a storyline and a character (or characters). While such a characterisation might seem painfully obvious to some, it is crucial to make these key components of a story explicit. This is particularly important as not every picture book has a story. Let's take, for example, 'One is a Snail, Ten is a Crab' (Sayre & Sayre, 2003). While this book is very useful in helping young children learn to count and add through a series of illustrations of people and animals with different numbers of legs (e.g. "1 is a snail. 2 is a person. 3 is a person and a snail."), it is not a mathematical story in that it contains no plot or storyline. It is a concept book with some lovely illustrations and some text. That said, this should not be interpreted as meaning that concept books are inferior to mathematical story picture books. The key message here is simply that it is important that we all have a precise language to communicate with one another about what it is that we are referring to. Here at MathsThroughStories.org, our focus is on mathematical story picture books, as opposed to concept books. The reason for this is that we strongly believe that the story component has the potential to make the mathematics learning experience much more engaging, and could be particularly useful for developing students' reading comprehension ability and vocabulary knowledge. I talk a bit more about these two points in my other blog post here.
In summary, mathematical story picture books (or MSPBs) are here referred to as picture books that contain a narrative relating to mathematical ideas or applications. 'One is a Snail, Ten is a Crab' by April Pulley Sayre and Jeffrey Sayre. (Illustrations copyright © 2003 by Randy Cecil)
Types of mathematical story picture books
Story picture books that are used to enhance mathematics teaching and learning can have either an explicit or an implicit mathematical focus. Story picture books with an explicit mathematical focus can be quite easy to spot, as they often have mathematical terms in their title, as well as recommendations for relevant mathematics learning activities, which can usually be found at the end of the book. Many of them are also often found as part of a mathematical story series. For example, 'A Mousy Mess' (Driscoll, 2014), which we came across in the previous section, is part of Kane Press's Mouse Math series (20 titles altogether). Other series include HarperCollins's MathStart series (63 titles), Scholastic's Hello Math Reader series (40 titles), and Kane Press's Math Matters series (42 titles). These series are perfect for Early Years and Key Stage 1 children. There are also Charlesbridge's Math Adventure series (16 titles) and Sir Cumference series (10 titles), which are great for Key Stage 2 and 3 pupils (7-14 years old). However, stories that are great for mathematics teaching and learning do not have to have an explicit mathematical focus. In addition to 'Handa's Surprise' (Browne, 2014), which we have already seen in the previous section, another example of a story picture book with an implicit mathematical focus is 'The Doorbell Rang' (Hutchins, 1986). Nowhere in this story picture book does the author claim it was written for mathematics teaching and learning, and yet the narrative lends itself nicely to mathematical investigations, as shown here. 'The Doorbell Rang' by Pat Hutchins.
(Illustrations copyright © 1986 by Pat Hutchins) Quite often, teachers who have what I refer to as a mathematical lens, or the ability to identify meaningful opportunities for mathematics teaching and learning in story picture books with an implicit mathematical focus, do not need to rely on explicit mathematical story picture books. Such an ability can be particularly useful for teachers working in a class or a school that may not be able to afford to buy new picture books with an explicit mathematical focus. With the mathematical lens on, teachers can use any of their existing story picture books and turn them into a mathematics teaching and learning resource. Final words Regardless of how explicit or implicit the mathematical focus is, mathematical story picture books can be a powerful mathematics teaching and learning tool for the reasons outlined in my other blog post. If you want to explore our 500+ recommendations for mathematical story picture books that can be used to cover 40+ mathematical concepts, please head to our Recommendations section here, and if you want to see examples of how mathematical story picture books can be integrated as part of your mathematics teaching, see some of our examples here. Please use the Comments section below if you would like to discuss some of the ideas presented in this blog post. Browne, E. (2014). Handa’s Surprise. London, UK: Walker Books. Bruner, J. (2002). Making stories: Law, literature, life. Cambridge, MA: Harvard University Press. Driscoll, L. (2014). A Mousy Mess. New York, NY: Kane Press. Haven, K. (2007). Story proof: The science behind the startling power of story. Westport, CT: Libraries Unlimited. Hutchins, P. (1986). The Doorbell Rang. New York, NY: Mulberry Books. Sayre, A. P. & Sayre, J. (2003). One is a Snail, Ten is a Crab. London, UK: Walker Books Ltd. About the author Dr.
Natthapoj Vincent Trakulphadetkrai is a Lecturer in Primary Mathematics Education at the University of Reading’s Institute of Education (UK), and founder of MathsThroughStories.org. His website is Natthapoj.org, and he tweets at @NatthapojVinceT.
Fuglede's conjecture holds on cyclic groups $\mathbb{Z}_{pqr}$ | Published in Discrete Analysis, Harmonic Analysis, October 15, 2019 BST. Shi, Ruxi. 2019. “Fuglede’s Conjecture Holds on Cyclic Groups $\mathbb{Z}_{pqr}$.” Discrete Analysis, October. Fuglede’s conjecture holds on cyclic groups $\mathbb{Z}_{pqr}$, Discrete Analysis 2019:14, 14 pp. A conjecture of Fuglede from 1974 states that a measurable set $E\subset \mathbb R^n$ of positive Lebesgue measure has a set of translates that tile $\mathbb R^n$ if and only if the space $L^2(E)$ admits an orthonormal basis of exponential functions $\{ e^{2\pi i \lambda\cdot x}:\ \lambda\in\Lambda\}$. The set $\Lambda$ is called a spectrum for $E$. The conjecture is known to be false in dimensions 3 and higher, with counterexamples due to Tao, Kolountzakis, Matolcsi, Farkas, Révész, and Móra. Nonetheless, there are important special cases for which the conjecture has been confirmed. Fuglede proved it under the assumption that either the spectrum or the translation set is a lattice. The conjecture has also been proved for convex bodies in $\mathbb R^n$, first by Iosevich, Katz and Tao for $n=2$, and, very recently, by Lev and Matolcsi (building on earlier work by Greenfeld and Lev) for $n\geq 3$. For general non-convex sets in dimensions 1 and 2, the conjecture remains open in both directions. In dimension 1, the “tiling implies spectrum” part would follow if one could prove that all finite sets that tile the integers (or, equivalently, finite Abelian groups) by translations must satisfy certain conditions formulated by Coven and Meyerowitz. More generally, understanding tiling and spectral properties of subsets of finite groups is a key part of the problem. In this article, Ruxi Shi proves that the conjecture is true in both directions in cyclic groups of order $N=pqr$, where $p,q,r$ are distinct primes.
In this case, the “tiling implies spectrum” direction follows immediately from earlier work on the subject. Specifically, an argument due to Coven and Meyerowitz shows that if $A$ tiles $\mathbb{Z}_N$ and $N$ is a product of distinct primes, then $A$ also tiles with period $|A|$, so in particular the translation set is a lattice. However, the “spectrum implies tiling” question is much harder. While there are earlier articles (by Malikiosis-Kolountzakis and Kiss-Malikiosis-Somlai-Vizer) resolving certain 2-prime cases, there is a significant increase in difficulty between the 2-prime and 3-prime settings. For example, the 2-prime results rely on a structure theorem (due to Rédei, de Bruijn, and Schoenberg) for vanishing sums of $N$-th roots of unity. If $N$ has three or more distinct prime factors, the structure forced by that theorem becomes much more complicated, and in particular it becomes more difficult to prove certain quantitative results needed here. After this paper was accepted for publication, Gábor Somlai extended the result to cyclic groups $\mathbb{Z}_{p^2qr}$, where $p,q,r$ are distinct primes.
What is the net present value of the project Reference no: EM132014411 Explain the role of financial manager This assignment explains the role of the financial manager and the functions of the manager, and what the motives of the financial manager are. Long term financial planning This assignment is designed to analyze long-term financial planning, which begins with the sales forecast, the key input in long-term financial planning. Sources of finance for expansion into new foreign markets A quoted company is considering several long-term sources of finance for expansion into new foreign markets. Personal financial management How much will you have left over each half year if you adopt the latter course of action? Determine operational expenditures Organisations' behaviour is guided by financial data. In the short term, such data will help determine operational expenditures; in the long term, historical data may help generate forecasts aimed at determining strategic plans. In both instances. Conduct a what-if analysis Review the readings and media for this unit, including the Anthony's Orchard case study media. Familiarise yourself with the Anthony's Orchard company and its current situation. Analysis of the investment In this project, you will focus on one of these: the additional cost resulting from the purchase of an apple press (a piece of equipment required to manufacture apple juice). Business finance task - capital budgeting Your company is considering using the payback period for capital budgeting. Discuss the advantages and disadvantages of this technique. Replacement analysis This document shows the replacement analysis of a molding machine. Does the replacement yield a profit for the company or not? Method for estimating a venture's value Evaluate the venture's present value, cash and surplus cash, and basic venture capital. Financial management for profit and non-profit organizations In this essay, we are going to discuss the issues of financial management in a non-profit organisation.
Foreign company acquisition Acquisition by a foreign company and the effects of that decision and the results of foreign exchange in Euro and the exchange rate differences.
5 Best Ways to Generate Random Numbers within a Given Range and Store in a List in Python

Problem Formulation: You want to generate a list of random numbers within a specified range in Python. For example, you may want to create a list containing 10 random numbers between 1 and 50. The expected output is a Python list, such as [14, 29, 39, 7, 35, 22, 17, 3, 47, 18], where each element is a random number within the given range.

Method 1: Using random.randint() in a loop

This method involves using a for loop along with the random.randint() function to generate random numbers and store them in a list. It provides a straightforward approach to generate a specific count of random numbers within a desired range. Here’s an example:

```python
import random

random_numbers = []
for _ in range(10):
    random_number = random.randint(1, 50)
    random_numbers.append(random_number)
print(random_numbers)
```

Output:

```
[23, 19, 8, 47, 34, 39, 29, 10, 5, 16]
```

This code snippet imports the random module and initializes an empty list. Then, it iterates 10 times, generating a random number within the range of 1 to 50 during each iteration. This random number is then appended to the list. Note that each execution will yield a different set of random numbers.

Method 2: Using list comprehension with random.randint()

List comprehension in Python is a succinct way to create lists. Combining list comprehension with random.randint() provides an elegant one-liner solution to generate random numbers within a specified range and store them in a list. Here’s an example:

```python
import random

random_numbers = [random.randint(1, 50) for _ in range(10)]
print(random_numbers)
```

Output:

```
[42, 28, 7, 21, 37, 14, 3, 35, 11, 30]
```

In this snippet, we use a list comprehension that runs a loop 10 times, calling random.randint(1, 50) for each iteration and collecting the results in a new list. It’s a more Pythonic and concise way to achieve the same result as Method 1.

Method 3: Using random.sample()

The random.sample() function is used when you need to generate a list of unique random numbers.
This method ensures there are no duplicates in the list, which can be particularly useful for scenarios like lottery draws or unique ID generation. Here’s an example:

```python
import random

random_numbers = random.sample(range(1, 51), 10)
print(random_numbers)
```

Output:

```
[2, 44, 37, 23, 13, 50, 31, 7, 42, 14]
```

This code makes use of the random.sample() function to pick 10 unique numbers from a range of 1 to 50. Note that the second parameter of range needs to be one greater than our upper limit for the limit to be inclusive. This method returns non-repeating numbers, which may or may not be desired based on your use case.

Method 4: Using numpy.random.randint()

NumPy, a library for numerical operations in Python, provides the numpy.random.randint() function. It’s particularly optimized for generating large arrays of random numbers and is much faster than the pure Python approach when dealing with large datasets. Here’s an example:

```python
import numpy as np

random_numbers = list(np.random.randint(1, 51, size=10))
print(random_numbers)
```

Output:

```
[18, 2, 34, 17, 25, 33, 48, 4, 36, 15]
```

In the snippet, the np.random.randint() function generates a NumPy array of 10 random integers between 1 and 50, which is then converted to a Python list using the list() function. This method works well for high-performance and large-scale random number generation tasks.

Bonus One-Liner Method 5: Using random.choices()

If duplicates in your randomly generated list are not a concern, the random.choices() function is a suitable one-liner. It also allows for weighting of the random numbers if needed. Here’s an example:

```python
import random

random_numbers = random.choices(range(1, 51), k=10)
print(random_numbers)
```

Output:

```
[22, 14, 1, 49, 3, 3, 31, 22, 39, 31]
```

This snippet elegantly generates a list of 10 random numbers, possibly with duplicates, from 1 to 50 using random.choices(). The keyword argument k specifies the number of random choices to make. It’s concise and allows for additional complexity such as weighted probabilities.

• Method 1: Loop with random.randint(). Straightforward. May contain duplicates.
Not as efficient for very large lists. • Method 2: List Comprehension with random.randint(). Concise. May contain duplicates. Pythonic syntax and still not as efficient for massive lists. • Method 3: random.sample(). Ensures uniqueness. Quick and efficient for small ranges but is limited by the size of the input range. • Method 4: numpy.random.randint(). Optimized for performance. Ideal for numerical computations and large arrays. Requires NumPy installation. • Bonus Method 5: random.choices(). Allows duplicates and weighted options. Simple one-liner. Not suitable when unique items are needed.
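As a quick check of the duplicate behaviour summarised above (this snippet is mine, not from the original post), random.sample() guarantees distinct values while random.choices() only guarantees the length and the value range:

```python
import random

# random.sample() draws without replacement, so every value is distinct.
unique_draw = random.sample(range(1, 51), 10)
assert len(set(unique_draw)) == 10
assert all(1 <= n <= 50 for n in unique_draw)

# random.choices() draws with replacement, so duplicates may appear;
# only the length and the range of the values are guaranteed.
repeated_draw = random.choices(range(1, 51), k=10)
assert len(repeated_draw) == 10
assert all(1 <= n <= 50 for n in repeated_draw)
```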
Stepwise Regression Control Panel Use the Stepwise Regression Control panel to limit regressor effect probabilities, determine the method of selecting effects, begin or stop the selection process, and run a model. A note appears beneath the Go button to indicate whether you have excluded or missing rows. Figure 5.3 Stepwise Regression Control Panel Stopping Rule The Stopping Rule determines which model is selected. For all stopping rules other than P-value Threshold, only the Forward and Backward directions are allowed. The only stopping rules that use validation are Max Validation RSquare and Max K-Fold RSquare. See Validation Options in Stepwise Regression. P-value Threshold Uses p-values (significance levels) to enter and remove effects from the model. Two other options appear when you choose P-value Threshold: Prob to Enter Specifies the maximum p-value that an effect must have to be entered into the model during a forward step. Prob to Leave Specifies the minimum p-value that an effect must have to be removed from the model during a backward step. Minimum AICc Uses the minimum corrected Akaike Information Criterion to choose the best model. For more details, see Likelihood, AICc, and BIC. Minimum BIC Uses the minimum Bayesian Information Criterion to choose the best model. For more details, see Likelihood, AICc, and BIC. Max Validation RSquare Uses the maximum R-square from the validation set to choose the best model. This is available only when you use a validation column with two or three distinct values. For more information about validation, see Validation Set with Two or Three Values in Stepwise Regression. Max K-Fold RSquare Uses the maximum RSquare from K-fold cross validation to choose the best model. You can access the Max K-Fold RSquare stopping rule by selecting this option from the Stepwise red triangle menu. JMP Pro users can access the option by using a validation set with four or more values. 
When you select this option, you are asked to specify the number of folds. For more information about validation, see K-Fold Cross Validation in Stepwise Regression. The Direction you choose controls how effects enter and leave the model. Select one of the following options: Forward Enters the term with the smallest p-value. If the P-value Threshold stopping rule is selected, that term must be significant at the level specified by Prob to Enter. See Forward Selection Example. Backward Removes the term with the largest p-value. If the P-value Threshold stopping rule is selected, that term must not be significant at the level specified in Prob to Leave. See Backward Selection Example. Note: When Backward is selected as the Direction, you must click Enter All before clicking Go or Step. Mixed Available only when the P-value Threshold stopping rule is selected. It alternates the forward and backward steps. It includes the most significant term that satisfies Prob to Enter and removes the least significant term satisfying Prob to Leave. It continues removing terms until the remaining terms are significant and then it changes to the forward direction. Go, Stop, Step Buttons The Go, Stop, and Step buttons enable you to control how terms are entered or removed from the model. Note: All Stopping Rules consider only models defined by p-value entry (Forward direction) or removal (Backward direction). Stopping rules do not consider all possible models. Go Automates the process of entering (Forward direction) or removing (Backward direction) terms. Among the fitted models, the model that is considered best based on the selected Stopping Rule is listed last. Except for the P-value Threshold stopping rule, the model selected as Best is one that overlooks local dips in the behavior of the stopping rule statistic. The button to the right of the Best model selects it for the Make Model and Run Model options, but you are free to change this selection.
– For P-value Threshold, the best model is based on the Prob to Enter and Prob to Leave criteria. See P-value Threshold. – For Min AICc and Min BIC, the automatic fits continue until a Best model is found. The Best model is one with a minimum AICc or BIC that can be followed by as many as ten models with larger values of AICc or BIC, respectively. This model is designated by the terms Best in the Parameter column and Specific in the Action column. – For Max Validation RSquare (JMP Pro only) and Max K-Fold RSquare, the automatic fits continue until a Best model is found. The Best model is one with an RSquare Validation or RSquare K-Fold value that can be followed by as many as ten models with smaller values of RSquare Validation or RSquare K-Fold, respectively. This model is designated by the terms Best in the Parameter column and Specific in the Action column. Stop Stops the automatic selection process started by the Go button. Step Enters terms one-by-one in the Forward direction or removes them one-by-one in the Backward direction. At any point, you can select a model by clicking its button on the right in the Step History report. The selection of model terms is updated in the Current Estimates report. This is the model that is used once you click Make Model or Run Model. Rules Note: Appears only if your model contains related terms. When you have a nominal or ordinal variable, related terms are constructed and appear in the Current Estimates table. Use Rules to change the rules that are applied when there is a hierarchy of terms in the model. A hierarchy can occur in the following ways: • A hierarchy results when a variable is a component of another variable. For example, if your model contains variables A, B, and A*B, then A and B are precedent terms to A*B in the hierarchy. • A hierarchy also results when you include nominal or ordinal variables. A term that is above another term in the tree structure is a precedent term. See Construction of Hierarchical Terms.
Select one of the following options: Combine Calculates p-values for two separate tests when considering entry for a term that has precedents. The first p-value, p1, is calculated by grouping the term with its precedent terms and testing the group’s significance probability for entry as a joint F test. The second p-value, p2, is the result of testing the term’s significance probability for entry after the precedent terms have already been entered into the model. The final significance probability for entry for the term that has precedents is max(p1, p2). Tip: The Combine rule avoids including non-significant interaction terms, whose precedent terms can have particularly strong effects. In this scenario, the strong main effects might make the group’s significance probability for entry, p1, very small. However, the second test finds that the interaction by itself is not significant. As a result, p2 is large and is used as the final significance probability for entry. Caution: The degrees of freedom value for a term that has precedents depends on which of the two significance probabilities for entry is larger. The test used for the final significance probability for entry determines the degrees of freedom, nDF, in the Current Estimates table. Therefore, if p1 is used, nDF equals the number of terms in the group for the joint test, and if p2 is used, nDF equals 1. The Combine option is the default rule. See Models with Crossed, Interaction, or Polynomial Terms. Restrict Restricts the terms that have precedents so that they cannot be entered until their precedents are entered. See Models with Nominal and Ordinal Effects and Example of the Restrict Rule for Hierarchical Terms. No Rules Gives the selection routine complete freedom to choose terms, regardless of whether the routine breaks a hierarchy or not. Whole Effects Enters only whole effects, when terms involving that effect are significant.
This rule applies only when categorical variables with more than two levels are entered as possible model effects. The Stepwise Control Panel contains the following buttons: Go Automates the selection process to completion. Stop Stops the selection process. Step Increments the selection process one step at a time. Arrow buttons Enter All Enters all unlocked terms into the model. Remove All Removes all unlocked terms from the model. Make Model Creates a model for the Fit Model window from the model currently showing in the Current Estimates table. In cases where there are nominal or ordinal terms, Make Model creates temporary transform columns that contain terms that are needed for the model. Run Model Runs the model currently showing in the Current Estimates table. In cases where there are nominal or ordinal terms, Run Model creates temporary transform columns that contain terms that are needed for the model. The following statistics appear below the Stepwise Regression Control panel. SSE Sum of squared errors for the current model. DFE Error degrees of freedom for the current model. RMSE Root mean square error (residual) for the current model. RSquare Proportion of the variation in the response that can be attributed to terms in the model rather than to random error. RSquare Adj Adjusts R2 to make it more comparable over models with different numbers of parameters by using the degrees of freedom in its computation. The adjusted R2 is useful in a stepwise procedure because you are looking at many different models and want to adjust for the number of terms in the model. Cp Mallows’ Cp criterion for selecting a model. It is an alternative measure of total squared error and can be defined as Cp = SSEp/s2 - (N - 2p), where s2 is the MSE for the full model and SSEp is the sum-of-squares error for a model with p variables, including the intercept. Note that p is the number of x-variables + 1. If Cp is graphed with p, Mallows (1973) recommends choosing the model where Cp first approaches p.
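As a small illustration of the Cp definition above (the helper name and call signature here are my own, not part of JMP):

```python
def mallows_cp(sse_p, s2_full, n, p):
    # Mallows' Cp = SSE_p / s^2 - (n - 2p), where p is the number of
    # x-variables plus one (for the intercept) and s^2 is the full-model MSE.
    return sse_p / s2_full - (n - 2 * p)

# Sanity check: for the full model, SSE = s^2 * (n - p_full),
# so Cp collapses to p_full, consistent with the "Cp approaches p" guidance.
n, p_full, s2 = 31, 7, 2.5
assert mallows_cp(s2 * (n - p_full), s2, n, p_full) == p_full
```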
p Number of parameters in the model, including the intercept. AICc Corrected Akaike’s Information Criterion. For more details, see Likelihood, AICc, and BIC. BIC Bayesian Information Criterion. For more details, see Likelihood, AICc, and BIC. Forward Selection Example In forward selection, terms are entered into the model one at a time, the most significant term first, until none of the remaining terms is significant. 1. Complete the steps in Example Using Stepwise Regression. Notice that the default selection for Direction is Forward. 2. Click Step. In Figure 5.4, you can see that after one step, the most significant term, Runtime, is entered into the model. 3. Click Go. In Figure 5.5 you can see that all of the terms have been added, except RstPulse and Weight. Figure 5.4 Current Estimates Table for Forward Selection after One Step Figure 5.5 Current Estimates Table for Forward Selection after Three Steps Backward Selection Example In backward selection, all terms are entered into the model and then the least significant terms are removed until all of the remaining terms are significant. 1. Complete the steps in Example Using Stepwise Regression. 2. Click Enter All. Figure 5.6 All Effects Entered into the Model 3. For Direction, select Backward. 4. Click Step two times. The first backward step removes RstPulse and the second backward step removes Weight. Figure 5.7 Current Estimates with Terms Removed and Step History Table The Current Estimates and Step History tables shown in Figure 5.7 summarize the backward stepwise selection process. Note the BIC value of 156.362 for the third step in the Step History table. If you click Step again to remove another parameter from the model, the BIC value increases to 159.984. For this reason, you choose the step 3 model. This is also the model that the Go button produces.
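JMP's internal implementation is not shown in this documentation. Purely as an illustrative sketch of the Forward direction combined with the Minimum AICc stopping rule (all names below are my own, and the AICc parameter count follows one common convention), a greedy forward pass might look like:

```python
import numpy as np

def aicc(sse, n, k):
    # Small-sample corrected AIC for a Gaussian error model;
    # k counts the estimated parameters (coefficients plus error variance).
    return n * np.log(sse / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def forward_select(X, y):
    """Greedy forward selection: at each step, enter the column that
    lowers AICc the most, and stop when no remaining column improves it."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))

    def sse_for(cols):
        design = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        return float(resid @ resid)

    best = aicc(sse_for(selected), n, 2)  # intercept-only model
    while remaining:
        trials = {c: aicc(sse_for(selected + [c]), n, len(selected) + 3)
                  for c in remaining}
        c = min(trials, key=trials.get)
        if trials[c] >= best:
            break  # no candidate lowers AICc: stop, as in Min AICc
        best = trials[c]
        selected.append(c)
        remaining.remove(c)
    return selected

# Synthetic check: y depends strongly on columns 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = 3 * X[:, 0] + 2 * X[:, 2] + 0.1 * rng.normal(size=100)
sel = forward_select(X, y)
assert {0, 2} <= set(sel)
```

Unlike JMP's Go button, this sketch stops at the first local dip in AICc rather than looking ahead past it.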
Solve HCF and LCM Questions Class 10: NCERT Solutions for Exercise 1.2 Submitted by Atanu Chaudhuri on Sun, 23/12/2018 - 18:50 HCF and LCM questions Class 10 made easy with NCERT solutions and factorization methods. Extra practice exercises included. How to find HCF and LCM by factorization is explained in detail. Fundamental Theorem of Arithmetic: The Key to HCF and LCM Before stating the theorem, let's have a short recap of the concepts of factors, prime numbers and prime factors. A factor of an integer is a second integer that divides the first integer fully, leaving zero remainder. For example, 3 and 4 are two factors of the integer 12. A prime number is an integer that has only 1 and itself as its factors. For example, 6 is not a prime number as it has 2 and 3 as factors, but 31 is a prime number as it has only 1 and itself as its factors. From these two concepts, we can define a prime factor: A prime factor is a prime number that is a factor of a second integer. For example, of the two factors 3 and 4 of 12, 3 is a prime factor but 4 is not, as 4 has 2 as a prime factor. That's why, when we express an integer as a product of factors by factorization, we find all the prime factors of the integer. For example, when we factorize 12, we find its three factors as 2, 2 and 3 and express it as the product of these prime factors as, $12=2\times{2}\times{3}$. Now we are ready to state the Fundamental Theorem of Arithmetic: Every positive integer can be expressed as a product of a unique set of prime factors, where the individual factors may appear in any position in the product.
Reason why the fundamental theorem of arithmetic is true Let's assume the integer $N$ is factorized into two unique sets of prime factors as, $N_1=a_1\times{a_2}\times{c_3}\times{b_1}\times{b_2}\times{b_3}$, and, $N_2=a_1\times{a_2}\times{a_3}\times{b_1}\times{b_2}\times{c_2}$. Here, $N_1=N_2=N$. Dividing the two and cancelling out common factors we get, $\displaystyle\frac{N_1}{N_2}=\displaystyle\frac{c_3\times{b_3}}{a_3\times{c_2}}\neq 1$. As $N_1$ and $N_2$ are two representations of the same number $N$, the result of division of one by the other must be 1, and so $N$ cannot be expressed as a product of more than one unique set of prime factors. In other words, any positive integer $N$ can have only one unique set of prime factors. Let us now show you a few examples of breaking up a number into its component unique set of prime factors. Examples of numbers expressed as products of prime factors $32=2\times{2}\times{2}\times{2}\times{2}$, $60=2\times{2}\times{3}\times{5}$, $168=2\times{2}\times{2}\times{3}\times{7}$. Notice that in every example the factors are prime factors, and a prime factor might appear more than once in a number, as in the factor expansions of 32, 60 and 168. Now we will introduce the mechanism of finding HCF and LCM by factorization. How to Find HCF and LCM for Class 10: Step-by-Step Factorization Method Once you find the prime factors of two or more numbers, to find the HCF, just identify the prime factors common to all the numbers and form the product of these common factors as the HCF of the numbers. Example: Find the HCF of 60, 168 and 330. Solution: Expressing the three numbers in the form of products of factors, $60=2\times{2}\times{3}\times{5}$, $168=2\times{2}\times{2}\times{3}\times{7}$, $330=2\times{3}\times{5}\times{11}$. Inspecting the three sets of factors, we identify the common factors 2 and 3, and the product 6 as the HCF. The figure below shows the two factors common to each integer circled and the HCF formed as the product of these two common factors. Note that by the factorization method you can find the HCF of any number of integers just by identifying the factors common to ALL of the integers and forming their product.
To find the LCM instead of the HCF, just cross out the prime factors that are common to more than one integer while preserving the highest power of every factor in an integer. Collect the remaining factors into a product to form the LCM. For the LCM of two integers, you can divide one by the HCF and multiply the quotient with the other. By the removal of all occurrences of a factor in more than one integer, it is ensured that the HCF appears only once in the LCM, and the requirement of the smallest multiple of all the integers is fulfilled. Example: Find the LCM of 60, 168 and 330. Solution: Expressing the three numbers in the form of products of factors, $60=2\times{2}\times{3}\times{5}$, $168=2\times{2}\times{2}\times{3}\times{7}$, $330=2\times{3}\times{5}\times{11}$. Inspecting the three sets of factors, we identify the common factors 2 and 3, and remove these two from the first two numbers. Next we identify the factors 2 and 5 common to 60 and 330 and remove these two also. The product of the remaining factors forms the LCM as, $\text{LCM}=2^3\times{3}\times{5}\times{7}\times{11}=9240$. Note that to preserve $2^3$, the highest power of 2 in 168 and in all the integers combined, we couldn't remove the 2 common between 168 and 330. The figure below shows this operation to find the LCM of 60, 168 and 330. The preservation of the highest power of each factor in the LCM gives rise to the mathematically rugged method of finding the LCM: identify the highest power of each factor among all the numbers, so that the product of the highest powers of the factors, each taken once, forms the LCM. In the three integers 60, 168 and 330, $2^3=8$ is the only factor power greater than 1, and the rest of the factors 3, 5, 7 and 11 occur with power unity. So the LCM would be, $\text{LCM}=8\times{3}\times{5}\times{7}\times{11}=9240$. Let us take up a simpler example. Example 2. Find the LCM of 18 and 84. Here, $18=2\times{3}\times{3}$, $84=2\times{2}\times{3}\times{7}$, and the HCF is $2\times{3}=6$. The LCM is, $3\times{84}=252$: the factors of the HCF, 2 and 3, are removed from the first number, leaving only 3 contributed by the first number. Just multiplying 3 by the second number ensures inclusion of only the highest powers (in this case 1) of each factor only once.
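The factor-based HCF and LCM rules illustrated above translate directly into code. Here is a sketch (the function names are mine, not from the text) that stores each number's prime factors in a collections.Counter and takes the minimum power of each prime for the HCF and the maximum for the LCM:

```python
from collections import Counter
from functools import reduce

def hcf_from_factors(*factorisations):
    # HCF: each common prime raised to the lowest power it has anywhere.
    common = reduce(lambda a, b: a & b, factorisations)  # min of counts
    return reduce(lambda acc, pk: acc * pk[0] ** pk[1], common.items(), 1)

def lcm_from_factors(*factorisations):
    # LCM: each prime raised to the highest power it reaches anywhere.
    union = reduce(lambda a, b: a | b, factorisations)   # max of counts
    return reduce(lambda acc, pk: acc * pk[0] ** pk[1], union.items(), 1)

n60 = Counter({2: 2, 3: 1, 5: 1})          # 60  = 2^2 x 3 x 5
n168 = Counter({2: 3, 3: 1, 7: 1})         # 168 = 2^3 x 3 x 7
n330 = Counter({2: 1, 3: 1, 5: 1, 11: 1})  # 330 = 2 x 3 x 5 x 11

assert hcf_from_factors(n60, n168, n330) == 6
assert lcm_from_factors(n60, n168, n330) == 9240

# Example 2 check, including the HCF x LCM = product relation for two numbers.
n18, n84 = Counter({2: 1, 3: 2}), Counter({2: 2, 3: 1, 7: 1})
assert hcf_from_factors(n18, n84) * lcm_from_factors(n18, n84) == 18 * 84
```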
Notice that, $18\times{84}=6\times{3}\times{84}=\text{HCF}\times{\text{LCM}}$. This relation, that the product of the HCF and LCM of two numbers equals the product of the two numbers, follows from the definitions of the two concepts. Note: It is easier to find the HCF first and then, using it, find the LCM from the set of factors. Though the processes seem to be simple, you need to find the factors first. How would you find the factors of an integer? Let us briefly go through this process of factorization of an integer. If you decide, you may skip this section. Finding out factors, or factorization The method is as follows. Step 1: If the number is even, divide by 2 to get the quotient. Repeat as long as the quotient, which we call the result number, remains even. In this step we extract all factors of 2 from the number. We call 2 the test factor. Step 2: Test factor as the next prime number: Determine the next prime integer and repeat Step 1 with this integer as the test factor in place of 2. The first time, either when the number is odd to start with, or after extracting all factors of 2 in Step 1, the test factor with which we test the result number will be 3. Step 3: End condition: Repeat Step 2 and stop only when the result number becomes 1, or the square of the test factor becomes larger than the result number at a particular stage. For example, if we factorize 8400, Step 1 yields the factors 2, 2, 2 and 2 with the final quotient or result number as 525. Step 2: test factor 3: factors: 3; result number 175. Step 3: test factor 5: factors: 5 and 5; result number 7, the last factor. So the factors of 8400 are 2, 2, 2, 2, 3, 5, 5 and 7. To determine at any stage whether the result number at that stage is divisible by the test factor, we need to do a divisibility test before actual division. Briefly, the divisibility tests are, • for 2: if the number is even it is divisible by 2.
• for 4: if the number formed by the first two digits from the right is divisible by 4, the main number is divisible by 4. For example, 168 is divisible by 4 as 68 is divisible by 4.
• for 8: if the number formed by the first three digits from the right is divisible by 8, the main number is divisible by 8. For example, 311144 is divisible by 8 as 144 is divisible by 8.
• for 3: if the sum of the digits of the number (called the integer sum) is divisible by 3, the number is divisible by 3. For example, 87123 is divisible by 3 as the integer sum 21 is divisible by 3.
• for 9: if the sum of the digits of the number, or integer sum, is divisible by 9, the number is divisible by 9. For example, 4959 is divisible by 9 as the integer sum 27 is divisible by 9.
• for 25: if the number formed by the first two digits from the right is 00, 25, 50 or 75, the main number is divisible by 25. For example, 17625 is divisible by 25.
• for 11: if the sums of alternate digits are equal, or the difference between the sums is divisible by 11, the number is divisible by 11. For example, 539 is divisible by 11 as the difference of alternate digit sums $14-3=11$ is divisible by 11. Similarly, 253 is divisible by 11 as its alternate digit sums are equal, both being 5.
• for 5: if the unit's digit of the number is either 0 or 5, the number is divisible by 5. For example, 2345 is divisible by 5.
For the rest of the tests, the recommendation is to divide directly. Recommendation: To speed up factorization, if the number is even, test for 8, then 4, and then 2. If the number is odd, test for 25 followed by 5; then 11; and then 9 followed by 3.
NCERT Solutions for Class 10 Maths: HCF and LCM Questions
Problem 1.1 Express 140 as a product of its prime factors.
Solution problem 1.1. Step 1: As 140 is even: test for 8: fails. Test for 4: quotient result number 35, which is $7\times{5}$. So, $140=2\times{2}\times{5}\times{7}$.
Problem 1.2 Express 156 as a product of its prime factors.
Solution problem 1.2. Step 1: As 156 is even: test for 8: fails.
Test for 4: quotient result number 39, which is $3\times{13}$. So, $156=2\times{2}\times{3}\times{13}$.
Problem 1.3 Express 3825 as a product of its prime factors.
Solution problem 1.3. Step 1: Test for 25 succeeds: quotient result number: 153. Step 2: Result number 153: test for 9 succeeds: quotient result number 17. So, $3825=3\times{3}\times{5}\times{5}\times{17}$.
Problem 1.4 Express 5005 as a product of its prime factors.
Solution problem 1.4. Step 1: Test for 5: quotient result number 1001. Step 2: Result number 1001: Test for 3 fails. Test for 7 by actual division mentally: quotient result number 143. Step 3: Result number 143: Test for 11: quotient result number is the prime factor 13. So, $5005=5\times{7}\times{11}\times{13}$.
Problem 1.5 Express 7429 as a product of its prime factors.
Solution problem 1.5. Step 1: Test for 3 fails; test for 7 fails; test for 11 fails; test for 13 fails; test for 17 succeeds: quotient result number 437. Step 2: Result number 437: test for the next prime integer 19: quotient result number 23, which is a prime. So, $7429=17\times{19}\times{23}$.
Problem 2.1. Find the HCF and LCM of 26 and 91 and verify $\text{HCF}\times{\text{LCM}}=26\times{91}$.
Solution problem 2.1. By observation we express, $26=2\times{13}$, and $91=7\times{13}$. So the HCF is the single common factor 13, and the LCM is, $2\times{91}=182$.
Problem 2.2. Find the HCF and LCM of 510 and 92 and verify $\text{HCF}\times{\text{LCM}}=510\times{92}$.
Solution problem 2.2. By observation we can express, $510=2\times{3}\times{5}\times{17}$, and $92=2\times{2}\times{23}$. So, $\text{HCF}=2$, and $\text{LCM}=510\times{46}=23460$.
Problem 2.3. Find the HCF and LCM of 336 and 54 and verify $\text{HCF}\times{\text{LCM}}=336\times{54}$.
Solution problem 2.3. By observation we can express, $336=2\times{2}\times{2}\times{2}\times{3}\times{7}$, the factors taken out being first 8, then 2, then 3 and then 7, and $54=2\times{3}\times{3}\times{3}$. So, $\text{HCF}=2\times{3}=6$.
Dividing 54 by 6 and taking the product of the quotient 9 and 336, we get the LCM as, $\text{LCM}=9\times{336}=3024$. Thus, $\text{HCF}\times{\text{LCM}}=6\times{9}\times{336}=336\times{54}$.
Problem 3.1. Find the HCF and LCM of 12, 15 and 21 by the prime factorization method.
Solution problem 3.1. By observation we can express the three integers as products of prime factors, $12=2\times{2}\times{3}$, $15=3\times{5}$, and $21=3\times{7}$. The single common factor 3 is the HCF. Dividing the first two numbers by the HCF 3, the quotients 4 and 5 along with 21 are multiplied together to form the LCM as, $\text{LCM}=4\times{5}\times{21}=420$.
Problem 3.2. Find the HCF and LCM of 17, 23, and 29 by the prime factorization method.
Solution problem 3.2. As the three given integers are all prime numbers, their HCF is 1. So $\text{LCM}=17\times{23}\times{29}=11339$.
Problem 3.3. Find the HCF and LCM of 8, 9, and 25.
Solution problem 3.3. As in the previous problem, the three given integers 8, 9 and 25 have no common factor except 1, which is the HCF. The numbers as products of prime factors are, $8=2^3$, $9=3^2$, and $25=5^2$. Thus the LCM of the three numbers is their product, $\text{LCM}=8\times{9}\times{25}=1800$.
Problem 4. Given the HCF of 306 and 657 as 9, find their LCM.
Solution problem 4. Dividing 306 by 9, the quotient is 34. So the LCM of the two numbers is, $\text{LCM}=34\times{657}=22338$.
Problem 5. Check whether $6^n$ can end with the digit 0 for any natural number $n$.
Solution problem 5. For a number to end with the digit 0, that is, to be a multiple of 10, it must have a factor 2 as well as a factor 5. As 6 has only the factors 2 and 3 and no 5, no natural number power of 6 can have 0 as its end digit.
Problem 6. Explain why $7\times{11}\times{13}+13$ and $7\times{6}\times{5}\times{4}\times{3}\times{2}\times{1}+5$ are composite numbers.
Solution problem 6. In the first number, all three factors of the product term are prime, and the product term itself is a composite number.
Adding 13 to it effectively adds 1 to the product $7\times{11}$ and converts it from an odd number to an even number as follows, $7\times{11}\times{13}+13=13\times{(7\times{11}+1)}=13\times{78}$. Now the first given number is expressed with more prime factors as, $7\times{11}\times{13}+13=2\times{3}\times{13}\times{13}$. Thus the first given number is a composite number. If you add any combination of factors of a product of factors to the product itself, the result will always remain a composite number. This happens because the additive term is absorbed into the product, generating an additional group of factors; the final result will thus have at least two prime factors. For the second number we get, $7\times{6}\times{5}\times{4}\times{3}\times{2}\times{1}+5=5\times{(1008+1)}=5\times{1009}$, a composite number. This is an example of the addition of a factor to a product of factors resulting in just two prime factors: even though 5 and 1009 are both prime, their product is by definition a composite number.
Problem 7. There is a circular path around a sports field. Sonia takes 18 minutes to drive one round of the field, while Ravi takes 12 minutes for the same. If they both start at the same point at the same time, going in the same direction, after how many minutes will they meet again at the starting point?
Solution problem 7. They meet again at the starting point for the first time after 36 minutes, the LCM of 18 and 12, as 36 is the smallest number fully divisible by both 18 and 12. In this period of 36 minutes, Ravi covers 3 rounds and Sonia 2 rounds of the circular course, 3 and 2 being the quotients of 36 divided by 12 and 18 respectively. Answer: 36 minutes.
Bonus HCF and LCM Questions for Class 10: Challenge Yourself!
Q1. There are 24 peaches, 36 apricots and 60 bananas, and these have to be arranged in several rows in such a way that every row contains the same number of fruits of only one kind. What is the minimum total number of rows required to make this arrangement?
Q2. A milk vendor has 21 litres of cow milk, 42 litres of toned milk and 63 litres of double toned milk.
If he wants to pack them in cans so that each can contains the same volume of milk, and he does not want to mix any two kinds of milk in a can, then what is the least number of cans required?
Q3. Taru and Vicky take 168 seconds and 330 seconds respectively to drive one round of a circular course. Starting together at the same time in the same direction, how long will they take to meet at the starting point, each driving at constant speed?
Q4. What is the least number which when divided by 16, 18, 20 and 25 leaves 4 as remainder in each case, but when divided by 7 leaves no remainder?
Q5. What is the number between 1000 and 2000 which when divided by 30, 36 and 80 gives a remainder 11 in each case?
Q6. What is the smallest 5-digit number that is divisible by 12, 18, and 21?
Q7. What is the least number of square tiles required to pave the floor of a room 15m 17cm long and 9m 2cm broad?
Q8. What is the smallest number which when increased by 17 becomes exactly divisible by both 520 and 468?
Q9. What is the smallest number that has all the numbers between 1 and 10 (both inclusive) as factors?
Q10. The HCF and LCM of two numbers are 6 and 180 respectively. If one of the numbers is 30, what is the other?
Answers to the Bonus Exercise of new HCF and LCM Questions Class 10, with hints for solution
Q1. Answer: 10. Hint: For the minimum total number of rows, the equal row sizes for the three fruits must be maximum. This maximum common row size must be the HCF of 24, 36 and 60, that is, 12. With a row size of 12 fruits, the minimum total number of rows is, 2 + 3 + 5 = 10.
Q2. Answer: 6. Hint: For the minimum total number of cans, the can size is the maximum common factor, 21. Total number of cans, 1 + 2 + 3 = 6.
Q3. Answer: 9240 secs. Hint: The two will come together again for the first time after the period that is the LCM of 168 and 330, that is, 9240 secs.
Q4. Answer: 18004. Hint: The least such number will be a multiple of the LCM of the four numbers 16, 18, 20, 25 with 4 added to it.
The LCM of the four numbers is 3600. Testing the first multiple, the remainder of 3604 divided by 7 is 6. Testing the second multiple, the remainder of 7204 divided by 7 is 1. Testing the third multiple, the remainder of 10804 divided by 7 is 3. Testing the fourth multiple, the remainder of 14404 divided by 7 is 5. Testing the fifth multiple, the remainder of 18004 divided by 7 is 0. This is the answer. Quick way: the remainder of 3600 divided by 7 is 2. For the nth multiple to satisfy the condition of divisibility by 7 after adding 4, $2n+4$ must be divisible by 7. The smallest value of n that satisfies this condition is 5. So the desired least number is 5 times 3600 plus 4, that is, 18004.
Q5. Answer: 1451. Hint: The number you want must be a multiple of the LCM of 30, 36 and 80, plus 11, and it must be the least such number greater than 1000. The LCM of 30, 36 and 80 is 720. The least required number is then twice 720 plus 11, that is, 1440 + 11 = 1451.
Q6. Answer: 10080. Hint: The desired number must be the first multiple of the LCM of 12, 18 and 21 that is greater than or equal to 10000. The LCM of the three is 252, and 40 times 252 is the least 5-digit multiple: 10080.
Q7. Answer: 814. Hint: In a square tile, the length and breadth are equal. For the least number of tiles to cover an area that is the product of 1517cm and 902cm, the tile side must be the HCF of 1517 and 902. 1517 is not easy to factorize, so factorize 902 first, $902=2\times{11}\times{41}$. Dividing 1517 by 41, we get its factors as well, $1517=37\times{41}$. The HCF of the two is 41. To cover the length of 1517cm with a 41cm side, 37 tiles are required, and to cover the breadth of 902cm, 22 tiles are required. This means the tiles of side length 41cm will be laid in 37 rows of 22 columns, in total 37 times 22, that is, 814.
Q8. Answer: 4663. Hint: With $520=2\times{2}\times{2}\times{5}\times{13}$, and $468=2\times{2}\times{3}\times{3}\times{13}$, the LCM is, $2^3\times{3^2}\times{5}\times{13}=4680$. Subtract 17 from it to get the answer, 4663.
Q9. Answer: 2520.
Hint: The desired number is the LCM of 2, 3, 4, 5, 6, 7, 8, 9 and 10. The LCM is built from the minimum set of prime factors that supplies all the prime factors of these numbers; these factors of the LCM are, 5, 7, $8=2^3$ and $9=3^2$. The product of these four is, 2520, the desired number.
Q10. Answer: 36. Hint: The product of 6 and 180 divided by 30, that is, 36, is the other number. This is because the product of the LCM and HCF of two numbers equals the product of the two numbers.
Why the product of the HCF and LCM of two numbers equals the product of the two numbers
This is true because of the very definitions of the LCM and HCF. The HCF is the product of all prime factors common to the two numbers. The product of the two numbers is the product of all their prime factors, and in it the prime factors of the HCF occur twice. The LCM contains each prime factor of the two numbers at its highest power, taken only once, so that each of the numbers divides the LCM without any excess prime factor. To get the LCM, one instance of the prime factors in the HCF must therefore be dropped from the set of all prime factors in the product of the two numbers. This means that, on dividing the product of the two numbers by the HCF, the result will be the LCM of the two numbers.
Read the other NCERT Class 10 Chapter 1 solutions and refer to the full list of NCERT Class 10 Math solutions.
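The trial-division factorization and the HCF and LCM constructions described in these solutions can be sketched in Python (illustrative code; the function names are mine, not from the article):

```python
from collections import Counter

def prime_factors(n):
    # Trial division: extract each test factor fully, and stop when the
    # square of the test factor exceeds the remaining result number.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the remaining result number is itself prime
    return factors

def hcf_lcm(a, b):
    fa, fb = Counter(prime_factors(a)), Counter(prime_factors(b))
    hcf = 1
    for p in fa & fb:              # common factors at their lowest powers
        hcf *= p ** min(fa[p], fb[p])
    lcm = 1
    for p in fa | fb:              # all factors at their highest powers
        lcm *= p ** max(fa[p], fb[p])
    return hcf, lcm

print(prime_factors(8400))   # [2, 2, 2, 2, 3, 5, 5, 7]
h, l = hcf_lcm(18, 84)
print(h, l)                  # 6 252
print(h * l == 18 * 84)      # True, the HCF x LCM relation
```

The last line checks the relation proved in the closing section: the product of the HCF and LCM of two numbers equals the product of the numbers.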
Practice Strings and Primes with the exercise "Eratosthenes' Wallpaper"
Eratosthenes wants a wallpaper containing the prime factors of numbers. You are tasked to supply a nice wallpaper. Print consecutive numbers and their prime factors on a wallpaper with fixed width and height. The number is written first, then an equals sign, and then its prime factors in ascending order. Examples are:
- 10=2*5
- 11=11
- 12=2*2*3
If multiple numbers fit on a single line, they must be separated by a comma. If no more numbers fit on a line, the line must be right padded with minus signs to the full width. If the number won't fit on the next line either, all the following lines should be filled with only minus signs. Numbers are added to the wallpaper consecutively. So if the first number on the wallpaper is 2, the first line will start with 2, then 3, then 4, etc. If 5 won't fit on the first line, then the second line will start with 5. With the above example, if the width of the wallpaper is 15, the line starting with 10 will contain 10=2*5,11=11---. 10 and 11 fit on the line and 12 won't; the three dashes fill up the line to its width of 15.
Input. Line 1: Three integers separated by a space: the width of the wallpaper in characters, the height of the wallpaper in characters, and the first number to be prime factorized.
Output. height lines: strings containing the content of the wallpaper.
Constraints. 3 ≤ width ≤ 100, 1 ≤ height ≤ 100, 2 ≤ number ≤ 1000000000
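A minimal sketch of the number formatting the puzzle statement describes, leaving out the full line-wrapping logic (`factor_line` is a name of my choosing, not part of the puzzle):

```python
def prime_factors(n):
    # Trial division, returning factors in ascending order.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def factor_line(n):
    # "10=2*5", "11=11", "12=2*2*3": number, equals sign, factors.
    return f"{n}=" + "*".join(str(p) for p in prime_factors(n))

print(",".join(factor_line(n) for n in (10, 11)))
# 10=2*5,11=11
print("10=2*5,11=11".ljust(15, "-"))   # pad to width 15 with minus signs
# 10=2*5,11=11---
```

The `ljust` call reproduces the padding shown in the width-15 example; the real solution must also decide when the next number no longer fits on a line.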
Construction Loan Calculator | Ladabo
Advanced Construction Loan Calculator
Steps to Use the Construction Loan Calculator:
1. Input the Loan Amount: Enter the amount needed for your construction project.
2. Enter the Annual Interest Rate: Provide the interest rate on the loan.
3. Input the Loan Term: Specify the loan term in years.
4. Click the "Calculate" Button: View your estimated monthly payment, total interest, and total payment.
Step-by-Step Example for the Construction Loan Calculator
Initial Loan Setup:
• Loan Amount: $250,000
• Annual Interest Rate: 4.5%
• Loan Term: 30 years (360 months)
• Required Monthly Payment: $1,266.71
Payments Over Time:
• Total Paid Over the Loan Term: $456,015.60
• Total Interest Paid: $206,015.60
Impact of Loan Terms: Choosing a longer loan term results in lower monthly payments but increases the total interest paid. A shorter term means higher monthly payments but less total interest, allowing you to repay the loan faster. The Construction Loan Calculator helps you estimate a construction loan's financial obligations. For example, a $250,000 loan at 4.5% interest over 30 years results in a monthly payment of $1,266.71, total payments amounting to $456,015.60, and total interest paid of $206,015.60. This tool is essential for planning and managing your construction project financing effectively.
This Construction Loan Calculator helps you estimate the costs of financing a construction project. Whether you're purchasing land or planning new construction, this calculator provides a simple way to determine your total project costs, loan amount, monthly payments, and overall loan expenses.
How to Use the Calculator:
1. Land Purchase Price: Enter the cost of purchasing the land where the construction will take place.
2. Estimated Construction Costs: Input the anticipated cost of building the property.
3.
Loan-to-Cost (LTC) Ratio: Enter the percentage of the total project cost the lender is willing to finance. For example, if the lender offers a 75% LTC ratio, input “75”. 4. Loan Interest Rate: Provide the annual interest rate charged on the loan. 5. Loan Term (Months): Enter the loan’s duration in months, which typically lasts until the construction is completed. 6. Miscellaneous Monthly Expenses: Add any additional costs you expect to incur monthly during the loan term, such as insurance or taxes. Once you’ve entered the required information, click Calculate to view: • Total Project Cost: The sum of your land purchase and construction costs. • Maximum Loan Amount Based on LTC: The maximum amount of financing you can expect to receive based on your lender’s Loan-to-Cost ratio. • Total Loan Payment for the Term: The overall cost of your loan over the specified term. • Estimated Monthly Loan Payment: The approximate monthly payment based on simple interest during the construction loan period. Use this calculator to gain insight into your financial needs and make informed decisions about your construction project.
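The $1,266.71 figure in the example above comes from the standard amortization formula. A quick sketch to reproduce it (function and variable names are my own, not part of the calculator):

```python
def monthly_payment(principal, annual_rate, years):
    # Standard amortization formula: M = P * r / (1 - (1 + r)^-n),
    # where r is the monthly rate and n the number of monthly payments.
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

m = monthly_payment(250_000, 0.045, 30)
print(round(m, 2))                  # 1266.71
print(round(m * 360, 2))            # total paid over the 360-month term
print(round(m * 360 - 250_000, 2))  # total interest paid
```

Note that the second calculator's "Estimated Monthly Loan Payment" is stated to use simple interest during construction, so its result would differ from this amortized figure.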
Making floating point math highly efficient for AI hardware
Posted to AI Research, Data Infrastructure
By Jeff Johnson
In recent years, compute-intensive artificial intelligence tasks have prompted the creation of a wide variety of custom hardware to run these powerful new systems efficiently. Deep learning models, such as the ResNet-50 convolutional neural network, are trained using floating point arithmetic. But because floating point has been extremely resource-intensive, AI deployment systems typically rely upon one of a handful of now-standard integer quantization techniques using int8/32 math. We have developed an alternate approach to making AI models run efficiently. Building on a lineage of ideas reaching back to the early days of computer science more than 70 years ago, our method optimizes floating point itself. We have made radical changes to floating point to make it as much as 16 percent more efficient than int8/32 math. Our approach is still highly accurate for convolutional neural networks, and it offers several additional benefits:
• Our technique can improve the speed of AI research and development. When applied to higher-precision floating point used in AI model training, it is as much as 69 percent more efficient.
• Today, models are typically trained using floating point, but then they must be converted to a more efficient quantized format that can be deployed to production. With our approach, nothing needs to be retrained or relearned to deploy a model. AI developers can thus deploy efficient new models more easily.
• Integer quantization schemes today are growing ever more complicated and in some cases might be "overfitting" on a particular task (and thereby not retaining their general-purpose application). An efficient, general-purpose floating point arithmetic that preserves accuracy can avoid this issue.
Our techniques are discussed in detail in the research paper "Rethinking floating point for deep learning." It will take time to develop new chips designed to perform floating point math with these techniques. But the potential benefits include faster AI computation in data centers, lower-power designs for better AI on mobile devices, and simple, faster ways to achieve performance goals with fewer software changes. Moore's Law is slowing, and the age of "dark silicon" is at hand. Continued performance gains will require rethinking low-level hardware design decisions made decades ago, such as the IEEE 754 floating point standard, and using mathematical approximation where applicable. Neural networks in particular provide an excellent opportunity for this reevaluation, as they are quite tolerant of variation and experimentation. Our hardware designs for ASIC/FPGA, and the C++/PyTorch code for their evaluation, are now publicly available to the AI community. We hope that the AI community will join us in exploring this new approach.
Traditional floating point
Engineers who work in other fields may not be familiar with how traditional floating point compares with our alternatives, so a brief summary may be helpful. As is commonly known, floating point can represent both large and small real numbers in a reasonable amount of computer storage, using a system that is broadly similar to scientific notation. This format can be used to represent values such as 1,000,000 and 0.0625 in a fixed-width encoding and radix (typically binary). It is important to note that floating point can precisely represent only a limited choice of real numbers, as we have a limited number of bits. All other values are represented by one of several forms of rounding to a nearest available floating point value. A traditional binary floating point format has a sign, a significand, and an exponent. A sign bit indicates whether the number is positive or negative.
The significand (whose fractional part is commonly known as the mantissa) is a binary fixed point number of the form 0.bbb… or 1.bbb…, where the fractional part bbb… is represented by some fixed number of binary bits after the radix point. (In decimal arithmetic, the radix point is also known as the decimal point, separating integral from fractional values.) The exponent is a signed integer that represents multiplication of the significand by a power of 2. A significand with a leading binary 1 (1.bbb…) is known as normal, whereas one with a leading binary 0 (0.bbb…) is denormal. The IEEE 754 floating point standard, common in most modern-day computers, has both normal and denormal significands. The leading digit of the significand need not be explicitly stored; in IEEE 754, the exponent field determines whether it is 1 or 0. This graphic shows an encoding of -1.625 in 16-bit IEEE 754 binary16 half-precision floating point, with a fixed-size, 5-bit exponent and 10-bit significand fraction. The IEEE exponent has a bias of -15 added to it, so the encoded exponent 15 below actually represents (15 – 15) or 0. AI arithmetic today and tomorrow The neural networks that power many AI systems are usually trained using 32-bit IEEE 754 binary32 single precision floating point. Reduction to 16 bits (half precision or formats such as bfloat16) yields some performance gains, but it still pales in comparison to the efficiency of equivalent bit width integer arithmetic. These floating point variants can use the original 32-bit floating point neural network data quite readily, but integer quantization to 8 (or fewer) bits often needs learned quantization parameters and model retraining. Many int8/32 quantization schemes can work as accurately as the original floating point model, but they might also be overfitting on the task at hand, unable to retain their accuracy when tested on tasks other than the ImageNet validation set. 
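The -1.625 binary16 example above can be checked with Python's `struct` module, which supports the IEEE 754 half-precision format (a quick verification sketch, not from the original post):

```python
import struct

# Encode -1.625 as IEEE 754 binary16 (half precision) and pick it apart.
bits = int.from_bytes(struct.pack("<e", -1.625), "little")

sign = bits >> 15                # 1 sign bit
exponent = (bits >> 10) & 0x1F   # 5 exponent bits, bias 15
fraction = bits & 0x3FF          # 10-bit significand fraction

print(f"{bits:016b}")            # 1011111010000000
print(sign, exponent, fraction)  # 1 15 640

# value = (-1)^sign * (1 + fraction/2^10) * 2^(exponent - 15)
value = (-1) ** sign * (1 + fraction / 1024) * 2 ** (exponent - 15)
print(value)                     # -1.625
```

The encoded exponent 15 represents (15 - 15) = 0, and the fraction 1010000000 contributes 0.5 + 0.125 = 0.625, matching the description in the text.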
But there are a variety of alternatives to integer, fixed point, or floating point for computer arithmetic as practiced today. Some of these methods reach back to the 1950s. We've used this line of ideas to produce a floating point arithmetic that can outperform int8/32. Our implementation is quite different from floating point as seen today in hardware, even with variations such as denormal flush-to-zero or word size/field bit width changes such as bfloat16 or minifloat. Unlike int8/32 quantization, our implementation is still a general-purpose floating point arithmetic, with results interpretable out of the box.
Keys to more efficient floating point
To develop a new method for highly efficient floating point, we considered various sources of hardware floating point inefficiency:
1. Large word size: Much compute energy is spent moving data: external DRAM to internal SRAM, SRAM to register, or register to register (flip-flops). The larger the floating point word size, the more energy is spent.
2. General fixed point machinery: Significands are fixed point, and fixed point adders, multipliers, and dividers operating on them are needed for arithmetic operations. The greater the precision (significand length) of the floating point type, the larger these components will be. Hardware multipliers and dividers are usually much more resource-intensive (chip area, power, and latency) than hardware adders.
3. General floating point machinery: This handles the "floating" of the radix point and is thus integral to a floating point representation. Examples are leading zero (LZ) counters for renormalization, shifters for significand alignment, and rounding logic. Floating point precision also dominates the hardware resources used for this machinery.
4. IEEE 754-specific machinery: This provides denormal support for gradual underflow as implemented in the IEEE 754 standard, with additional shifter, LZ counter, and other modifications needed for significand renormalization.
Denormal handling adds complexity and overhead to most floating point operations.
Reducing word size
Shrinking word size provides an obvious energy advantage. We can try compressing 32-bit data into 8 or 16 bits. A typical floating point fixed-size field encoding forces difficult choices to be made for reducing dynamic range (exponent) and precision (significand), when what we need is some preservation of both. We can handle this trade-off differently. Floating point is itself a quantization of (infinite precision) real numbers. A quantizer adapted to the seen data distribution has less reproduction error. We typically don't have much prior knowledge about the data distributions encountered on a general-purpose computer. Neural network distributions, however, are near Gaussian in practice, sometimes further controlled by procedures such as batch normalization. Standard floating point keeps as much significand precision at 10^5 as at 10^-5, but most neural networks perform their calculations in a relatively small range, such as -10.0 to 10.0. Tiny numbers in this range (for example, 0.0001) are frequently used, but not large ones. Ideally, we could change the quantizer to give higher precision where we need it and keep some dynamic range for small numbers. Tapered floating point can let us achieve these goals and reduce word size. Gustafson's posit is an excellent form of tapering. Posits encode the exponent in a variable number of bits using a prefix-free code, with the significand fraction occupying the rest. It maximizes precision around +/-1.0, with less precision toward 0 or +/-infinity. It is both lossy compression and expansion, losing precision in some places to preserve dynamic range elsewhere. It can thus give both higher precision (in certain places) and greater dynamic range than could be the case with IEEE-style floating point.
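Posit tapering can be made concrete with a small decoder. The sketch below follows Gustafson's published posit layout (sign bit, run-length-encoded regime, es exponent bits, then fraction) for the 8-bit, es = 1 case the post later uses; it is an illustration of the format, not code from the paper:

```python
def decode_posit8(bits, es=1, nbits=8):
    # value = (1 + fraction) * 2^(regime * 2^es + exponent)
    if bits == 0:
        return 0.0
    sign = bits >> (nbits - 1)
    if sign:  # negative posits are decoded from the two's complement
        bits = (-bits) & ((1 << nbits) - 1)
    # The regime is a run of identical bits after the sign bit; its
    # length (not a fixed field) encodes a coarse power of 2^(2^es).
    rest = [(bits >> i) & 1 for i in range(nbits - 2, -1, -1)]
    run = 1
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    regime = run - 1 if rest[0] == 1 else -run
    body = rest[run + 1:]            # skip the regime terminator bit
    exponent = 0
    for i in range(es):              # es exponent bits, if any remain
        exponent = (exponent << 1) | (body[i] if i < len(body) else 0)
    frac = sum(b / 2 ** (i + 1) for i, b in enumerate(body[es:]))
    value = (1 + frac) * 2.0 ** (regime * (1 << es) + exponent)
    return -value if sign else value

print(decode_posit8(0b01000000))  # 1.0
print(decode_posit8(0b01010000))  # 2.0
print(decode_posit8(0b01100000))  # 4.0
```

Longer regimes spend bits on dynamic range instead of fraction, which is exactly the tapering described above: maximum fraction precision near 1.0, progressively less toward the extremes.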
The posit idea can be extended to other prefix-free codes, such as Huffman coding, when we don't know the data distribution up front.
Fixed point machinery
It is possible to avoid multipliers and dividers for operating on significands. A significand can be considered generally as a fraction map f(x), mapping a fixed point value x in [0, 1) to [1, 2). (This approach was detailed in Lindstrom et al. 2018.) In typical normalized floating point, f(x) is the affine function 1+x (which we'll call a linear domain number). When f(x) = 2^x, we have the logarithmic number system (LNS), in which multiplication and division turn into addition and subtraction. LNS addition, though, requires huge hardware lookup tables to compute the sum or difference of two log domain numbers. This has been one of the main problems with LNS adoption, as these tables can be more cumbersome than hardware multipliers. Note that typical floating point is already a combination of logarithmic (exponent) and linear (significand) representations, but the LNS representation is fully logarithmic.
Floating point machinery
A useful operation in computer linear algebra is multiply-add: calculating the sum of a value c with a product of other values a x b to produce c + a x b. Typically, thousands of such products may be summed in a single accumulator for a model such as ResNet-50, with many millions of independent accumulations when running a model in deployment, and quadrillions of these for training models. Floating point fused multiply-add (FMA) is a common means of multiply-add with reduced error, but it is much more complicated than a standard floating point adder or multiplier. A technique known as Kulisch accumulation can avoid FMA complexity. A similar operation was in the first programmable digital computer, Konrad Zuse's Z3 from 1941. Gustafson has also proposed standard usage of Kulisch accumulation in his recent floating point studies.
The idea is not to accumulate in floating point but instead to maintain a running sum in fixed point, large enough to avoid underflow or overflow. Unlike floating point addition, Kulisch accumulation exactly represents the sum of any number of floating point values. The summation is associative and reproducible regardless of order. When done with all sums, we convert back to floating point by significand alignment and rounding. The diagram below shows an example accumulation step. A Kulisch accumulator currently contains the value 35.5, and we are adding 0.84375 into it, represented as a linear domain floating point value. This floating point value being summed may have come previously from a product of scalar values, or it may just be a single value that we wish to accumulate. The floating point value is converted to fixed point by aligning the significand's radix point based on the floating point exponent. This conversion uses an adjustment factor that is the effective exponent of the accumulator's most significant bit (6 in our example). The aligned significand and accumulator are then summed together with carry. (For simplicity, we have omitted additional bits of precision that a Kulisch accumulator may have to support underflow and overflow.) Kulisch accumulation is costly in 32+ bit floating point, as the accumulator, shifter, and adder may be 500+ bits in size, but it is quite practical for smaller types. Kulisch accumulation cannot be used directly for log domain summation. But just as Kulisch accumulation performs the sum in a different form (fixed point) than that of the arguments (floating point), we can take a similar approach here, so we don't need a huge LNS sum/difference lookup table. We can approximate log values in the linear domain, Kulisch accumulate in the linear domain, and then convert back to the log domain when all sums are complete. This strategy works very well for general linear algebra, as vector inner product requires many repeated sums in an accumulator.
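The accumulation step in the example above can be mimicked in software: convert each float to a wide fixed point integer, add exactly, and round back to floating point once at the end. A sketch follows, with Python's arbitrary-precision integers standing in for the wide hardware accumulator; the scale of 2^1074 is chosen so that every finite double becomes an exact integer:

```python
from fractions import Fraction

SCALE = 1 << 1074  # the smallest positive double is 2^-1074, so every
                   # finite double times SCALE is an exact integer

def kulisch_sum(values):
    # Kulisch-style accumulation: align each float to a common fixed
    # point and add exactly; there is no per-step rounding, and the
    # result does not depend on summation order.
    acc = 0
    for v in values:
        acc += int(Fraction(v) * SCALE)   # exact conversion of a float
    return float(Fraction(acc, SCALE))    # single rounding at the end

print(kulisch_sum([35.5, 0.84375]))     # 36.34375, as in the diagram
print(sum([1.0, 1e-16, -1.0]))          # 0.0: naive float sum loses 1e-16
print(kulisch_sum([1.0, 1e-16, -1.0]))  # 1e-16: exact sum keeps it
```

The last two lines show the property the text claims: ordinary float addition rounds at every step and can lose small addends entirely, while the fixed point accumulator represents the running sum exactly.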
IEEE 754-specific machinery

The posit encoding that was useful for word size reduction also avoids this problem, as the posit significand is always normalized. Gradual underflow lets precision fall off gradually rather than immediately, and is handled in the IEEE 754 denormal representation by the location of the leading one in the significand fraction. Posit tapering toward smaller numbers results in significand fraction bits being used for the exponent instead, extending the dynamic range and reducing the precision. Posit tapering is functionally similar to denormal gradual underflow, but with no overhead for renormalizing the significand. Posit gradual overflow is likewise supported in a similar manner with tapering.

Putting it together

To achieve our performance gains, we combine these four techniques. A log domain representation avoids hardware multipliers. We repurpose posit encoding for log numbers. To compete against int8/32, we consider an 8-bit format called (8, 1, alpha, beta, gamma) log. (8, 1) are the posit parameters. This encoding gives a more than 16 million to 1 ratio between our largest and smallest positive values while preserving 4 bits of (log domain) precision around 1.0, all in 8 bits (only 256 possible values). The alpha, beta, and gamma values control log-to-linear and linear-to-log conversion. As noted above, we perform log domain sums in the linear domain. This conversion is very approximate, but unlike FMA, we have no linear domain error with Kulisch accumulation for sequential sums. We call this technique ELMA, or exact log-linear multiply-add. The log domain multiplication is exact, as are all linear domain sums, but the log-to-linear conversion is approximate, as is the return linear-to-log conversion. The trade-off is quite acceptable in practice. Compared with floating point FMA, the ELMA multiply-add circuit at its core is simple.
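The ELMA flow can be illustrated with a rough software sketch. This is our own Python model, not the hardware design: the `SCALE` constant and helper names are ours, and a hardware implementation would replace the `2.0 ** l` conversion with a small lookup table.

```python
import math

SCALE = 1 << 30   # fixed-point scale for the linear-domain accumulator (our choice)

def log_mul(lx, ly):
    # log domain multiply is exact: just add the logs
    return lx + ly

def log_to_linear(l):
    # approximate log-to-linear conversion (a lookup table in hardware)
    return int(round((2.0 ** l) * SCALE))

def linear_to_log(acc):
    # approximate return conversion, done once after all sums are complete
    return math.log2(acc / SCALE)

# Inner product of (1.5, 2.0, 0.75) and (2.0, 0.5, 4.0) with log domain inputs.
xs = [math.log2(v) for v in (1.5, 2.0, 0.75)]
ys = [math.log2(v) for v in (2.0, 0.5, 4.0)]
acc = 0
for lx, ly in zip(xs, ys):
    acc += log_to_linear(log_mul(lx, ly))   # every linear domain sum is exact

result = linear_to_log(acc)   # ~log2 of 1.5*2 + 2*0.5 + 0.75*4 = log2(7)
```

Only the two conversions introduce error; the multiplications and the running sum itself are exact, mirroring the division of labor described above.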
Three adders, a lookup table, and a shifter do most of the work.

Drop-in replacement

Unlike int8/32, our 8-bit log format for neural networks does not require learning quantization parameters, activation sampling, or retraining of the original network. We simply take the 32-bit floating point parameters of a network such as ResNet-50 and convert them using round-to-nearest-even. Usage of posit encoding preserves both the needed dynamic range and precision in such a small type. Using (8, 1, 5, 5, 7) log with ELMA in the same manner as original ResNet-50 math, we achieved 75.23 percent top-1 and 92.66 percent top-5 accuracy on the ImageNet validation set, a loss of 0.9 percent and 0.2 percent, respectively, from the original. These results are similar to those of many existing int8/32 quantization methods. It is possible that the fine-tuning training and model tweaks used in int8/32 quantization can further improve our method’s performance, but our baseline result is achieved with minimal software effort. All math is still performed in general-purpose floating point arithmetic, using compressed encoding as our quantizer. Our design with ELMA can also be used for nonlinear algebra tasks such as polynomial evaluation.

Hardware efficiency

Using a commercially available 28-nanometer ASIC process technology, we have profiled (8, 1, 5, 5, 7) log ELMA as 0.96x the power of int8/32 multiply-add for a standalone processing element (PE). In a full 32×32 systolic array for matrix multiplication, the log ELMA PE formulation is 0.865x the power of the int8/32 PE version. The power savings come largely from eliminating hardware multipliers. Extended to 16 bits — and even without denormal support, which is a significant source of inefficiency in IEEE 754 — this method uses 0.59x the power and 0.68x the area of IEEE 754 half-precision FMA, with reduced latency. These gains at 16 bits can be leveraged to support training more complex AI models in the same amount of time.
Against 32-bit IEEE 754 single-precision FMA, ELMA will not be effective, though, as the Kulisch accumulator is massive (increasing adder/shifter sizes and flip-flop power), and the log-to-linear lookup table is prohibitive.

What’s next

Realizing the promise of AI requires significant efficiency gains that we can achieve only with new approaches, not just building on old ones. For example, software emulation is often too slow to effectively test new arithmetic designs on cutting-edge AI models. It is unfortunately more difficult to perform experiments in FPGA/ASIC hardware than in software, leaving the universe of these potential gains largely underexplored. If, however, new hardware is developed to harness these techniques, it could benefit a wide range of AI research and applications. We plan to investigate 16-bit ELMA designs in hardware and to compare their behavior with IEEE 754 half-precision floating point and bfloat16 for AI model training and other tasks. These alternative ideas and numerical approximations are not always applicable, but AI provides a unique opportunity to explore their boundaries and help overturn old notions of what is possible in hardware.
{"url":"https://code-dev.fb.com/2018/11/08/ai-research/floating-point-math/","timestamp":"2024-11-07T02:58:52Z","content_type":"text/html","content_length":"101169","record_id":"<urn:uuid:452266e4-2b2a-4f41-a0d8-c66741f311fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00637.warc.gz"}
Calculus AB & BC

Hello everyone! I’m super busy this week tutoring my students for their finals (which mostly take place next week). Yesterday I had 13 hours of students, yikes!! I’m going to try to blog everyday this week but am going to keep my posts pretty short in light of my time crunch 🙂 Here’s a video on how and why it’s good for my BC Calculus students to know the cylindrical shell method for finding volumes of revolution (in addition to their preferred methods of disks and washers). And here’s a video explaining how to look at a graph of a derivative and answer questions about the original function. I ALWAYS get a ton of questions on how to do this! My AB Calculus students have been doing this very recently…
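The shell and washer methods can also be cross-checked numerically, which is a nice sanity exercise for students. Here is a quick sketch (the region is just an illustrative example): the area under y = x² for 0 ≤ x ≤ 1, revolved about the y-axis, computed both ways.

```python
import math

def integrate(f, a, b, n=50_000):
    # midpoint-rule numeric integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Shell method: V = ∫ 2*pi*x*f(x) dx over x in [0, 1]
shells = integrate(lambda x: 2 * math.pi * x * x**2, 0, 1)

# Washer method: V = pi*∫ (R_out^2 - R_in^2) dy with R_out = 1, R_in = sqrt(y)
washers = integrate(lambda y: math.pi * (1 - (math.sqrt(y)) ** 2), 0, 1)

print(shells, washers)   # both approach pi/2
```

Both integrals evaluate to π/2, confirming that the two setups describe the same solid.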
{"url":"https://mathsaver.com/?cat=3&paged=2","timestamp":"2024-11-08T12:03:52Z","content_type":"text/html","content_length":"18391","record_id":"<urn:uuid:ad482cad-537f-418e-bff8-05869dbc1715>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00660.warc.gz"}
Free Throw Probability Calculator - Calculator Wow

In basketball, free throws are a crucial aspect of the game, often determining the outcome in close matches. Understanding and improving your free throw performance can significantly impact your overall game. A Free Throw Probability Calculator is a valuable tool for players, coaches, and analysts to measure shooting accuracy and track improvements over time. By providing a precise calculation of free throw success rates, this tool helps in evaluating performance and devising strategies to enhance scoring efficiency. The importance of a Free Throw Probability Calculator cannot be overstated, especially in competitive basketball. Free throws are awarded after certain fouls and are often crucial in high-pressure situations. Knowing your probability of making a free throw can offer insights into your shooting consistency and help identify areas for improvement. For players, tracking free throw percentages allows for targeted practice, helping to refine technique and increase overall shooting accuracy. Coaches benefit from this data by adjusting training programs and game strategies to leverage players’ strengths and address weaknesses. Analysts and statisticians use free throw probabilities to evaluate player performance and make informed decisions regarding team composition and game tactics.

How to Use

Using a Free Throw Probability Calculator is straightforward and involves the following steps:
1. Input Free Throw Attempts: Enter the total number of free throw attempts made during a game or practice session. This number represents how many free throws were attempted and is crucial for calculating the shooting percentage.
2. Input Free Throws Made: Enter the number of successful free throws made out of the total attempts. This value represents the number of free throws that actually went through the basket.
3. Calculate the Probability: Click the calculate button.
The calculator will use the formula \(P(\text{FT}) = \frac{\text{FTM}}{\text{FTA}} \times 100\) to determine the probability percentage. This result shows how successful a player is at making free throws relative to their attempts.

10 FAQs and Answers

1. What is the formula for calculating free throw probability? The formula is \(P(\text{FT}) = \frac{\text{FTM}}{\text{FTA}} \times 100\), where \(P(\text{FT})\) is the free throw probability percentage, \(\text{FTM}\) is the number of free throws made, and \(\text{FTA}\) is the number of free throw attempts.
2. How is the free throw probability expressed? The free throw probability is expressed as a percentage, representing the proportion of successful free throws out of the total attempts.
3. Why is tracking free throw probability important? Tracking free throw probability helps players and coaches understand shooting performance, identify areas for improvement, and develop strategies to enhance game outcomes.
4. Can I use the calculator for different players? Yes, the calculator can be used for any player or multiple players by inputting their individual statistics to evaluate their free throw performance.
5. How often should I use the Free Throw Probability Calculator? It is beneficial to use the calculator regularly during practice sessions and games to track progress and make adjustments to shooting techniques.
6. What if I make no free throws? If no free throws are made, the calculator will show a probability of 0%. This indicates that there were no successful free throws out of the total attempts.
7. Can the calculator be used for historical data? Yes, the calculator can be used to analyze historical data by entering past performance statistics to assess trends and improvements over time.
8. Is there a minimum number of attempts required for accurate results?
While there is no strict minimum, more attempts generally provide a more accurate representation of a player’s free throw probability. 9. Can the calculator be used for teams? Yes, the calculator can be used to calculate the average free throw probability for an entire team by inputting the combined statistics of all players. 10. What factors can affect free throw performance? Factors affecting free throw performance include shooting technique, concentration, physical condition, and psychological pressure. Regular practice and analysis can help improve these aspects. A Free Throw Probability Calculator is an essential tool for anyone involved in basketball, from players to coaches and analysts. By providing a clear and precise measurement of free throw performance, this calculator enables users to evaluate their accuracy, track improvements, and develop effective strategies. Whether you are looking to refine your skills, enhance team performance, or analyze historical data, understanding and utilizing free throw probabilities can significantly impact your success on the court.
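The calculation described above is a single line of arithmetic; here is a minimal sketch (the function name is ours, not the site's):

```python
def free_throw_probability(ftm, fta):
    """P(FT) = FTM / FTA * 100, expressed as a percentage."""
    if fta == 0:
        raise ValueError("at least one free throw attempt is required")
    return ftm * 100 / fta

print(free_throw_probability(41, 50))   # 82.0
print(free_throw_probability(0, 10))    # 0.0 when no free throws are made
```

Guarding against zero attempts avoids a division-by-zero error, matching the calculator's behavior of requiring at least one attempt for a meaningful result.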
{"url":"https://calculatorwow.com/free-throw-probability-calculator/","timestamp":"2024-11-07T00:53:37Z","content_type":"text/html","content_length":"66364","record_id":"<urn:uuid:c8258d48-bb15-4a80-ab31-97942582dcf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00504.warc.gz"}
Kronecker, God and the Integers

Natural numbers were created by God, everything else is the work of men — Kronecker (1823–1891).

“Natural numbers were created by God, everything else is the work of men.” Kronecker in a lecture for the Berliner Naturforscher Versammlung (1886). Leopold Kronecker (1823–1891) was a German mathematician who worked on number theory and algebra. He is considered a pre-intuitionist, being only close to intuitionism because he rejected Cantor’s Set Theory. He was, in fact, more radical than the intuitionists. Unlike Poincaré, for example, Kronecker didn’t accept the transfinite numbers as valid mathematical entities. Kronecker is often remembered as the first to cast doubts on non-constructive existence proofs (i.e., proofs that show, by contradiction, that something exists). He carried this debate through many years against Karl Weierstrass, and he insisted that all mathematics should be reduced to the positive whole numbers. Was he right? According to O’Connor and Robertson, “Kronecker believed that mathematics should deal only with finite numbers and with a finite number of operations. He was the first to doubt the significance of non-constructive existence proofs. It appears that, from the early 1870s, Kronecker was opposed to the use of irrational numbers, upper and lower limits, and the Bolzano-Weierstrass theorem, because of their non-constructive nature. Another consequence of his philosophy of mathematics was that to Kronecker transcendental numbers could not exist.” O’Connor, J. and Robertson, E. in Leopold Kronecker (1999).
His most famous phrase, “natural numbers were created by God, everything else is the work of men,” summarizes his ideas for the foundation of mathematics, but here he tried to explain it in a less biblical way: “So the results of general arithmetic also belong properly to the special, ordinary theory of numbers, and all the results of the profoundest mathematical research must in the end be expressible in the simple forms of the properties of integers.” Kronecker in From Kant to Hilbert: A Source Book in the Foundations of Mathematics Vol. II (2007). A great part of the scientific community doesn’t agree with him. Says Boniface, “Kronecker’s views on the foundations of mathematics are often reduced to jokes and regarded as an outdated set of ill-assorted ideas. A closer look however shows that they constitute an original and coherent doctrine justified by epistemological convictions.” Jacqueline Boniface in Leopold Kronecker’s conception of the foundations of mathematics (2005). His criticisms, however, can one day be proved right. Mostly because nothing is resolved in the field of the foundations of mathematics… After his death, according to Chaitin, there were a few scientific results that helped endorse his criticisms. Let’s see: “1. The diagonal and probabilistic proofs that reals are uncountable, and 2. The diagonal and probabilistic proofs that there are uncomputable reals. (…) In the first case these are the famous Jules Richard paradox (1905), Emile Borel’s know-it-all real (1927), and the fact that most reals are unnameable, which was the subject of (…) his last book, published when Borel was 81 years old (…). In the second case the frightening features are the unsolvability of the halting problem (Turing, 1936), the fact that most reals are uncomputable, and last but not least, the halting probability Ω, which is irreducibly complex (algorithmically random), maximally unknowable, and dramatically illustrates the limits of reason.” Chaitin, G.
in How Real Are Real Numbers? (2006). Therefore, these results about real numbers revealed: “Chasms beneath the feet of mathematicians.” Chaitin, G. in How Real Are Real Numbers? (2006). What to do about his criticisms? I don’t know either. Maybe now you can think of a solution.
{"url":"https://www.cantorsparadise.org/kronecker-god-and-the-integers-28269735a638/","timestamp":"2024-11-12T04:14:23Z","content_type":"text/html","content_length":"33612","record_id":"<urn:uuid:73a724f6-1aac-47c4-846d-a29a999c1bec>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00768.warc.gz"}
Accuracy vs. Precision: Key Concepts in Measurement - Chemistry Fundamentals

Accuracy and precision are two important concepts in measurement and data analysis, often used in various fields such as science, engineering, and statistics. They describe how close measurements or data points are to a true value or to each other. Here’s how accuracy and precision are defined, along with suitable examples:

1. Accuracy: Accuracy refers to the degree of closeness between a measured or observed value and the true or accepted value. In other words, it measures how well a measurement represents the actual or expected value. Accuracy is often expressed as a percentage or a fraction, where a higher value indicates a more accurate measurement. Imagine you have a target in a shooting range, and you are trying to hit the bullseye. If your shots consistently land near the center of the target, your shooting is accurate. If the shots cluster around the bullseye, even if they are not all in the exact center, you are still accurate because you are close to the true target.

2. Precision: Precision, on the other hand, refers to the degree of consistency or repeatability in a series of measurements. It measures how closely individual measurements or data points agree with each other. A highly precise measurement has very little variation between individual measurements, even if they are far from the true value. Let’s say you have a set of scales, and you are weighing a bag of sugar multiple times. If each time you get a weight close to the actual weight (e.g., 1 kg), you have accuracy. However, if the weights you get are consistently around the same value (e.g., 0.95 kg, 0.96 kg, 0.94 kg), even though they are not exactly 1 kg, you have precision because your measurements are close to each other.

To summarize, accuracy is about how close measurements are to the true value, while precision is about how close multiple measurements are to each other.
It’s possible to have measurements that are accurate but not precise (close to the true value but with a lot of variation), precise but not accurate (consistent but consistently off-target), or both accurate and precise (close to the true value and tightly clustered together). Accuracy and precision are two distinct concepts used to evaluate the quality of measurements or data. They describe different aspects of how close measurements are to a true or expected value and how close multiple measurements are to each other. Here’s a summary of the key differences between accuracy and precision: 1. Definition: □ Accuracy refers to the degree of closeness between a measured or observed value and the true or accepted value. It evaluates how well a measurement represents the actual value. □ Precision refers to the degree of consistency or repeatability in a series of measurements. It assesses how closely individual measurements or data points agree with each other. 2. Focus: □ Accuracy focuses on the relationship between the measured value and the true value. It answers the question, “How close is the measurement to the actual or expected value?” □ Precision focuses on the consistency of measurements among themselves. It answers the question, “How closely do repeated measurements agree with each other?” 3. Measurement Error: □ Accuracy is related to systematic errors, which are consistent and tend to shift measurements away from the true value. Inaccurate measurements result from bias or calibration issues. □ Precision is related to random errors, which introduce variability or scatter in measurements. Precise measurements have low random error because they cluster closely together. 4. Representation: □ Accuracy is often expressed as a measure of the difference between the measured value and the true value, typically in the form of a percentage or a fraction. □ Precision is typically quantified by the spread or standard deviation of a set of measurements. 
It is not directly expressed as a percentage. 5. Example: □ For example, if you are using a ruler to measure the length of a pencil and your ruler consistently reads 0.5 cm longer than the true length of the pencil, your measurements are accurate but not precise because they are consistently off-target. □ If you take multiple measurements of the same object with a ruler, and the measurements vary slightly but cluster closely together around the same value, your measurements are precise but not necessarily accurate if they are consistently offset from the true value. In summary, accuracy is about how close measurements are to the true value, while precision is about how close measurements are to each other. Ideally, good measurements should be both accurate (close to the true value) and precise (consistent and closely clustered together). However, it’s possible to have measurements that are accurate but not precise or precise but not accurate, depending on the sources of error and variability in the measurement process.
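The distinction maps neatly onto two statistics: the mean error measures (in)accuracy, and the standard deviation measures (im)precision. A small Python sketch of the sugar-bag example above (the readings are illustrative):

```python
import statistics

true_value = 1.0                        # kg, the actual weight of the bag
readings = [0.95, 0.96, 0.94, 0.95]     # consistent, but consistently light

# Systematic error (bias) relates to accuracy.
bias = statistics.mean(readings) - true_value

# Random error (scatter) relates to precision.
spread = statistics.stdev(readings)

print(bias)    # about -0.05: inaccurate by roughly 50 g
print(spread)  # about 0.008: tightly clustered, hence precise
```

These readings are precise but not accurate: the spread is tiny while the bias is a consistent 50 g offset, exactly the "consistent but consistently off-target" case described above.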
{"url":"https://chemistryfundamentals.com/understanding-accuracy-precision-measurements/","timestamp":"2024-11-04T07:26:49Z","content_type":"text/html","content_length":"86560","record_id":"<urn:uuid:936a910b-e65a-4c90-9688-b14196f654c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00184.warc.gz"}
Cable attributes Explanation From ITU - Fiber Optic Wiki

Since the geometrical and optical characteristics of fibres given in clause 5 are barely affected by the cabling process, this clause gives recommendations mainly relevant to transmission characteristics of cabled factory lengths. Environmental and test conditions are paramount and are described in the guidelines for test methods.

Optical Fiber Attenuation coefficient

The attenuation coefficient is specified with a maximum value at one or more wavelengths in both the 1310 nm and 1550 nm regions. The optical fibre cable attenuation coefficient values shall not exceed the values found in clause 7. NOTE – The attenuation coefficient may be calculated across a spectrum of wavelengths, based on measurements at a few (3 to 4) predictor wavelengths. This procedure is described in clause 5.4.4 of [ITU-T G.650.1] and an example is given in Appendix III of [ITU-T G.650.1].

Polarization mode dispersion coefficient

Cabled fibre polarization mode dispersion shall be specified on a statistical basis, not on an individual fibre basis. The requirements pertain only to the aspect of the link calculated from cable information. The metrics of the statistical specification are found below. Methods of calculations are found in [b-IEC/TR 61282-3], and are summarized in Appendix IV of [ITU-T G.650.2]. The manufacturer shall supply a PMD link design value, PMDQ, that serves as a statistical upper bound for the PMD coefficient of the concatenated optical fibre cables within a defined possible link of M cable sections. The upper bound is defined in terms of a small probability level, Q, which is the probability that a concatenated PMD coefficient value exceeds PMDQ. For the values of M and Q given in clause 7, the value of PMDQ shall not exceed the maximum PMD coefficient specified in clause 7.
Measurements and specifications on uncabled fibre are necessary, but not sufficient to ensure the cabled fibre specification. The maximum link design value specified on uncabled fibre shall be less than or equal to that specified for the cabled fibre. The ratio of PMD values for uncabled fibre to cabled fibre depends on the details of the cable construction and processing, as well as on the mode coupling condition of the uncabled fibre. [ITU-T G.650.2] recommends a low mode coupling deployment requiring a low tension wrap on a large diameter spool for uncabled fibre PMD measurements. The limits on the distribution of PMD coefficient values can be interpreted as being nearly equivalent to limits on the statistical variation of the differential group delay (DGD), which varies randomly with time and wavelength. When the PMD coefficient distribution is specified for optical fibre cable, equivalent limits on the variation of DGD can be determined. The metrics and values for link DGD distribution limits are found in Appendix I.
NOTE 1 – PMDQ specification would be required only where cables are employed for systems that have the specification of the max DGD, i.e., for example, PMDQ specification would not be applied to systems recommended in [ITU-T G.957].
NOTE 2 – PMDQ should be calculated for various types of cables, and they should usually be calculated using sampled PMD values. The samples would be taken from cables of similar construction.
NOTE 3 – The PMDQ specification should not be applied to short cables such as jumper cables, indoor cables and drop cables.
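Because DGD in the random mode coupling regime grows with the square root of length, the DGD contributions of concatenated sections add in quadrature, so the link PMD coefficient over M equal-length sections is the root-mean-square of the section coefficients. A small sketch of that combination rule (the section values are hypothetical, and a real PMDQ computation follows the full statistical methods of [b-IEC/TR 61282-3], not this simple RMS):

```python
import math

def link_pmd_coefficient(section_coeffs):
    # RMS of per-section PMD coefficients; assumes equal section lengths
    # and random mode coupling (quadrature addition of DGD contributions).
    m = len(section_coeffs)
    return math.sqrt(sum(c * c for c in section_coeffs) / m)

sections = [0.10, 0.05, 0.20, 0.08]   # ps/sqrt(km), hypothetical measured values
print(link_pmd_coefficient(sections))
```

Note how a single high-PMD section dominates the RMS, which is why the specification bounds the statistical distribution rather than the simple average.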
{"url":"http://fowiki.com/b/cable-attributes-explanation-from-itu/","timestamp":"2024-11-12T16:05:23Z","content_type":"text/html","content_length":"99419","record_id":"<urn:uuid:720baad1-0580-4e42-a289-7ea04d619906>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00265.warc.gz"}
Data frame - (Advanced R Programming) - Vocab, Definition, Explanations | Fiveable

A data frame is a two-dimensional, table-like structure in R that holds data in rows and columns, where each column can contain different types of data (such as numbers, strings, or factors). It is a fundamental data structure used for storing datasets, allowing for easy manipulation and analysis of data. This versatile format is essential for various applications in statistics, data analysis, and machine learning.

5 Must Know Facts For Your Next Test
1. Data frames can be created using functions like `data.frame()`, which allows you to specify the columns and their respective data types.
2. Each column in a data frame can be accessed using the `$` operator or by indexing with `[[ ]]`, enabling easy manipulation of specific variables.
3. Data frames support various operations such as subsetting, merging, and reshaping, making them very powerful for data analysis tasks.
4. R provides several functions like `head()`, `summary()`, and `str()` to quickly explore and understand the contents and structure of a data frame.
5. Data frames are often used as input for statistical modeling functions in R, allowing users to fit models using the structured dataset.

Review Questions
• How do data frames enable efficient data manipulation and analysis in R compared to other data structures?
□ Data frames enable efficient data manipulation and analysis because they organize information in rows and columns, which makes it easy to work with datasets. Each column can hold different types of data, allowing for a more flexible representation of real-world information. Functions specific to data frames allow users to easily subset, merge, or modify the dataset without the need for complex indexing methods typical of other structures like matrices.
• Discuss the differences between a data frame and a matrix in R, particularly regarding their use cases.
□ The main difference between a data frame and a matrix lies in their structure: a matrix requires all elements to be of the same type, while a data frame allows for mixed types within its columns. This makes data frames more suitable for handling real-world datasets that often contain different variable types. Data frames are preferred when performing statistical analysis on datasets with categorical variables or when the dataset includes diverse types of measurements.
• Evaluate the role of data frames in statistical modeling within R and how they facilitate this process.
□ Data frames play a critical role in statistical modeling within R by providing a structured format that simplifies the input of datasets into modeling functions. They allow users to easily reference specific variables using intuitive notation like `dataframe$column_name`, making it straightforward to specify predictors and response variables. Moreover, their flexibility supports complex datasets with multiple variable types, enabling robust analyses such as regression or classification models while ensuring accurate representation of relationships among variables.
{"url":"https://library.fiveable.me/key-terms/introduction-to-advanced-programming-in-r/data-frame","timestamp":"2024-11-15T03:02:03Z","content_type":"text/html","content_length":"160119","record_id":"<urn:uuid:9b92bdba-0d48-4fec-ac00-5789c9b349f5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00289.warc.gz"}
Remarks on Ramanujan’s inequality concerning the prime counting function

Article

In this paper we investigate Ramanujan’s inequality concerning the prime counting function, asserting that \(\pi(x^2) < \frac{ex}{\log x}\,\pi\!\left(\frac{x}{e}\right)\) for x sufficiently large. First, we study its sharpness by giving full asymptotic expansions of its left and right hand side expressions. Then, we discuss the structure of Ramanujan’s inequality, by replacing the factor \(\frac{x}{\log x}\) on its right hand side by the factor \(\frac{x}{\log x - h}\) for a given h, and by replacing the numerical factor e by a given positive α. Finally, we introduce and study inequalities analogous to Ramanujan’s inequality.

Volume: Volume 29 (2021), Issue 3
Published on: December 23, 2021
Imported on: May 11, 2022
Keywords: General Mathematics, Mathematics [math]
{"url":"https://cm.episciences.org/9533","timestamp":"2024-11-09T06:10:56Z","content_type":"application/xhtml+xml","content_length":"41749","record_id":"<urn:uuid:683cb69d-d6f4-4b4c-9810-47574cd064e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00340.warc.gz"}
Who Says You Can't Do That? — Trig Identities Ahh, trig identities... a rite of passage for any precalculus student. This is a huge stumbling block for many students, because up until this point, many have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there. Now, all of a sudden, there's no set way of going about things. No "step 1 do this, step 2 do that". Now they have to rely on their intuition and "play" with an identity until they prove that it's correct. And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students. They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other. I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy". Then one of my Honors Precalculus students called me on it. He asked me to come up with an example of a trig non-identity where adding the same thing on both sides would lead to a false proof that the identity was correct. After some thought, I realized that not only couldn't I think of one, but that mathematically, there's no reason that one should exist. To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing. For example, suppose you want to verify the following identity: Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\). 
This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid: "things which are equal to the same thing are equal to each other." But what about doing the same thing to both sides of an equation? There are two important points to realize about what's going on behind the scenes here. The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof because it establishes an if-and-only-if relationship. If that function is not one-to-one, all bets are off. You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output. It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to understand how this applies to solving equations.* A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:

Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions. Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
1. \(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
2. \(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit -- there's a slight issue with it.]
3. If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]

Later on p. 318, the book says: "...there is no new or special logic for proving identities. Identities are equations and all the logic that was discussed with equation-solving applies to them." Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.
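To make the one-to-one point concrete, here's a quick illustration (my own addition, not from the textbook): squaring is not one-to-one, so it can turn a false equation into a true one, while a genuinely reversible step like adding the same number to both sides cannot.

```python
# Squaring is not one-to-one, so "square both sides" is not a reversible step:
# it can turn a false equation into a true one.
a, b = 2, -2
print(a == b)          # False -- the claim 2 = -2 is false
print(a**2 == b**2)    # True  -- yet the squared equation 4 = 4 is true

# A reversible step (adding the same number to both sides) preserves falsehood:
print(a + 5 == b + 5)  # False -- still false, as it should be
```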
The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal. When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides. Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion. To illustrate why domain is so important, consider this example: \[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\] To verify this identity, I'm going to do something that may give you a visceral reaction: I'm going to "cross-multiply". Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\). I claim that this is a perfectly valid step to take, and what's more, it makes the rest of the proof downright easy by reducing to everyone's favorite Pythagorean identity: \[\begin{aligned} (\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\ \cos^2{x} &= 1-\sin^2{x}\\ \sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare \end{aligned}\] "But wait," you ask, "what if \(x=\pi/2\)? Then you're multiplying both sides by zero, and that's certainly not reversible!" True. But if \(x=\pi/2\), then the denominators of both sides of the equation are zero, so the identity isn't even true in the first place. For any value of \(x\) that does not yield a zero in either denominator, though, multiplying both sides of an equation by that value is a reversible operation and therefore completely valid. Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h(x)=0\) (as in the zero function), then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) no matter what. This can even lead to problems in more subtle cases: suppose \(f\) and \(g\) are equal everywhere but a single point \(a\); for example, perhaps \(f(a)=1\) and \(g(a)=2\).
If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal. The real issue here can be explained via a quick foray into higher mathematics. Functions form what's called a ring -- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect. But being able to preserve that if-and-only-if relationship when multiplying a function by both sides of an equation requires a special kind of ring called an integral domain, which means that it's impossible to multiply two nonzero functions together and get a zero function. Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do! But if we move up to the complex numbers (where everything works better!), then the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain. And most of the functions that precalculus students encounter generally turn out to be analytic**: polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric. This means that when proving trigonometric identities, multiplying both sides by the same function is a "safe" operation. So in sum, when proving trigonometric identities, as long as you're careful to only use reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity. Even "cross-multiplying" is fair game, because any input that would make the denominator zero would invalidate the identity anyway.*** Since trigonometric functions are generally "safe" (analytic), we're guaranteed to never run into any issues. 
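As a quick numerical spot-check of the cross-multiplied identity above (a sanity check, not a proof), the two sides agree at any point where neither denominator vanishes:

```python
import math

# Check cos(x)/(1 - sin(x)) == (1 + sin(x))/cos(x) at sample points
# chosen so that neither denominator is zero (away from odd multiples of pi/2).
for x in [0.1, 0.7, -1.2, 2.0, 3.0]:
    lhs = math.cos(x) / (1 - math.sin(x))
    rhs = (1 + math.sin(x)) / math.cos(x)
    assert math.isclose(lhs, rhs, rel_tol=1e-9), x
print("identity holds at all sampled points")
```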
Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other. Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus. What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles. And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible! If they're unsure whether or not a step is valid, have them investigate it until they're convinced one way or the other. If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is. Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world. * Rather, I would say it baffles me, but actually I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how they are so beautifully tied together via transformations. ** Except perhaps at a few points. The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.
*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful. Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.
Robust H[∞] performance optimized by musyn

The robust H[∞] performance of an uncertain system is the smallest value γ such that the I/O gain of the system stays below γ for all modeled uncertainty up to size 1/γ (in normalized units). The musyn function synthesizes a robust controller by minimizing this quantity for the closed-loop system over all possible choices of controller. musynperf computes this quantity for a specified uncertain model. For a detailed discussion of robust H[∞] performance and how it is computed, see Robust Performance Measure for Mu Synthesis.

[gamma,wcu] = musynperf(clp) calculates the robust H[∞] performance for an uncertain closed-loop system clp. The robust H[∞] performance is the smallest value γ for which the peak I/O gain stays below γ for all modeled uncertainty up to 1/γ, in normalized units. For example, a value of γ = 1.25 implies the following:

• The I/O gain of clp remains less than 1.25 as long as the uncertain elements stay within 0.8 normalized units of their nominal values. In other words, for uncertain element values within 0.8 normalized units, the largest possible H[∞] norm is 1.25.
• For some perturbation of size 0.8 normalized units, the peak I/O gain is 1.25.

The peak I/O gain is the maximum I/O gain over all inputs, which is also the peak of the largest singular value over all frequencies and uncertainties. In other words, if Δ represents all possible values of the uncertain parameters in the closed-loop transfer function CLP(jω), then

\[\gamma = \max_{\Delta}\, \max_{\omega}\, \sigma_{\max}\big(CLP(j\omega)\big).\]

The output structure gamma contains upper and lower bounds on the robust H[∞] performance and the critical frequency at which the I/O gain of clp reaches the lower bound. The structure wcu contains the uncertain-element values that drive the peak I/O gain to the lower bound.
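To build intuition for the maximization above, here is a rough conceptual sketch in Python (a hypothetical toy system and a brute-force grid, not the structured-singular-value computation that musynperf actually performs): grid the uncertainty and frequency, and take the largest gain.

```python
import math

# Toy closed loop H(s) = k/(s + 2) with uncertain gain k = 1 + delta,
# |delta| <= 0.8. Approximate  max over delta, omega of |H(j*omega)|
# by brute-force gridding (illustration only).
omegas = [10 ** (-2 + 4 * i / 399) for i in range(400)]  # 0.01 .. 100 rad/s
deltas = [-0.8 + 1.6 * i / 80 for i in range(81)]        # -0.8 .. 0.8

peak = 0.0
for d in deltas:
    k = 1.0 + d
    for w in omegas:
        gain = abs(k) / math.hypot(w, 2.0)  # |k| / |j*w + 2|
        peak = max(peak, gain)

# Worst case is k = 1.8 at low frequency, where |H(j*omega)| -> k/2 = 0.9
print(round(peak, 3))  # 0.9
```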
[gamma,wcu] = musynperf(clp,w) computes the robust H[∞] performance at the frequencies specified by w. • If w is a cell array of the form {wmin,wmax}, then musynperf restricts the computation to the interval between wmin and wmax. • If w is a vector of frequencies, then musynperf computes the H[∞] performance at the specified frequencies only. [gamma,wcu] = musynperf(___,opts) specifies additional options for the computation. Use robOptions to create opts. You can use this syntax with any of the previous input-argument combinations. [gamma,wcu,info] = musynperf(___) returns a structure with additional information about the H[∞] performance values and the perturbations that drive the I/O gain to γ. See info for details about this structure. You can use this syntax with any of the previous input-argument combinations. Reduce Synthesized Controller While Preserving Robust Performance When you use musyn to synthesize an unstructured robust controller, the resulting controller often is of higher order than is necessary to achieve the desired robust performance. One way to mitigate this problem is to perform model reduction, using musynperf to test the robust performance of the reduced-order controller. Create an uncertain model of the control system described in the example "Robust Tuning of Fixed-Structure Controller" on the musyn reference page. 
G = tf(1,[1 -1]);
Wu = 0.25*tf([1/2 1],[1/32 1]);
InputUnc = ultidyn('InputUnc',[1 1]);
Gpert = G*(1+InputUnc*Wu);
Gpert.InputName = 'u';
Gpert.OutputName = 'y1';
Wp = makeweight(100,[1 0.5],0.25);
Wp.InputName = 'y';
Wp.OutputName = 'e';
SumD = sumblk('y = y1 + d');
inputs = {'d','u'};
outputs = {'e','y'};
P = connect(Gpert,Wp,SumD,inputs,outputs);
[K,CLperf] = musyn(P,1,1);

              Robust performance         Fit order
    ----------------------------------------------
    Iter    K Step    Peak MU    D Fit       D
      1     1.345     1.344      1.36        8
      2     0.7923    0.7904     0.7961      4
      3     0.6789    0.6789     0.6857     10
      4     0.6572    0.6572     0.6598      8
      5     0.6538    0.6538     0.6542      8
      6     0.6532    0.6532     0.6533      8

Best achieved robust performance: 0.653

musyn returns an 11th-order controller and the robust H[∞] performance CLperf of the closed-loop system using that controller. The best achieved robust performance of about 0.65 is good, but the controller order is high. Compute reduced-order controllers for orders ranging from 1 to full order. Find the lowest-order controller Klow with performance no worse than 1.05*CLperf, or 5% degradation compared to the full-order controller.

% Kred holds the reduced-order controllers of orders 1..N
% (its construction is omitted in this excerpt)
for k=1:N
    Klow = Kred(:,:,k);
    CL = lft(P,Klow);
    [gamma,~] = musynperf(CL);
    if gamma.UpperBound < 1.05*CLperf
        break
    end
end

To validate the reduced-order controller, compare the robust H[∞] performance of the system using the simplified controller with that of the system using the full-order controller.

CLPlow = lft(P,Klow);
[gammalow,~] = musynperf(CLPlow);

The fourth-order controller achieves very similar robust H[∞] performance to the 11th-order controller returned by musyn.

Input Arguments

clp — Closed-loop uncertain system
uss | ufrd | genss | genfrd

Closed-loop uncertain system, specified as a uss, ufrd, genss, or genfrd model that contains uncertain elements. For genss or genfrd models, musynperf uses the current value of any tunable blocks and folds them into the known (not uncertain) part of the model.
w — Frequencies {wmin,wmax} | vector Frequencies at which to compute robust H[∞] performance, specified as the cell array {wmin,wmax} or as a vector of frequency values. • If w is a cell array of the form {wmin,wmax}, then the function computes the H[∞] performance at frequencies ranging between wmin and wmax. • If w is a vector of frequencies, then the function computes the H[∞] performance at each specified frequency. For example, use logspace to generate a row vector with logarithmically spaced frequency values. Specify frequencies in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the model. opts — Options for H[∞] performance computation robOptions object Options for computation of robust H[∞] performance, specified as robOptions object. Use robOptions to create the options object. The available options include settings that let you: • Extract frequency-dependent H[∞] performance values. • Examine the sensitivity of the H[∞] performance to each uncertain element. • Improve the results of the calculation by setting certain options for the underlying mussv calculation. In particular, setting the option 'MussvOptions' to 'mN' can reduce the gap between the lower bound and upper bound. N is the number of restarts. For more information about all available options, see robOptions. Example: robOptions('Sensitivity','on','MussvOptions','m3') Output Arguments gamma — Robust H[∞] performance and critical frequency Robust H[∞] performance and critical frequency, returned as a structure containing the following fields: Field Description LowerBound Lower bound on the actual robust H[∞] performance γ, returned as a scalar value. The exact value of γ is guaranteed to be no smaller than LowerBound. In other words, some uncertain-element values of magnitude 1/LowerBound exist for which the I/O gain of clp reaches LowerBound. The function returns one such instance in wcu. UpperBound Upper bound on the actual robust H[∞] performance, returned as a scalar value. 
The exact value is guaranteed to be no larger than UpperBound. In other words, for all modeled uncertainty with normalized magnitude up to 1/UpperBound, the peak I/O gain of clp is less than UpperBound.

CriticalFrequency — Frequency at which the I/O gain reaches LowerBound, in rad/TimeUnit, where TimeUnit is the TimeUnit property of clp.

Use uscale or normalized2actual to convert the normalized uncertainty values 1/LowerBound or 1/UpperBound to actual deviations from nominal values.

wcu — Perturbations driving I/O gain to gamma.LowerBound

Perturbations driving the I/O gain to gamma.LowerBound, returned as a structure whose fields are the names of the uncertain elements of clp. Each field contains the actual value of the corresponding uncertain element. For example, if clp includes an uncertain matrix M and SISO uncertain dynamics delta, then wcu.M is a numeric matrix and wcu.delta is a SISO state-space model. Use usubs(clp,wcu) to substitute these values for the uncertain elements in clp and obtain the corresponding dynamic system. This system has a peak gain of gamma.LowerBound. Use actual2normalized to convert these actual uncertainty values to the normalized units in which 1/gamma.LowerBound or 1/gamma.UpperBound are expressed.

info — Additional information about γ values

Additional information about the γ values, returned as a structure with the following fields:

Frequency — Frequency points at which musynperf returns γ values, returned as a vector.
• If the 'VaryFrequency' option of robOptions is 'off', then info.Frequency is the critical frequency, the frequency at which the I/O gain reaches gamma.LowerBound. If the smallest lower bound and the smallest upper bound on γ occur at different frequencies, then info.Frequency is a vector containing these two frequencies.
• If the 'VaryFrequency' option of robOptions is 'on', then info.Frequency contains the frequencies selected by musynperf. These frequencies are guaranteed to include the frequency at which the peak gain occurs.
• If you specify a vector of frequencies w at which to compute γ, then info.Frequency = w. When you specify a frequency vector, these frequencies are not guaranteed to include the frequency at which the peak gain occurs.
The 'VaryFrequency' option is meaningful only for uss and genss models. musynperf ignores the option for ufrd and genfrd models.

Bounds — Lower and upper bounds on the actual γ values, returned as an array. info.Bounds(:,1) contains the lower bound at each corresponding frequency in info.Frequency, and info.Bounds(:,2) contains the corresponding upper bounds.

WorstPerturbation — Smallest perturbations at each frequency point in info.Frequency, returned as a structure array. The fields of info.WorstPerturbation are the names of the uncertain elements in clp. Each field contains the value of the corresponding element that drives the I/O gain to the corresponding lower bound at each frequency. For example, if clp includes an uncertain parameter p and SISO uncertain dynamics delta, then info.WorstPerturbation.p is a collection of numeric values and info.WorstPerturbation.delta is a collection of SISO state-space models.

Sensitivity — Sensitivity of γ to each uncertain element, returned as a structure when the 'Sensitivity' option of robOptions is 'on'. The fields of info.Sensitivity are the names of the uncertain elements in clp. Each field contains a percentage that measures how much the uncertainty in the corresponding element affects γ. For example, if info.Sensitivity.p is 50, then a given fractional change in the uncertainty range of p causes half as much fractional change in γ. If the 'Sensitivity' option of robOptions is 'off' (the default setting), then info.Sensitivity is NaN.

Version History
Introduced in R2019b
Data Structures in Python - Alps Academy

Ultimate Guide to Data Structures in Python

Data structures can hold multiple values such as numbers, letters, characters, even lists. In programming the most common data structure is the array, and this is also the primary data structure in python, although it is called a list. We explain the wide range of data structures in python and in computer science. Online videos and texts usually cover the same material, such as job-interview preparation, but we want to present something a bit more interesting. This article goes from beginner to advanced, so the early parts will be of interest to readers new to python, whilst the later parts should be of interest to anyone wishing to learn about data structures and how python works. There are built-in data structures in python like sets, tuples and dictionaries, and strings contain multiple characters. We compare these to lists. In computer science we use data structures like stacks, queues, priority queues, hash tables and linked lists. These are often implemented using lists or a built-in function in python. We also have more complex data structures in computer science such as types of trees or graphs. Python is also great for data science, where we use array-like data structures such as numpy arrays and series in pandas. We also have data frames, a table-like data structure consisting of multiple series, in pandas. There are also other built-in options in python such as collections. Finally, in most universities we study data structures together with algorithms. These consist mainly of search-based or sorting algorithms. As efficiency and complexity are concerns, we need some understanding of these for our data structures. So how we store and access values, and how this affects both memory and time, is a concern for the data structure and the common operations applied to it, such as find, select, update, insert and delete.
This is an extensive amount of material and you may find it a valuable reference in future studies. Please familiarize yourself with the table of contents and the links to the various sections.

Python Data Structures

There are built-in data structures in python called lists, sets, tuples and dictionaries, in addition to strings that hold multiple characters. Other data structures can be created using lists, are available in a python library that can be added to your python code using import, or can be created using python code involving classes and objects. To begin we will look at the main python data structures, initially in an easy-to-understand manner suitable for beginners, and then progress to more detail on how lists work, for example. Other data structures in computer science, data science and in python will follow below.

Lists in Python for beginners

The most common data structure in python is the list, which stores multiple values in the same way a list does in real life and is similar to arrays in other programming languages. If you are a beginner then lists can be used in most of your programs when you need to store more than one item. Lists allow items to be easily added, updated, deleted, accessed and printed. Let's see a simple list of five numbers. A list can be considered a series of boxes that can contain values. Now let's see how we can perform actions on the list below. We give the list a name (e.g. my_list) and assign the name to the list by using the equals sign and the list in square brackets, with the values separated by commas. Here is our new list:

my_list = [20, 40, 5, 15, 20]

We will use this list in all the examples below, even after we add, update or delete elements, so that it is easier to understand each operation.
The list does not have to be in order, it can contain the same value more than once, and it does not have to be numbers: it could be strings, floating point numbers, a mixture of types, or even other lists.

List Operations

There are many list functions that can be applied to the list; let's start with the basic operations. To add an item to the list we use the append() function. This adds the new value to the end of the list. To print the list in full we can simply use print with the list name in the brackets, as we do with single variables.

my_list = [20, 40, 5, 15, 20]
my_list.append(25)
print(my_list)
output: [20, 40, 5, 15, 20, 25]

To access and update an element in the list we use the index. The index is a number that represents the position of a value in the list, such as how many boxes it is from the beginning. The first element, in the first position, is zero (0) boxes or places from the beginning, so we use the index number 0. The next item in the list is at the beginning +1, therefore we use the index 1 to access that element.

print(my_list[0])
output: 20
print(my_list[3])
output: 15

The index number continues to the end of the list. In our example of a list we can access the first element using the index 0 in square brackets after the list name. If we print my_list[0], the first list element, then the output will be the value in the first position, which is 20. The item in the fourth position, which is 3 places from the beginning, will have an index of 3. If we print my_list[3] the output is the value 15. For reference, if you enter a negative number then the index will start at the end of the list, so [-1] is the index of the last number, [-2] the second last number, etc. We can change a value using the index to select the item we wish to change, and assign it the new value. Let's change the second value in our list to 30.

my_list = [20, 40, 5, 15, 20]
my_list[1] = 30
print(my_list)
output: [20, 30, 5, 15, 20]

Finally, we can delete or remove a list item. There are two approaches.
To remove a list item using its value, use remove(); to delete an item by its index, use pop().

my_list = [20, 40, 5, 15, 20]
my_list.remove(5)
print(my_list)
output: [20, 40, 15, 20]

If we have multiple items with the same value then it will remove the first instance of that value, for example:

my_list = [20, 40, 5, 15, 20]
my_list.remove(20)
print(my_list)
output: [40, 5, 15, 20]

To delete a number from within the list we can use pop() with the index inside the pop function brackets.

my_list = [20, 40, 5, 15, 20]
my_list.pop(3)
print(my_list)
output: [20, 40, 5, 20]

If we wish to remove the last item in the list we can use pop() without the index number.

my_list = [20, 40, 5, 15, 20]
my_list.pop()
print(my_list)
output: [20, 40, 5, 15]

There are many list functions. If you are a beginner and would like to know more about these functions then watch this video on YouTube called 'Python Coder | # Beginner | Lists'

Print List Elements

So far we have printed the entire list, but we can print each element either by using the index as we have seen or by using a for loop.

for i in my_list:
    print(i)

To print the list items next to their indexes we can use range(). Range has the advantage of starting from 0, so we only need to know the length of the list. Even if the list size changes, the use of the len() function inside range will still result in the full list being printed.

for i in range(len(my_list)):
    print(i, my_list[i])

Slice a List

If you wish to have a subset of your list then you use the start and end positions to slice the list. For example, we want to take our original list of five numbers and slice the middle three numbers into a new list.

new_list = my_list[1:4]
output: [40, 5, 15]

We use the start position or index of 1, then use a colon, then use the end position of 4 (the item at the end index itself is not included). If you leave either the start or end position empty, the list will be sliced from the start or to the end as follows:

new_list = my_list[:4]
output: [20, 40, 5, 15]

new_list = my_list[1:]
output: [40, 5, 15, 20]

Sets in Python

Sets, as in mathematics, are a collection of unique elements without order.
Usual set operations (union, intersection, etc.) are available in python. Sets are very different from lists in python, and unless you are specifically dealing with mathematical sets it is more likely that a list is the data structure for your needs. A common use of sets is to convert a list to a set to remove duplicates and then to convert the set back to a list. This is not a suggested approach if efficiency and memory use is a concern, but it does have interesting consequences. Here is an example:

alist = [10, 20, 30, 30, 40, 40, 50]
alist = list(set(alist))
print(alist)
Output: [40, 10, 50, 20, 30]

The duplicates are removed but the list has changed order. Sets not only have unique elements but are not ordered.

Lists in Python for non-beginners

Data structures are often taught with algorithms, and the complexity of the operations applied on the data structures is important. For example, with a list we can add an item to the end of a list in one operation, but what if we want to add or remove an item not at the end, at the beginning for example? This is more computationally expensive. Consider removing the first item:

my_list = [20, 40, 5, 15, 20]
my_list.remove(20)
print(my_list)
output: [40, 5, 15, 20]

In the remove example, the second item with the value 40 is moved to the first position, the third item with the value 5 is moved to the second position, 15 moves one place to the index 2 position, and the last item is now at the index 3 position, not index 4. As a beginner this is not important, but if you write programs with larger lists or change list values often, then you may not want this operational expense.

Sort a List in Python

The numbers are not necessarily in order in a python list; this could be desired.
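As an aside (using the standard library, not from the original article), the usual answer to the cost of front-of-list operations described above is collections.deque, which supports O(1) appends and pops at both ends:

```python
from collections import deque

my_list = [20, 40, 5, 15, 20]
d = deque(my_list)

# appendleft() and popleft() are O(1) for a deque, whereas
# list.insert(0, x) and list.pop(0) are O(n) because every element shifts.
d.appendleft(10)
print(list(d))  # [10, 20, 40, 5, 15, 20]
d.popleft()     # removes 10
d.popleft()     # removes 20
print(list(d))  # [40, 5, 15, 20]
```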
But if access time is important then the list could be sorted and elements located more easily. This depends on use, as the frequent expense of sorting the list would have to be worth it for the access benefits. Python has a sort() function for lists; while not always the fastest choice, it is particularly effective on lists that are already partially ordered.

my_list = [20, 40, 5, 15, 20]
my_list.sort()
print(my_list)
output: [5, 15, 20, 20, 40]

The following definition explains the sorting algorithm that is used in the sort function in python, called Timsort: Timsort (derived from merge sort and insertion sort) was introduced in 2002 and while slower than quicksort for random data, Timsort performs better on ordered data. [1]

Lists vs Arrays

Python lists are not the same as arrays. Arrays in other programming languages are initially declared and they store items of the same type. In this approach the array length is known, and each position and its storage requirement are known in advance. But python lists can contain different data types or even other nested lists. How? Python lists do not store the values directly. Instead, in python the lists store a pointer to each value in memory. In this approach the position of each value from the beginning is known, as there is a pointer in each place. The value is accessed using the pointer when required. This approach also affects the memory allocation of the python list, seen below. There are arrays in python that are similar to arrays from other languages, but, in python, we use lists.

List Memory Allocation

When the list is created it has a limited amount of storage (up to 4 elements), and storage is increased at various stages as the number of elements in the list grows.
The size of the list in bytes, and the stages of the increases with respect to elements, are as follows:

• 224 bytes: empty list
• 736 bytes: list over 4 elements
• 2272 bytes: list over 16 elements
• 8416 bytes: list over 76 elements
• 32992 bytes: list over 306 elements

Built-in Python Data Structures for non-beginners

Lists are the predominant built-in data structure in python, but we also use sets, tuples, dictionaries and even strings. Tuples are not mutable and therefore are different from lists, although they share several functions that use the same syntax. As tuples have little functionality they are easier to store and useful for temporary variables, for example. As tuples can't be changed and list functions like append() are not available, if a user wants to add an element to a tuple then the '+' operator can be used to add another tuple; this is called concatenation. Sets are implemented in python using a hash table, a popular technique to perform insertion, deletion, and membership lookup in O(1) on average. Hash tables are explained in detail later. "Python Sets are implemented using dictionary with dummy variables, where key beings the members set with greater optimizations to the time complexity." [2] There is also the frozenset, an immutable set type, explained in the collections section.

Set Time Complexity

Set operations have the following complexity:
• O(1) on average for search 'in' set (O(n) in the worst case)
• O(n) for set difference (x, y), where n is the length of set x
• O(z), where z is length of set x + length of set y, for set union (x, y)
• O(z), where z is length of set x * length of set y, for set intersection (x, y) in the worst case (the average case is O(min(len(x), len(y))))

Set Memory Allocation

Sets have a similar style of memory use to lists, with increases at different size stages, but use fewer bytes. There are more frequent increases but the size remains smaller than a list with an equal number of items. An empty set uses 64 bytes compared to 224 bytes for an empty list.
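The staged allocation growth described in this section can be observed directly with sys.getsizeof; here is a sketch for lists (the exact byte counts vary across CPython versions and platforms, so treat the figures in this section as indicative):

```python
import sys

# Watch a list's reported size grow in jumps as CPython over-allocates:
# the size stays flat for several appends, then steps up.
lst = []
sizes = [sys.getsizeof(lst)]
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))
print(sizes)  # non-decreasing, with repeated values between growth steps
```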
The addition of one element increases a set's size to 96 bytes, which then increases to 128 bytes at 5 elements. Four more elements take the size to 192 bytes, and 8 more after that to 264 bytes, when there are 17 elements.

Data Structure Code Examples

A comprehensive list of functions relating to the data structures featured in this article. This can act as a great resource for students and learners everywhere for all your data structure needs.

Lists in Python

There are 11 Python list methods and several common built-in functions that can be applied to lists. We provide two examples for pop(), as it can remove an item by index or, by default, remove the last item.

List Functions

append()
alist = ['A', 'B', 'C']
alist.append('d')
print (alist)
Output: ['A', 'B', 'C', 'd']

extend()
alist = ['A', 'B', 'C']
blist = ['d', 'e', 'f']
alist.extend(blist)
print (alist)
Output: ['A', 'B', 'C', 'd', 'e', 'f']

insert()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
alist.insert(3, 'z')
print (alist)
Output: ['A', 'B', 'C', 'z', 'D', 'E', 'F']

remove()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
alist.remove('E')
print (alist)
Output: ['A', 'B', 'C', 'D', 'F']

pop() – the default removes the last item
alist = ['A', 'B', 'C', 'D', 'E', 'F']
x = alist.pop()
print (alist)
Output: ['A', 'B', 'C', 'D', 'E']

pop(index) – removes the item at the given index and returns it
alist = ['A', 'B', 'C', 'D', 'E', 'F']
x = alist.pop(2)
print (x)
Output: C

clear()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
alist.clear()
print (alist)
Output: []

index()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
x = alist.index('C')
print (x)
Output: 2

count()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
x = alist.count('B')
print (x)
Output: 1

sort()
alist = ['E', 'A', 'C', 'D', 'B']
alist.sort()
print (alist)
Output: ['A', 'B', 'C', 'D', 'E']

reverse()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
alist.reverse()
print (alist)
Output: ['F', 'E', 'D', 'C', 'B', 'A']

copy()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
x = alist.copy()
print (x)
Output: ['A', 'B', 'C', 'D', 'E', 'F']

Functions Applied on Lists

len()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
print (len(alist))
Output: 6

type()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
print (type(alist))
Output: <class 'list'>

min()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
print (min(alist))
Output: A

max()
alist = ['A', 'B', 'C', 'D', 'E', 'F']
print (max(alist))
Output: F
sum()
alist = [1, 2, 3]
print (sum(alist))
Output: 6

Operations Applied on Lists

in
alist = ['A', 'B', 'C', 'D', 'E', 'F']
if 'B' in alist: print ('yes')
Output: yes

alist = ['A', 'B', 'C', 'D', 'E', 'F']
if 'b' in alist: print ('yes')
no output

not in
alist = ['A', 'B', 'C', 'D', 'E', 'F']
if 'b' not in alist: print ('yes')
Output: yes

+ operator
alist = ['A', 'B', 'C']
blist = ['d', 'e', 'f']
alist = alist + blist
print (alist)
Output: ['A', 'B', 'C', 'd', 'e', 'f']

* operator
alist = [1, 2, 3]
print (alist*2)
Output: [1, 2, 3, 1, 2, 3]

Apply keyword del to Delete a List
alist = [1, 2, 3]
del alist
print (alist)
Output: NameError: name 'alist' is not defined

Tuples in Python

List functions that can be applied to tuples use the same syntax, but not all list functions are available for tuples. List methods that are not available for tuples:

• append()
• extend()
• insert()
• remove()
• pop()
• clear()
• copy()

sort() is also unavailable as a method, but tuples can be passed to the built-in sorted() function, and reversed() behaves differently. Using the sorted() function on a tuple returns a list. Online you may find references to a tuple function called cmp() that compares tuples in Python 2, but it does not exist in Python 3.
Tuple Functions

atup = ('A', 'B', 'C', 'D', 'E', 'F')
print (atup)
Output: ('A', 'B', 'C', 'D', 'E', 'F')

index()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
x = atup.index('C')
print (x)
Output: 2

count()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
x = atup.count('B')
print (x)
Output: 1

Tuple Functions (different from list functions)

sorted() – output is a list
atup = ('E', 'A', 'C', 'D', 'B')
x = sorted(atup)
print (x)
Output: ['A', 'B', 'C', 'D', 'E']

reversed()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
x = reversed(atup)
print (tuple(x))
Output: ('F', 'E', 'D', 'C', 'B', 'A')

Functions Applied on Tuples

len()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
print (len(atup))
Output: 6

type()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
print (type(atup))
Output: <class 'tuple'>

min()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
print (min(atup))
Output: A

max()
atup = ('A', 'B', 'C', 'D', 'E', 'F')
print (max(atup))
Output: F

sum()
atup = (1, 3, 5)
print (sum(atup))
Output: 9

Operations Applied on Tuples

in
atup = ('A', 'B', 'C', 'D', 'E', 'F')
if 'B' in atup: print ('yes')
Output: yes

atup = ('A', 'B', 'C', 'D', 'E', 'F')
if 'b' in atup: print ('yes')
no output

not in
atup = ('A', 'B', 'C', 'D', 'E', 'F')
if 'b' not in atup: print ('yes')
Output: yes

+ operator
atup = ('A', 'B', 'C')
btup = ('d', 'e', 'f')
atup = atup + btup
print (atup)
Output: ('A', 'B', 'C', 'd', 'e', 'f')

* operator
atup = (1, 2, 3)
print (atup*2)
Output: (1, 2, 3, 1, 2, 3)

Apply keyword del to Delete a Tuple
atup = (1, 2, 3)
del atup
print (atup)
Output: NameError: name 'atup' is not defined

Sets in Python

create a set
set_X = {10, 20, 30}
print (type(set_X))
Output: <class 'set'>

len()
set_X = {10, 20, 30}
print (len(set_X))
Output: 3

in
set_X = {10, 20, 30}
print (10 in set_X)
Output: True

not in
set_X = {10, 20, 30}
print (10 not in set_X)
Output: False

Set Functions

add()
set_X = {10, 20, 30}
set_X.add(40)
print (set_X)
Output: {40, 10, 20, 30}

union()
x = {"a", "b", "c"}
y = {"b", "c", "d"}
z = x.union(y)
print (z)
Output: {'a', 'b', 'd', 'c'}

intersection()
x = {"a", "b", "c"}
y = {"b", "c", "d"}
z = x.intersection(y)
print (z)
Output: {'b', 'c'}

difference()
x = {"a", "b", "c"}
y = {"b", "c", "d"}
print (x.difference(y))
print (y.difference(x))
Output: {'a'} and {'d'}

difference_update() – updates x in place
x = {"a", "b", "c"}
y = {"b", "c", "d"}
x.difference_update(y)
print (x)
Output: {'a'}

issubset()
set_X = {10, 20, 30}
t = {10, 20, 30, 40, 50}
print (set_X.issubset(t))
Output: True

t = {10, 20}
print (set_X.issubset(t))
Output: False

isdisjoint()
x = {"a", "b", "c"}
y = {"d", "e"}
print (x.isdisjoint(y))
Output: True

symmetric_difference()
set_X = {10, 20, 30}
t = {10, 20, 30, 40, 50}
print (set_X.symmetric_difference(t))
Output: {40, 50}

update()
x = {"a", "b", "c"}
y = {"b", "c", "d"}
x.update(y)
print (x)
Output: {'a', 'b', 'd', 'c'}

remove()
set_X = {10, 20, 30}
set_X.remove(10)
print (set_X)
Output: {20, 30}

pop() – removes an arbitrary item
set_X = {10, 20, 30}
set_X.pop()
print (set_X)
Output: {20, 30}

discard()
set_X = {10, 20, 30}
x = 20
set_X.discard(x)
print (set_X)
Output: {10, 30}

clear()
set_X = {10, 20, 30}
set_X.clear()
print (set_X)
Output: set()

copy()
set_X = {10, 20, 30}
i = set_X.copy()
print (i)
Output: {10, 20, 30}

Dictionaries in Python

Although several dictionary functions have already been seen in other data structures, we mainly use the primary functions values(), keys(), items() and get(). Here is the full list of dictionary functions:

• items()
• keys()
• values()
• fromkeys()
• get()
• pop()
• popitem()
• clear()
• copy()
• update()
• setdefault()

Dictionary Functions

Here is a list of dictionary functions and common operations that are applied on dictionaries.

items()
adict = {1: 'a', 7: 'd', 4: 'z'}
for i in adict.items(): print (i)
Output:
(1, 'a')
(7, 'd')
(4, 'z')

keys()
adict = {1: 'a', 7: 'd', 4: 'z'}
for i in adict.keys(): print (i)
Output:
1
7
4

values()
adict = {1: 'a', 7: 'd', 4: 'z'}
for i in adict.values(): print (i)
Output:
a
d
z

Order a Dictionary

To change the order of the keys or values of a dictionary it is possible to use the sorted() and reversed() functions.
Here is a series of ways to order a dictionary:

adict = {1: 'a', 7: 'd', 4: 'z'}

#ordered in reverse of insertion
rdict = dict(reversed(list(adict.items())))
for k, v in rdict.items(): print (k, v)
4 z
7 d
1 a

#ordered by keys
for k in sorted(adict.keys()): print (k, adict[k])
1 a
4 z
7 d

#ordered by values
for v in sorted(adict.values()): print (v)
a
d
z

#ordered by keys, as a new dictionary
sdict = dict(sorted(adict.items()))
for k, v in sdict.items(): print (k, v)
1 a
4 z
7 d

#ordered in reverse by key
for i in sorted(adict.keys(), reverse=True): print (i, adict[i])
7 d
4 z
1 a

#ordered by value
for k in sorted(adict, key=adict.get): print (k, adict[k])
1 a
7 d
4 z

#ordered in reverse by value
for k in sorted(adict, key=adict.get, reverse=True): print (k, adict[k])
4 z
7 d
1 a

[1] Is There a Sorting Algorithm Faster than Quicksort and Timsort?, Slashdot, at https://developers.slashdot.org/story/20/07/25/0050202/is-there-a-sorting-algorithm-faster-than-quicksort-and-timsort, accessed 13th July 2022

[2] Sets in Python, Geeks for Geeks, at https://www.geeksforgeeks.org/sets-in-python/?ref=lbp, accessed 13th July 2022
Random Curiosities (II)

Again a full reading list that I've compiled; the most interesting items are selected for the 'random curiosities' of this week. It seems to me that these lists will surely help me to keep track of my readings and interests over time, and it will definitely be interesting to look back.

• The first item on the list is a great documentary I've watched recently called 'The Emergence of Network Science', featuring Steven Strogatz. It is mainly focused on graphs, especially the famous 'six degrees of separation' problem, and introduces the newly emerging science of networks. I've met many new names from the film, and one of the major ones is A. Barabasi and his wonderful new book 'Network Science'. It is published online and it seems like a great resource. I will definitely check it out before the 'Applied Graph Theory' course in Nesin Matematik Village, in which I'll be participating this summer.

• A game-theoretical ecological article called 'The hawk–dove game in a sexually reproducing species explains a colourful polymorphism of an endangered bird' in the recent issue of Proceedings of the Royal Society B. It examines the famous 'hawk–dove game' behaviour in Gouldian finch populations and derives the conditions for the evolutionarily beneficial polymorphism to be retained in the population. Link for the article.

• A recent article by C. Veller and M. Nowak titled 'Extended flowering intervals of bamboos evolved by discrete multiplication', which is about the mathematical pattern involving the flowering of bamboo plants (link for the article). There is a nice overview of the paper called 'Bamboo Mathematicians' on Carl Zimmer's blog.

• Another interesting find from Strogatz's Twitter feed: 'Winning at Rock Paper Scissors', a great Numberphile video discussing a 'real-life' strategy for the renowned Rock-Paper-Scissors game using game theory.
In relation to that, I've discovered an interesting article by Strogatz, namely 'Nonlinear Dynamics of the Rock-Paper-Scissors Game with Mutations'...

• From the New York Times, an article on a mathematical population model of blue crabs in Chesapeake Bay which is based on field data: Mathematicians and Blue Crabs

• Another interesting article on models of evolutionary mechanisms, from Yale, which proposes that the "house of cards" model (which holds that mutations with large effects effectively reshuffle the genomic deck) explains evolutionary processes better than the theory that species undergo the accumulation of many mutations with small effects. Further reading: In evolution, 'house of cards' model wins

• Quanta Magazine has published an interesting article about genetically identical flies and their diverging individual behaviours, studied through genetic and environmental variations. Details in the article: 'Animal Copies Reveal Roots of Individuality'

• Nautilus is running a very interesting theme this month: 'Error', with the sub-title "How does the scientist negotiate the hall of mirrors and come out clutching the truth?..." Worth checking out.

• An inspiring read from the great mathematician V. I. Arnold, 'On Mathematics Education', from the archives of Dynamical Systems Magazine.

• Book find of the week: A Mathematical Nature Walk by John A. Adam. Full of wonderful questions about various natural phenomena and inspiring models and answers for them. Definitely a gem. (Book
The return on Luccasen Corp. next year depends on how strong the economy is:

Economy       Probability   Return
very strong       0.2         25%
normal            0.4          8%
weak              0.3          4%
recession         0.1        -10%

What is the expected return (average) based on this information? What is the standard deviation of these returns?

Answer:
Expected return: 8.40%
Standard deviation of the returns: 9.74%
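The two answers can be reproduced in a few lines of Python; the probabilities and returns are taken directly from the table above:

```python
import math

# (probability, return in percent) for each economic scenario
scenarios = [(0.2, 25), (0.4, 8), (0.3, 4), (0.1, -10)]

# Expected return: probability-weighted average of the returns.
expected = sum(p * r for p, r in scenarios)

# Variance: probability-weighted squared deviations from the mean.
variance = sum(p * (r - expected) ** 2 for p, r in scenarios)
std_dev = math.sqrt(variance)

print(f"Expected return: {expected:.2f}%")    # 8.40%
print(f"Standard deviation: {std_dev:.2f}%")  # 9.74%
```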
Commutant hypercyclicity of Hilbert space operators

An operator $T$ on a Hilbert space $H$ is commutant hypercyclic if there is a vector $x$ in $H$ such that the set $\{Sx: TS=ST\}$ is dense in $H$. We prove that operators on a finite dimensional Hilbert space, a rich class of weighted shift operators, isometries, exponentially isometries and idempotents are all commutant hypercyclic. We then discuss commutant hypercyclicity of $2\times 2$ operator matrices. Moreover, for each integer $n \geq 2$, we give a commutant hypercyclic nilpotent operator of order $n$ on an infinite dimensional Hilbert space. Finally, we study commutant transitivity of operators and give necessary and sufficient conditions for a vector to be a commutant hypercyclic vector.
Algebra 1 | MathMedics+

This service is not available; please contact for more information.

Algebra 1

Prerequisites: Saxon Algebra 1/2, Saxon 8/7, or other PreAlgebra equivalent

Service Description

This is a moderately-paced course designed for students who are well prepared for Algebra 1 and are able to maintain a brisk pace and reasonable weekly assignments. This class will provide one credit of algebra, and students can list "Algebra 1" on their transcripts. Algebra 1 covers all topics in a first-year algebra course, from proofs, statistics, and probability to algebra-based, real-world problems. With Algebra 1, students employ higher-order thinking skills, real-world applications, reasoning, and justification to make connections to math strands. Algebra 1 focuses on algebraic thinking and multiple representations (verbal, numeric, symbolic, and graphical).

REQUIRED CLASS MATERIALS
Saxon Algebra 1 Homeschool Kit, 3rd edition

OPTIONAL CLASS MATERIALS
Saxon Algebra 1 Tests and Worksheets, 3rd edition (if you choose not to purchase the Homeschool Kit)

Contact Details
• USA + 470-242-1701
Tech Mahindra Logical Reasoning Interview Questions with Answers Each and every person in this world are having "constant mind"...DIFFERENTIATE others mind and INTEGRATE urs.MATHEMATICS SIMPLY MAKES YOU UNSTOPPABLE !!! "If you able to solve the problems in MATHS, then you also able to solve the problems in your LIFE" (Maths is a great Challenger) Thanks m4 maths for helping to get placed in several companies. I must recommend this website for placement preparations.
Section Properties & General Stress Analysis Calculator | Structural Engineering

Section Properties & General Stress Analysis Calculators

Calculate section properties and perform general stress analysis for free. Enter the values below to calculate section properties and general stress analysis.

Download & save the software to use (requires Microsoft Excel).

Section properties refer to the geometric properties of a structural member that are required for calculating stresses and deflections. These properties include area, moment of inertia, radius of gyration, section modulus, and centroid. Accurate calculations of these properties ensure that the design meets the required safety criteria.

Area: The total cross-sectional area of a member is essential in determining its strength based on its material properties, such as yield strength and ultimate strength.

Moment of Inertia: The moment of inertia is a measure of a member's resistance to bending. A high moment of inertia means the member is more resistant to bending and can carry more load.

Radius of Gyration: The radius of gyration is the distance from the centroidal axis at which the whole cross-sectional area could be concentrated while keeping the same moment of inertia as the actual cross-section.

Section Modulus: The section modulus is another important property used to determine a member's resistance to bending. It is the ratio of the moment of inertia to the distance from the neutral axis to the most distant point of the section.

Centroid: The centroid is the point in a cross-section where the area is evenly distributed. This property is used to calculate the moment of a force on the member.
General stress analysis calculator: A general stress analysis calculator is a tool that calculates the stresses and deflections of a structural member under specific loading conditions. Engineers use these calculators to verify the design of a structure and ensure its safety. The calculators use various inputs, including the material properties of the member, the cross-sectional properties, and the loading conditions.

There are many online calculators available for stress analysis, and many are free to use. These calculators have made it easier for engineers to determine the strength and capacity of a structure without having to perform extensive calculations manually.

Understanding section properties and stress analysis is crucial for designing safe and efficient structures. The accurate calculation of these properties ensures that a structure meets the required safety criteria. With the availability of online calculators, engineers can perform these calculations easily and quickly, saving time and effort. These tools have revolutionized the field of structural engineering, making it easier for engineers to deliver complex structures safely and efficiently.
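As a concrete illustration of the definitions above (not the calculator's own code), the properties of a solid rectangular cross-section of width b and depth h follow from the standard formulas A = b·h, I = b·h³/12, c = h/2, S = I/c, and r = √(I/A):

```python
import math

def rectangle_section(b: float, h: float):
    """Section properties of a solid rectangle: width b, depth h."""
    area = b * h                          # cross-sectional area
    inertia = b * h ** 3 / 12             # moment of inertia about the centroidal axis
    c = h / 2                             # distance from neutral axis to extreme fiber
    modulus = inertia / c                 # section modulus S = I / c
    gyration = math.sqrt(inertia / area)  # radius of gyration r = sqrt(I / A)
    return area, inertia, modulus, gyration

# Example: a 100 mm x 200 mm rectangle, working in metres.
A, I, S, r = rectangle_section(0.1, 0.2)
print(A, I, S, r)
```

Note that for a rectangle S reduces to b·h²/6 and r to h/√12, which is a quick way to sanity-check the output.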
Integer operations

Number and Operations (NCTM)
• Understand numbers, ways of representing numbers, relationships among numbers, and number systems.
  – Develop meaning for integers and represent and compare quantities with them.
• Understand meanings of operations and how they relate to one another.
  – Understand the meaning and effects of arithmetic operations with fractions, decimals, and integers.
• Compute fluently and make reasonable estimates.
  – Develop and analyze algorithms for computing with fractions, decimals, and integers and develop fluency in their use.

Algebra (NCTM)
• Represent and analyze mathematical situations and structures using algebraic symbols.
  – Recognize and generate equivalent forms for simple algebraic expressions and solve linear equations
Combinatorics is a branch of mathematics that deals with problems in selecting, arranging, and combining objects chosen from a given set. To take simple examples, questions such as how many ways can a line of five people be ordered, or how many possible hands there are in a game of bridge, or how many ways there are to choose the six numbers in a lottery that uses the numbers from 1 to 49, fall under the field of combinatorics. Also available at Math Lair is a Combinatorics Worksheet.
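The three counting questions mentioned above can be answered directly with Python's math module; math.comb computes binomial coefficients ("n choose k"):

```python
import math

# Orderings of a line of five people: 5! permutations.
line_orderings = math.factorial(5)
print(line_orderings)      # 120

# Lottery: choose 6 numbers from 1..49, order irrelevant.
lottery_tickets = math.comb(49, 6)
print(lottery_tickets)     # 13983816

# Bridge hands: choose 13 cards from a 52-card deck.
bridge_hands = math.comb(52, 13)
print(bridge_hands)        # 635013559600
```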
The candela, symbol cd, is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy of monochromatic radiation of frequency 540 × 10^12 Hz, K[cd], to be 683 when expressed in the unit lm·W^-1.

Abbreviation: cd

Reference: Bureau International des Poids et Mesures 2019 The International System of Units (SI)

Quantity                  | Symbol for quantity Q | Symbol for dimension | Name of abstract unit u[Q] | Symbol for unit u[Q] [*]
elementary entity ^*,$    | U[X]                  | U                    | elementary unit            | x
count ^*,$                | N[X] = N·U[X]         | X                    | elementary unit            | x
amount of substance ^*,§  | n[X] = N[X]·N[A]^-1   | N                    | mole                       | mol
charge ^*,‡               | Q[el] = z[X]·e·N[X]   | I·T                  | coulomb                    | C = A·s
length                    | l                     | L                    | meter                      | m
mass                      | m                     | M                    | kilogram                   | kg
time                      | t                     | T                    | second                     | s
electric current          | I                     | I                    | ampere                     | A
thermodynamic temperature | T                     | Θ                    | kelvin                     | K
luminous intensity        | I[v]                  | J                    | candela                    | cd

[*] SI units, except for the canonical 'elementary unit' [x]. The following footnotes are canonical comments, related to iconic symbols.

^* For the elementary quantities N[X], n[X], and Q[el], the entity-type X of the elementary entity U[X] has to be specified in the text and indicated by a subscript: n[O[2]]; N[ce]; Q[el].

^$ Count N[X] equals the number of elementary entities U[X]. In the SI, the quantity 'count' is explicitly considered as an exception: "Each of the seven base quantities used in the SI is regarded as having its own dimension. .. All other quantities, with the exception of counts, are derived quantities" (Bureau International des Poids et Mesures 2019 The International System of Units (SI)). An elementary entity U[X] is a material unit, it is not a count (U[X] is not a number of U[X]). N[X] has the dimension X of a count and U[X] has the dimension U of an elementary entity; both quantities have the same abstract unit, the 'elementary unit' [x].

^§ Amount n[X] is an elementary quantity, converting the elementary unit [x] into the SI base unit mole [mol] using the Avogadro constant N[A].

^‡ Charge is a derived SI quantity. Charge is an elementary quantity, converting the elementary unit [x] into coulombs [C] using the elementary charge e, or converting moles [mol] into coulombs [C] using the Faraday constant F. z[X] is the charge number per elementary entity U[X], which is a constant for any defined elementary entity U[X]. Q[el] = z[X]·e·N[X].

Bioblast links: SI base units

MitoPedia concepts: Ergodynamics
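The count-to-mole and count-to-coulomb conversions described in the footnotes can be sketched numerically. The Avogadro constant and the elementary charge have exact values in the 2019 SI, and their product reproduces the Faraday constant:

```python
N_A = 6.02214076e23   # Avogadro constant, exact SI value, 1/mol
e = 1.602176634e-19   # elementary charge, exact SI value, C

def amount_mol(count):
    """n[X] = N[X] / N_A : convert a count of elementary entities to moles."""
    return count / N_A

def charge_coulomb(count, z=1):
    """Q[el] = z[X] * e * N[X] : convert a count of charged entities to coulombs."""
    return z * e * count

faraday = e * N_A                 # Faraday constant, C/mol
print(faraday)                    # ~96485.332 C/mol
print(amount_mol(N_A))            # 1.0 (one mole of entities)
print(charge_coulomb(N_A, z=1))   # charge of one mole of unit charges, ~96485.332 C
```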
BMO, Munich: Publications "Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description." Sebastian Bauer, Gerald Mathias, and Paul Tavan J. Chem. Phys. 140, 104102 (2014). We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan (J. Chem. Phys. 118, 2039, 2003) with concepts generalizing the Born solution (Zeitschrift für Physik 1, 45, 1920) for a solvated ion. First, we derive an exact representation of the RF potential in terms of inducible atomic anti-polarization densities and of atomic shielding charge distributions, which immediately suggests the use of Gaussian approximations. The resulting approximate representation of the RF potential and energy takes the form of an electrostatics model, whose atomic sources are Gaussian shielding charge and anti-polarization densities. While the strengths of the shielding charge densities are directly given in terms of the static partial charges as defined, e.g. by standard MM force fields for the various atom types, the strengths of the anti-polarization densities are calculated by a self-consistency iteration. The effective atomic volumes of the Gaussian shaped atoms are calculated by a second self-consistency procedure serving to guarantee that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ[i] of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood’s analytical solution for a spherical protein (J. Chem. Phys. 
2, 351, 1934) and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua, including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ[i]. A summarizing discussion highlights the achievements of the new theory and of its approximate solution, particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.

BMO authors (in alphabetical order): Sebastian Bauer, Gerald Mathias, Paul Tavan

Associated projects: Long-range electrostatics in molecular dynamics simulations
Daily Dose of Excel

Sheet Name

A formula to put the sheet's name into a cell:

=RIGHT(CELL("filename",A1),LEN(CELL("filename",A1))-FIND("]",CELL("filename",A1)))

The CELL() function with the "filename" argument will return the full path of the workbook plus the sheet name. The workbook will be enclosed in brackets []. This formula finds the right bracket (]) and returns everything after that. Note that if the workbook has not been saved, there is no path, and this formula returns a #VALUE! error.

25 thoughts on "Sheet Name"

1. Thanks, I will be using this formula in my project.

2. As an alternative, this has two fewer function calls (255 is just a big number; it could be anything >= 31):

=MID(CELL("filename",A1),FIND("]",CELL("filename",A1))+1,255)

3. Can I use this formula somehow to grab the sheet name of a different Excel file and place it in a cell of the current Excel file?

4. You could use something like this to do that for you, with the cell references for the CELL() function changed to cells on the sheet that you are interested in in another workbook. This only works if the other workbook is open, though; otherwise it returns the #N/A error.

5. Oh, thanks John. I don't think that will work for me. I have an AutoCAD-created Excel file. AutoCAD assigns a random name to the Excel sheets, but the file name is always the same. I need to grab the random name and put it in a different Excel file. I have not been able to find anyone who can do it so far.

6. John, your formula works perfectly; it just sucks that it won't work unless the file is open. Thanks again!

7. Obtain sheet names from a closed workbook:

8. I want to enter "sheet2" into A1 of sheet1. Then I want to sum A1:A5 of sheet2; the formula would be =SUM(sheet2!A1:A5). I would like to pull the sheet name from A1 of sheet1. The reason for this is that I have data relating to all 10 provinces in one workbook, and I want to write a formula that will pull the province name from a cell in Sheet11 to correspond to the sheet name that I want to sum up. I would like to write a formula like this where A1 will return sheet2. I hope this makes sense. Is this possible?
9. Matt, using the INDIRECT function like this:

10. Juan, thanks so much, that will work excellently.

11. I am trying to place the sheet name into a cell in another sheet of the same workbook.

12. I want to use the conditional sum function on sheet 2, but refer to a data range on sheet 1. I have successfully used the Conditional Sum Wizard to enter three conditions, placing the formula on sheet 1. However, I need to put these totals on sheet 2, separate from the data on sheet 1. How do I add a reference to Sheet 1 in this formula?

13. I have answered my own question. When asked for the data range, I clicked to Sheet 1 and selected the data range. I had actually tried manually adding the reference to "Sheet1!" but it did not work until I went back into the Conditional Sum Wizard and highlighted the cell range. Why am I unable to edit my formula after using the Conditional Sum Wizard? I tried the above edit and it didn't work. Then I tried to change the reference to one column (change from C1:C10 to D1:D10), but when I do that it is invalid and I must start all over using the Wizard. I am using Excel 2000. Do the newer versions resolve that problem?

14. Hi Robin – The Conditional Sum Wizard produces an array formula. Notice that it has {curly braces} around it in the formula bar? When you create or edit an array formula, you have to hold CTRL+SHIFT while pressing Enter, in order to produce a valid array formula. – Jon

15. Thank you so much, Jon! I can't tell you how much this has helped me!

16. I am working with the same array formulas and have another question. I have a Conditional Sum formula and need one of my conditions to say this: "If C2:C17 is not blank (it will contain a number), add the numbers." What I need to know is how to express the condition "not blank." Thanks very much!

17. re "put sheet name into a cell" (Excel XP), i.e. =RIGHT(CELL("filename",A1),LEN(CELL("filename",A1))-FIND("]",CELL("filename",A1))) and similar.
The above does not appear to work well when there are multiple sheets in a workbook. E.g. I use the formula to display the sheet name on about 12 sheets. Unfortunately, at any one point in time, the same sheet name is displayed on all sheets. The formula appears to display the sheet name of the sheet with the most recently "recalculated" cell value. Simply switching from one sheet to the next does not change the sheet name displayed. The displayed name only changes to the currently open sheet if:
+ you press F9 to recalculate a value (/values) on the 'active' worksheet,
+ or, say, enter data or a formula and press Enter; i.e. this appears to make the "open" sheet the "active" sheet.

18. David, are you sure you have a cell reference after "filename"? E.g. CELL("filename") vs. CELL("filename",A1) – with the cell anchor you shouldn't have to recalculate in order to see it.

19. This will work in a cell to get the sheet name of a reference (and does not require the file to be saved). Note however that you cannot use it on the sheet you are getting the name of... Excel automatically gives a local address and then doesn't have the sheet name.. :P

=MID((CELL("Address",'Sheet You Want The Name of'!$A$1)),(FIND("]",CELL("Address",'Sheet You Want The Name of'!$A$1),1)+1),(FIND("!",CELL("Address",'Sheet You Want The Name of'!$A$1),2)-1)-(FIND("]",CELL("Address",'Sheet You Want The Name of'!$A$1),1)+1))

Still requires recalculation. Hope this helps someone...

20. This is all fascinating, but I was looking for something the other way round. How do I set a worksheet name to the value of a cell? I.e. I have a cell H1 containing "January 2008" and I want sheet1 to be labelled this programmatically, which would save me manually changing all 7 worksheets every month.

21. Derek: You need VBA for that. See

22. Excel 2002. I want to use the name of the ACTIVE worksheet in a chart label or text box. I already can get the worksheet name into a cell. NOTE.
I name each tab by date and then want this date on the chart which I print out.
23. THANKS!!! Adding in the reference to A1 (i.e. CELL(“filename”,A1) rather than simply CELL(“filename”)) did the trick for me!! :-D
24. I am using the formula for the filename in one cell. When another file is opened, the filename of this new file gets updated in the earlier file. For example, while the file named “123.xlsx” is open, cell A1 shows “123”. When the file named “xyz.xlsx” is opened, cell A1 in file 123.xlsx shows “xyz”. How do I fix this? Thanks in advance.
25. Vinod, Add a cell reference (the 2nd parameter to the CELL function).
Post-doc position: Simulation of maximal entropy random walks on infinite graphs - EIPHI Graduate School
Institut Mathématiques de Bourgogne, Université de Bourgogne, France. 12 months, September 2021 to August 2022. Net Salary: Financing Institutions:
Maximal entropy random walks on networks consist in choosing the transition probabilities that maximize the entropy of the trajectory of the walk. Given the importance of the notion of entropy in both statistical physics and information theory, it is not surprising that these random walks have important connections with quantum physics and theoretical computer science. A maximal entropy random walk can be a good model for locating electrons in irregular lattices. The stationary probability is then the fundamental state of the quantum random walk f(t) = exp(-itA) f(0), where A is the adjacency matrix of the lattice on which the random walk evolves. It is thus directly related to quantum information. The measurement of the importance of a node in a network is an essential datum in modeling, both in computer science with search engines and in neuroscience to detect neuronal activities. Knowing how to identify the importance of nodes is at the heart of many applications. Maximal entropy random walks allow us to quantify this importance. They also provide a model of path integrals in quantum mechanics.
There is generally no easily accessible formula to define the transitions. The challenge of the post-doctoral project will be to obtain a numerical simulation method for these random walks with maximal entropy: to reach the maximal-entropy walk as a limit, in steady state, and to quantify the speed of convergence towards this steady state.
• The various facets of random walk entropy, Burda, Z. et al., Acta Phys. Polon. B41 (2010) 949-987
• Universal quantum computation using the discrete-time quantum walk, Neil B. Lovett, Sally Cooper, Matthew Everitt, Matthew Trevers, and Viv Kendon, Phys.
Rev. A 81, 042330 (2010)
• From Structure to Activity: Using Centrality Measures to Predict Neuronal Activity, International Journal of Neural Systems, Vol. 28, No. 02, 1750013 (2018)
The applicant should have a PhD in applied mathematics or mathematics, with a working knowledge of probability, and have a marked taste for numerical experiments. He/She will join the research team SPOC of the IMB, where he/she will be supervised by Peggy Cénac and Yoann Offret. Working language: English or French.
To apply
To apply, send an e-mail to Peggy Cénac before March 30th, 2021, with the following:
• a cover letter,
• a detailed CV with a list of publications,
• a copy of recent works (the PhD thesis dissertation, and/or the corresponding papers if the PhD has not been defended yet).
Letters of support can be attached or sent separately.
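For a finite graph, the maximal-entropy transition probabilities do admit a closed form via the leading (Perron–Frobenius) eigenpair of the adjacency matrix: S[i, j] = A[i, j]·ψ[j] / (λ·ψ[i]), with stationary distribution π[i] = ψ[i]². A minimal NumPy sketch of this construction (our illustration, not part of the posting; the 4-node path graph is an arbitrary example):

```python
import numpy as np

def merw_transition_matrix(A):
    """Maximal-entropy random walk on an undirected connected graph.

    Returns (S, pi): the MERW transition matrix S[i, j] = A[i, j] * psi[j] / (lam * psi[i])
    and the stationary distribution pi[i] = psi[i] ** 2, where (lam, psi) is the
    leading eigenpair of the symmetric adjacency matrix A.
    """
    eigvals, eigvecs = np.linalg.eigh(A)   # eigh: ascending eigenvalues for symmetric A
    lam = eigvals[-1]                      # largest eigenvalue (Perron-Frobenius)
    psi = np.abs(eigvecs[:, -1])           # Perron vector, made entrywise positive
    psi /= np.linalg.norm(psi)             # normalize so that sum(psi**2) == 1
    S = A * psi[np.newaxis, :] / (lam * psi[:, np.newaxis])
    return S, psi ** 2

# Small example: a path graph on 4 nodes (irregular degrees).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S, pi = merw_transition_matrix(A)
```

Each row of S sums to 1 because (Aψ)ᵢ = λψᵢ, and π is stationary since πS = π, which makes the construction easy to sanity-check numerically.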
Why would a Vector Embedding be presented as a single Decimal Number?
Vector embeddings are a type of representation of data that use high-dimensional vectors to represent objects or concepts. They have been used in various applications such as natural language processing, computer vision, and recommendation systems. In this blog post, we will discuss the concept of vector embeddings and how they can be presented as a single decimal number.
Vector embeddings are typically represented as dense matrices with a large number of rows and columns. Each row represents an object or concept, and each column represents a feature or attribute of that object or concept. For example, in natural language processing, words can be represented as vectors in a high-dimensional space, where the features represent semantic relationships between words.
One way to represent vector embeddings is by using a single decimal number. This is often done by converting the matrix into a dense array of floating-point numbers and then normalizing it to have unit length. The resulting array can then be collapsed into a single decimal number.
Here's an example of how this can be done in Python using the NumPy library:

```python
import numpy as np

# Generate some random data for demonstration purposes
data = np.random.rand(10, 5)

# Normalize each row of the data to have unit length
data_norm = data / np.linalg.norm(data, axis=1)[:, np.newaxis]

# Convert the normalized data into a dense array of floating-point numbers
data_array = data_norm.astype('float32')

# Collapse the dense array into a single decimal number:
# the sum of all elements divided by the total number of elements (i.e. the mean)
data_decimal = np.sum(data_array) / data_array.size
```

In this example, we first generate some random data using NumPy's `random.rand()` function. We then normalize each row of the data to have unit length using NumPy's `linalg.norm()` function. Next, we convert the normalized data into a dense array of floating-point numbers using NumPy's `astype()` function.
Finally, we convert the dense array into a single decimal number by summing up all the elements and dividing by the total number of elements in the array. It's important to note that representing vector embeddings as a single decimal number can be useful for certain applications where it's necessary to compare or combine multiple vectors. For example, in recommendation systems, it may be useful to compare the vector embeddings of different items to find similarities and make recommendations based on those similarities. In this case, representing the vector embeddings as a single decimal number can simplify the comparison process. However, there are some limitations to representing vector embeddings as a single decimal number. One limitation is that it may not be possible to capture all the information in the original high-dimensional space with a single decimal number. Additionally, the normalization process used to convert the matrix into a dense array of floating-point numbers can introduce rounding errors, which can affect the accuracy of the resulting decimal number. In conclusion, representing vector embeddings as a single decimal number can be useful for certain applications where it's necessary to compare or combine multiple vectors. However, it's important to be aware of the limitations and potential issues that may arise when using this representation.
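As a concrete illustration of the information loss mentioned above, here is a small demonstration (using the mean-based scalarization from the earlier snippet, not any standard library routine) that two completely dissimilar vectors can collapse to the same decimal number:

```python
import numpy as np

# Two orthogonal (maximally dissimilar) unit-length "embeddings"
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

def to_decimal(v):
    # The mean-based scalarization discussed in the post
    return float(np.sum(v) / v.size)

s1, s2 = to_decimal(v1), to_decimal(v2)   # both collapse to 0.5
cosine = float(v1 @ v2)                   # 0.0: the vectors share no direction at all
```

Comparing s1 and s2 would suggest the two items are identical, while the cosine similarity of the original vectors shows they have nothing in common, which is exactly the limitation discussed above.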
Jiří Pavlů and Tomáš Rosa
The advent of quantum computers. Time to panic?
Quantum computing started as an idea to make our simulations of quantum effects in nature more precise and more efficient. However, this idea was not given much attention at first, as classical computers were still getting faster and capable of better simulations even without exploiting quantum effects. This changed when Peter Shor presented a quantum algorithm that can quickly factor RSA moduli and compute discrete logarithms, effectively showing that almost all of public-key cryptography is vulnerable to an attack by a quantum computer. From that day on, quantum computers have been at the centre of interest of cryptologists and technological giants from all around the world.
The power of quantum computers comes from the fact that they are, in some sense, capable of performing the same calculations on exponentially many inputs in terms of the input length. We shall emphasize that this does not mean they can break a general encryption algorithm simply by trying all possible keys in parallel and then guessing the correct one. Despite the quantum parallelism noted above, we can finally measure only one result, chosen randomly with a certain probability. Fortunately, in quantum computers, the aforementioned probabilities are governed by amplitudes whose coefficients can be negative or even complex numbers. Thanks to this fact, the amplitudes (and so the probabilities) can interfere in either a constructive or a destructive way. The art of designing quantum algorithms is then to amplify the probabilities of correct results and cancel out the probabilities of incorrect ones. In our lecture, we will use practical examples to illustrate this approach.
To mitigate the threat of quantum attacks, two main approaches emerge – quantum key distribution (QKD) and quantum-resistant classical cryptography (so-called post-quantum cryptography, PQC).
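A minimal numerical sketch of the interference described above (our illustration, not an example from the lecture itself): applying a Hadamard gate twice to |0⟩ returns |0⟩ with certainty, because the two amplitude contributions to |1⟩ cancel destructively while those to |0⟩ add constructively.

```python
import numpy as np

# Hadamard gate: amplitudes (which may be negative) add before probabilities
# are taken, so computational branches can cancel each other out.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])    # state |0>

after_one = H @ ket0           # equal superposition: amplitudes (1/sqrt(2), 1/sqrt(2))
after_two = H @ after_one      # back to |0>: the |1> amplitudes cancel destructively

probs = np.abs(after_two) ** 2 # measurement probabilities: ~[1, 0]
```

After one gate the measurement statistics are 50/50, yet after two gates the outcome is deterministic; a classical coin flip applied twice could never "un-randomize" itself this way.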
The advantage of QKD is that it utilizes the very power of quantum mechanics to ensure that the distributed key was not eavesdropped on. This works because any observation of an unknown quantum bit inevitably affects its state, so any attempt to tap the channel can be detected. For the same reason, however, the protocol is not resilient to DoS attacks, as any attempt to tap the channel will also prevent the key distribution. On the other hand, PQC only requires replacing the standard computational-hardness assumptions used in contemporary cryptosystems. In particular, we shall not base our cryptosystems on the hardness of factoring, but use problems that are still hard even for quantum computers.
In the lecture, we will also touch on the following questions: When should we expect the first quantum attack? What is the threat of retroactive cryptanalysis? How do these attacks actually work? Can we defend ourselves? How do we develop countermeasures? Does the advent of quantum computers mean the end of cryptography as we know it today?
Jiří Pavlů
Holds a master's degree in mathematics for information technologies from MFF UK. He specializes in theoretical cryptography, especially the security and usability of symmetric ciphers, and coding theory. He is a cryptologist at the competence centre of the Raiffeisen Bank International group.
Tomáš Rosa
Holds a Ph.D. in cryptology; his doctoral dissertation was awarded the Best Doctoral Work Award of the rector of ČVUT in 2004. He studied at FEL ČVUT and MFF UK in Prague. He works on mathematical and physical methods of computer security, especially in embedded and radio applications. His work has also improved a number of worldwide standards – the TLS protocol, the EMV scheme, Bluetooth, and GNSS. He is the chief cryptologist of the competence centre of the Raiffeisen Bank International group.
JAC Class 9 Maths Notes Chapter 1 Number Systems
Students should go through these JAC Class 9 Maths Notes Chapter 1 Number Systems to get a clear insight into all the important concepts.
JAC Board Class 9 Maths Notes Chapter 1 Number Systems
Classification Of Numbers:
→ Natural numbers: The numbers used for counting are called natural numbers: the number 1 and every other number obtained by adding 1 to it repeatedly. Hence 1, 2, 3, … represent natural numbers. The set of natural numbers is denoted by N.
→ Whole numbers: Natural numbers along with 0 are termed whole numbers. So, the smallest whole number is 0. The set of whole numbers is represented by W.
→ Integers: All positive and negative whole numbers that do not include any fractional or decimal parts are called integers. The set of integers is expressed as Z = {…, -3, -2, -1, 0, 1, 2, 3, …}, where Z is the symbol used to denote the set of integers.
→ Rational numbers: These are numbers which can be expressed in the form p/q, where p and q are integers and q ≠ 0, e.g. 2/3, 37/15, -17/19. The set of rational numbers is represented by the symbol Q.
• All natural numbers, whole numbers and integers are rational numbers.
• Rational numbers include all integers (without any decimal part), terminating decimals (e.g. 0.75, -0.02 etc.) and also non-terminating but recurring decimals (e.g. 0.666…, -2.333… etc.).
• All fractions are rational numbers but not every rational number is a fraction, since both numerator and denominator are always positive in a fraction, whereas a rational number can have a negative numerator or denominator.
→ Common fraction: A fraction in which numerator and denominator are both integers, e.g. \(\frac{2}{5}\), \(\frac{1}{2}\) etc.
→ Decimal fraction: A fraction whose denominator is 10 or any power of 10.
→ Proper fraction: A fraction whose numerator is smaller than its denominator, e.g. \(\frac{3}{5}\).
→ Improper fraction: A fraction whose numerator is greater than its denominator, e.g. \(\frac{5}{3}\).
→ Mixed fraction: A combination of a proper fraction and a whole number is called a mixed fraction, e.g. 3\(\frac{2}{7}\) etc. An improper fraction can be written in the form of a mixed fraction.
→ Compound fraction: A fraction in which the numerator or the denominator or both contain one or more fractions, e.g. \(\frac{2 / 3}{5 / 7}\).
→ Irrational numbers: These are numbers which cannot be expressed in the form p/q, where p and q are integers and q ≠ 0. These are non-terminating as well as non-recurring decimal numbers, e.g. \(\sqrt{2}\), π, 0.202002000… etc.
→ Real numbers: Numbers which can represent actual physical quantities in a meaningful way are known as real numbers. These can be represented on the number line. The number line is a geometrical straight line with an arbitrarily defined zero (origin). The set of real numbers is denoted by the symbol R.
→ Prime numbers: All natural numbers that have only 1 and themselves as their factors are called prime numbers, i.e. prime numbers are divisible by 1 and themselves only, e.g. 2, 3, 5, 7, 11, 13, 17, 19, 23, … etc.
→ Composite numbers: All natural numbers which have three or more different factors are called composite numbers, e.g. 4, 6, 8, 9, 10, 12, … etc.
• 1 is neither prime nor composite.
→ Co-prime numbers: If the H.C.F. of the given numbers (not necessarily prime) is 1, then they are known as co-prime numbers, e.g. 4 and 9 are co-prime as H.C.F. of (4, 9) = 1.
• Any two consecutive integers will always be co-prime.
→ Even numbers: All integers which are divisible by 2 are called even numbers. Even numbers are denoted by the expression 2n, where n is any integer, e.g. …, -4, -2, 0, 2, 4, …
→ Odd numbers: All integers which are not divisible by 2 are called odd numbers. Odd numbers are denoted by the general expression 2n – 1 or 2n + 1, where n is any integer, e.g.
…, -5, -3, -1, 1, 3, …
Identification Of Prime Numbers:
Step 1: Find the approximate square root of the given number.
Step 2: Divide the given number by the prime numbers less than its approximate square root. If the given number is not divisible by any of these prime numbers, then the number is prime; otherwise it is not.
Example: Is 571 prime? The approximate square root of 571 is 24. The primes less than 24 are 2, 3, 5, 7, 11, 13, 17, 19 and 23. Since 571 is not divisible by any of these prime numbers, 571 is a prime number.
Is 1 a prime or composite number? 1 is neither a prime nor a composite number.
Representation Of Rational Numbers On A Real Number Line:
Divide the interval from 0 to 1 into 7 equal parts on the real number line.
• In a positive rational number, if the numerator is smaller than the denominator, then it lies between 0 and 1 on the number line.
• In a negative rational number, if the absolute value of the numerator is less than the denominator, then it lies between -1 and 0.
Decimal Number (Terminating): Here, A represents 2.65 on the number line.
Visualize the representation of \(5.3\overline{7}\) on the number line up to 5 decimal places, i.e. 5.37777. Here, B represents 5.37777 on the number line up to 5 places of decimal.
Rational Numbers In Decimal Representation:
→ Terminating Decimal: In this, a finite number of digits occurs after the decimal point, e.g. 0.5, 0.6875, 0.15 etc.
→ Non-Terminating and Repeating (Recurring) Decimal: In this, a digit or a set of digits is repeated continuously.
Ex. \(\frac{2}{3}\) = 0.6666… = \(0.\overline{6}\)
Ex. \(\frac{5}{11}\) = 0.454545… = \(0.\overline{45}\)
Properties Of Rational Numbers: Let a, b, c be three rational numbers.
• Commutative property of addition: a + b = b + a
• Associative property of addition: (a + b) + c = a + (b + c)
• Additive identity: a + 0 = a; 0 is called the identity element for addition.
• Additive inverse: a + (-a) = 0; 0 is the identity element and -a is called the additive inverse of a.
• Commutative property of multiplication: a.b = b.a
• Associative property of multiplication: (a × b) × c = a × (b × c)
• Multiplicative identity: a × 1 = a; 1 is called the multiplicative identity of a.
• Multiplicative inverse: (a) × (\(\frac{1}{a}\)) = 1 for a ≠ 0; 1 is the multiplicative identity and \(\frac{1}{a}\) is called the multiplicative inverse (or reciprocal) of a.
• Distributive property: a × (b + c) = a × b + a × c
Prove that \(\sqrt{3}-\sqrt{2}\) is an irrational number.
Let \(\sqrt{3}-\sqrt{2}\) = r, where r is a rational number.
Squaring both sides, \((\sqrt{3}-\sqrt{2})^2 = r^2\)
⇒ 3 + 2 – 2\(\sqrt{6}\) = r^2
⇒ 5 – 2\(\sqrt{6}\) = r^2
⇒ \(\frac{5-r^2}{2}\) = \(\sqrt{6}\)
Here, \(\sqrt{6}\) is an irrational number but \(\frac{5-r^2}{2}\) is a rational number, so L.H.S. ≠ R.H.S.
Hence, this contradicts our assumption that \(\sqrt{3}-\sqrt{2}\) is a rational number. Therefore, \(\sqrt{3}-\sqrt{2}\) is an irrational number.
Irrational Numbers: These are real numbers which cannot be expressed in the form p/q, where p and q are integers and q ≠ 0.
→ Irrational Numbers in Decimal Form:
\(\sqrt{2}\) = 1.414213…, i.e. it is non-recurring as well as non-terminating.
\(\sqrt{3}\) = 1.732050807…, i.e. it is non-recurring as well as non-terminating.
Insert an irrational number between 2 and 3: \(\sqrt{2 \times 3}=\sqrt{6}\) is an irrational number between 2 and 3.
→ Irrational Numbers on a Number Line: Plot \(\sqrt{2}\), \(\sqrt{3}\), \(\sqrt{5}\), \(\sqrt{6}\) on the same number line.
→ Properties of Irrational Numbers:
• The negative of an irrational number is an irrational number, e.g. \(\sqrt{3}\) and –\(\sqrt{3}\) are irrational numbers.
• The sum and difference of a rational and an irrational number is always an irrational number.
• The sum and difference of two irrational numbers is either a rational or an irrational number.
• The product of a rational number with an irrational number is either rational or irrational (it is irrational unless the rational number is 0).
• The product of an irrational number with an irrational number is not always irrational.
Ex. Two numbers are 2 and \(\sqrt{3}\); then
Sum = 2 + \(\sqrt{3}\), an irrational number.
Difference = 2 – \(\sqrt{3}\), an irrational number. Also \(\sqrt{3}\) – 2 is an irrational number.
Ex. Two numbers are 4 and \(\sqrt[3]{3}\); then
Sum = 4 + \(\sqrt[3]{3}\), an irrational number.
Difference = 4 – \(\sqrt[3]{3}\), an irrational number.
Ex. Two irrational numbers are \(\sqrt{3}\) and –\(\sqrt{3}\); then
Sum = \(\sqrt{3}\) + (–\(\sqrt{3}\)) = 0, which is rational.
Difference = \(\sqrt{3}\) – (–\(\sqrt{3}\)) = 2\(\sqrt{3}\), which is irrational.
Geometrical Representation Of Irrational Numbers On The Number Line
To represent \(\sqrt{2}\) on the number line we follow these steps:
Step I: Draw a number line and mark the centre point as zero.
Step II: Mark the right side of zero as (1) and the left side as (-1).
Step III: We won’t be considering (-1) for our purpose.
Step IV: With the same length as between 0 and 1, draw a line perpendicular to the number line at point (1), such that the new line has a length of 1 unit.
Step V: Now join the point (0) and the end of the new line of unit length.
Step VI: A right-angled triangle is constructed.
Step VII: Now let us name the triangle ABC such that AB is the height (perpendicular), BC is the base of the triangle and AC is the hypotenuse of the right-angled triangle ABC.
Step VIII: Now the length of the hypotenuse, i.e. AC, can be found by applying Pythagoras’ theorem to the right triangle ABC.
AC^2 = AB^2 + BC^2 ⇒ AC^2 = 1^2 + 1^2 ⇒ AC^2 = 2 ⇒ AC = \(\sqrt{2}\)
Step IX: Now with AC as radius and C as the centre, cut an arc on the same number line and name the point D.
Step X: Since AC is the radius of the arc, CD will also be a radius of the arc, whose length is \(\sqrt{2}\).
Step XI: Hence, D is the representation of \(\sqrt{2}\) on the number line.
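The prime-identification procedure described earlier (divide by every prime up to the approximate square root) can be sketched in code. This is our illustration, not part of the notes; trial division by every integer up to √n suffices, since any composite divisor would itself contain a prime divisor ≤ √n.

```python
import math

def is_prime(n):
    """Trial division up to sqrt(n). Returns False for n < 2
    (1 is neither prime nor composite)."""
    if n < 2:
        return False
    # Checking every integer divisor up to isqrt(n) covers all prime divisors too.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# The worked example from the notes: 571 has no prime divisor below 24,
# so it is prime.
```

For 571, `math.isqrt(571)` is 23, so the loop tries exactly the divisors 2 through 23 that the notes list (plus the composites in between, which change nothing).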
Any irrational number of the form \(\sqrt[n]{a}\) is given a special name, surd, where a is called the radicand; it should always be a positive rational number. The symbol \(\sqrt[n]{ }\) is called the radical sign and the index n is called the order of the surd.
\(\sqrt[n]{a}\) is read as ‘nth root of a’ and can also be written as \(a^{\frac{1}{n}}\).
→ Some Identical Surds:
• \(\sqrt[3]{4}\) is a surd as the radicand is a rational number. Similar examples: \(\sqrt[3]{5}, \sqrt[4]{12}, \sqrt[5]{7}, \sqrt{12}, \ldots\)
• 2 + 2\(\sqrt{3}\) is a surd (as a rational number plus a surd gives a surd). Similar examples: \(\sqrt{3}+1, \sqrt[3]{3}+1, \ldots\)
• \(\sqrt{7-4 \sqrt{3}}\) is a surd as 7 – 4\(\sqrt{3}\) is the perfect square of (2 – \(\sqrt{3}\)). Similar examples: \(\sqrt{7+4 \sqrt{3}}, \sqrt{9-4 \sqrt{5}}, \sqrt{9+4 \sqrt{5}}, \ldots\)
• \(\sqrt[3]{\sqrt{3}}\) is a surd as \(\sqrt[3]{\sqrt{3}}=\left(3^{\frac{1}{2}}\right)^{\frac{1}{3}} = 3^{\frac{1}{6}}=\sqrt[6]{3}\)
→ Some Expressions That Are Not Surds:
• \(\sqrt[3]{8}\) is not a surd because \(\sqrt[3]{8}=\sqrt[3]{2^3}\) = 2, which is a rational number.
• \(\sqrt{2+\sqrt{3}}\) is not a surd because 2 + \(\sqrt{3}\) is not a perfect square.
• \(\sqrt[3]{1+\sqrt{3}}\) is not a surd because the radicand is an irrational number.
Laws Of Surds: [Important for changing the order of surds]
Operations On Surds:
→ Addition and Subtraction of Surds: Addition and subtraction of surds are possible only when the order and the radicand are the same.
Example: Simplify \(15 \sqrt{6}-\sqrt{216}+\sqrt{96}\) = 15\(\sqrt{6}\) – 6\(\sqrt{6}\) + 4\(\sqrt{6}\) = 13\(\sqrt{6}\).
→ Multiplication and Division of Surds:
→ Comparison of Surds: It is clear that if x > y > 0 and n > 1 is a positive integer, then \(\sqrt[n]{x}>\sqrt[n]{y}\).
Which is greater in each of the following?
(i) \(\sqrt[3]{6} \text{ and } \sqrt[5]{8}\)
(ii) \(\sqrt{\frac{1}{2}} \text{ and } \sqrt[3]{\frac{1}{3}}\)
Exponents Of Real Numbers:
→ Positive Integral Power: For any real number a and a positive integer n we define a^n = a × a × a × … × a (n times). a^n is called the nth power of a. The real number a is called the base and n is called the exponent, e.g. 2^3 = 2 × 2 × 2 = 8.
Note: For any non-zero real number a we define a^0 = 1, e.g. 3^0 = 1, 5^0 = 1, \(\left(\frac{3}{4}\right)^0\) = 1 and so on.
→ Negative Integral Power: For any non-zero real number a and a positive integer n we define \(a^{-\mathrm{n}}=\frac{1}{a^n}\).
Rational Exponents Of A Real Number:
→ Principal nth Root of a Positive Real Number: If a is a positive real number and n is a positive integer, then the principal nth root of a is the unique positive real number x such that x^n = a. The principal nth root of a positive real number a is denoted by a^{1/n} or \(\sqrt[n]{a}\).
→ Principal nth Root of a Negative Real Number: If a is a negative real number and n is an odd positive integer, then the principal nth root of a is defined as \(-|\mathrm{a}|^{1 / \mathrm{n}}\), i.e. the principal nth root of a is the negative of the principal nth root of |a|.
Remark: If a is a negative real number and n is an even positive integer, then the principal nth root of a is not defined, because an even power of a real number is never negative. Therefore, (-9)^{1/2} is a meaningless quantity, if we confine ourselves to the set of real numbers only.
→ Rational Power (Exponents): For any positive real number a and a rational number \(\frac{p}{q}\), where q ≠ 0 and p, q ∈ Z, we define a^{p/q} = (a^p)^{1/q}, i.e. a^{p/q} is the principal qth root of a^p.
Laws Of Rational Exponents: The following laws hold for rational exponents, where a, b are positive real numbers and m, n are rational numbers.
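The definition a^{p/q} = (a^p)^{1/q} above can be checked numerically for a positive base; the concrete values a = 8, p = 2, q = 3 are our own example, and the comparisons allow for floating-point rounding.

```python
import math

# Check the rational-exponent law a^(p/q) = (a^p)^(1/q) = (a^(1/q))^p
# for a positive real base: here 8^(2/3).
a, p, q = 8.0, 2, 3

lhs = a ** (p / q)            # 8^(2/3)
mid = (a ** p) ** (1 / q)     # (8^2)^(1/3) = 64^(1/3)
rhs = (a ** (1 / q)) ** p     # (8^(1/3))^2 = 2^2

# All three agree (up to floating-point error) and equal 4,
# since the principal cube root of 8 is 2 and 2^2 = 4.
```

The same check fails for a negative base with an even q, matching the remark above that even roots of negative reals are undefined over the reals.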
Polynomials | THE RIGHT MENTOR
Tag: Polynomials
Pre-Requisites Test & Enrich | Speed Notes | Notes For Quick Recap | Study Tools: Audio, Visual & Digital Content
Revision Notes on Polynomials: A polynomial is an algebraic expression that includes constants, variables, and exponents. It is an expression in which the variables have only positive integral powers. Example: 4x^3 + 3x^2 + x + 3 …
Polynomials | Speed Notes: Introduction: Polynomial: Any expression of the form a_0x^n + a_1x^{n-1} + a_2x^{n-2} + … + a_n is called a polynomial of degree n in the variable x, with a_0 ≠ 0, where n is a non-negative integer and a_0, a_1, a_2, …, a_n are real numbers, called the coefficients of the terms of…
PAROC - a Unified Framework Towards the Optimal Design, Operational Optimization and Model-Based Control of Process Systems
Tuesday, November 10, 2015: 9:33 AM
Salon F (Salt Lake Marriott Downtown at City Creek)
The presence of uncertainty in process systems is one of the key reasons for deviation from set operation policies, resulting in suboptimal or even infeasible operation. As these uncertainties manifest themselves on different time scales, such as the control, scheduling or design level, an integrated, comprehensive approach to considering uncertainty is required. Thus, in this contribution we demonstrate PAROC (PARametric Optimization and Control), a novel unified framework for the design, operational optimization and advanced model-based control of process systems, which decomposes this challenging problem into a series of steps, shown in the Figure below [1].
The first step comprises the formulation of a high-fidelity dynamic model of the original process, as well as its validation using various techniques such as parameter estimation and dynamic optimization. This model not only serves as the first step in translating a real-world system into a set of equations, but also as a platform for the validation of any receding horizon policy.
While the high-fidelity model is in general applicable for design purposes, its complexity may render its use for the development of receding horizon policies computationally infeasible. Thus, in the second step, the validated high-fidelity model is reduced in complexity and size using system identification or advanced model reduction techniques, aiming to compromise the accuracy of the original model as little as possible [2]. This approach results in a discretized state-space model, which is used in the next step for the development of receding horizon policies such as control laws and scheduling policies [3, 4].
At this step, based on the discretized state-space model, the problem of devising a suitable receding horizon policy is formulated as a constrained optimization problem. Within our framework, this problem is solved offline employing multi-parametric programming, where the states of the system are treated as parameters and the constrained optimization problem is solved as a function thereof. Due to the parameter-dependence of the constraints, different solutions might be optimal in different parts of the parameter space. This results in a partition of the parameter space into different regions, called critical regions, and each region is associated with a corresponding optimal solution of the optimization problem as a function of the parameters. As a result we obtain the receding horizon policies explicitly as a function of the states of the system[1], and reduce the computational effort of their evaluation to a point location in the parameter space and a function evaluation. However, when solving the receding horizon policies it is assumed that the values of the state vector are exactly known. As this might not be the case, e.g. due to noise, it is necessary to infer the state information from the available output measurements using a state estimator. While a long existing model-based technique for unconstrained state estimation is the Kalman filter, the use of constrained estimation techniques such as the moving horizon estimator (MHE) can lead to significant improvements of the estimation result by adding system knowledge [5, 6]. MHE is an estimation method that obtains the estimates by solving a constrained optimization problem given a horizon of past measurements. 
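The critical-region idea described above can be illustrated with a deliberately tiny one-dimensional multi-parametric QP (our toy example, not the PAROC software or its problem formulations): minimize (u − x)² subject to −1 ≤ u ≤ 1, with the state x treated as a parameter. The explicit solution is piecewise affine over three critical regions, and online evaluation reduces to locating the region and evaluating the corresponding affine law.

```python
import numpy as np

def explicit_law(x):
    """Explicit (offline-derived) solution of the toy mp-QP
        min_u (u - x)^2  s.t.  -1 <= u <= 1,
    with three critical regions of the parameter x:
        x < -1        -> u*(x) = -1   (lower bound active)
        -1 <= x <= 1  -> u*(x) = x    (unconstrained optimum)
        x > 1         -> u*(x) = 1    (upper bound active)
    """
    if x < -1.0:
        return -1.0
    if x > 1.0:
        return 1.0
    return x

def numerical_solution(x):
    """Brute-force check: minimize the objective over a fine grid of feasible u."""
    grid = np.linspace(-1.0, 1.0, 200001)
    return float(grid[np.argmin((grid - x) ** 2)])

# The explicit law matches the numerical optimum for any parameter value,
# but needs only a region lookup and a function evaluation online.
```

This mirrors, in miniature, the trade-off exploited by multi-parametric programming: the optimization effort is spent once offline, and the online controller is reduced to a point location plus an affine function evaluation.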
Thus, similarly to the problem of receding horizon policies, the presented framework solves the MHE problem in a multi-parametric fashion, where the past and current measurements and inputs and the initial guess for the estimated states are the parameters of the problem [5, 7].
As a last step, the obtained receding horizon policies are validated 'in-silico' using the original high-fidelity model, thus closing the loop. This validation is of crucial importance, as iterative experiments on real plants might be too costly or dangerous to run. In particular, in the case of multiple objectives such as minimization of error, safe operation and economically optimal performance, the possibility of performing 'in-silico' tests of a developed control strategy allows for the fine-tuning and optimal design of the control strategy.
In order to apply the afore-described framework, we also present software solutions for the different aspects of the framework. Due to its modeling and dynamic optimization capabilities, we employ PSE's gPROMS® ModelBuilder to formulate and validate the high-fidelity model of the process. Similarly, due to its widespread use and numerous built-in functions, the steps of model approximation as well as the formulation and solution of multi-parametric programming problems are performed in MATLAB® using state-of-the-art software [8, 9] based on the POP® toolbox [10]. Lastly, the solution of the multi-parametric programming problem is integrated into gPROMS® using a specifically designed foreign process written in C++. This approach avoids the use of tools such as gO:MATLAB, and thus enables the use of the dynamic simulation and optimization capabilities of PSE's gPROMS®.
The applicability of this novel framework will be demonstrated on a wide range of problems, such as industrial processes [11], bio-medical systems [5, 12, 13], and combined heat and power cogeneration systems [14].
1.
Pistikopoulos, E.N., et al., PAROC—An integrated framework and software platform for the optimisation and advanced model-based control of process systems. Chemical Engineering Science, in print.
2. Lambert, R.S.C., P. Rivotti, and E.N. Pistikopoulos, A Monte-Carlo based model approximation technique for linear model predictive control of nonlinear systems. Computers & Chemical Engineering, 2013. 54: p. 60–67.
3. Bemporad, A., et al., The explicit linear quadratic regulator for constrained systems. Automatica, 2002. 38(1): p. 3–20.
4. Kopanos, G.M. and E.N. Pistikopoulos, Reactive Scheduling by a Multiparametric Programming Rolling Horizon Framework: A Case of a Network of Combined Heat and Power Units. Industrial & Engineering Chemistry Research, 2014. 53(11): p. 4366–4386.
5. Nascu, I., et al., Simultaneous Multi-Parametric Model Predictive Control and State Estimation with Application to Distillation Column and Intravenous Anaesthesia, in 24th European Symposium on Computer Aided Process Engineering. 2014, Elsevier. p. 541–546.
6. Rao, C.V., Moving Horizon Strategies for the Constrained Monitoring and Control of Nonlinear Discrete-time Systems, in Chemical Engineering. 2000, University of Wisconsin-Madison.
7. Voelker, A., K. Kouramas, and E.N. Pistikopoulos, Simultaneous state estimation and model predictive control by multi-parametric programming. Computer Aided Chemical Engineering, 2010. 28(C): p. 607–612.
8. Dua, V., N.A. Bozinis, and E.N. Pistikopoulos, A multiparametric programming approach for mixed-integer quadratic engineering problems. Computers & Chemical Engineering, 2002. 26(4–5): p. 715–733.
9. Oberdieck, R., M. Wittmann-Hohlbein, and E.N. Pistikopoulos, A branch and bound method for the solution of multiparametric mixed integer linear programming problems. Journal of Global Optimization, 2014. 59(2–3): p. 527–543.
10. ParOS, Parametric Optimization Programming (POP). 2004, ParOS.
11.
Papathanasiou, M.M., et al., A control strategy for periodic systems - application to the twin-column MCSGP. Computer Aided Chemical Engineering, 2015. Accepted for publication.
12. Chang, H., et al., Robust multi-parametric model predictive control for LPV systems with application to anaesthesia. Journal of Process Control, 2014. 24(10): p. 1538–1547.
13. Nascu, I., R. Oberdieck, and E.N. Pistikopoulos, A framework for hybrid multi-parametric model-predictive control with application to intravenous anaesthesia. Computer Aided Chemical Engineering, 2015. Accepted for publication.
14. Diangelakis, N.A. and E.N. Pistikopoulos, A Decentralised Multi-parametric Model Predictive Control Study for a Domestic Heat and Power Cogeneration System. Computer Aided Chemical Engineering, 2015. Accepted for publication.

[1] In addition, measured disturbances and output set points are also treated as parameters.
Q26: Question – Paper 3 November 17 – AQA GCSE Maths Foundation

26) 42 men and 38 women visit a restaurant.
• 44 of these people have a voucher.
• Three times as many men as women do not have a voucher.
a) Complete the frequency tree. [4 marks]
b) A voucher takes 15% off the bill. After using the voucher, the bill for a meal is £27.20. How much was the bill before using the voucher? [3 marks]
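For reference, the arithmetic behind both parts can be checked with a short Python sketch (the variable names are ours, not part of the question):

```python
# Values taken from the question above.
men, women, with_voucher = 42, 38, 44

# a) Frequency tree: 80 - 44 = 36 people have no voucher; three times as many
#    men as women do not have one, so the 36 split in the ratio 3 : 1.
no_voucher = men + women - with_voucher              # 36
women_no = no_voucher // 4                           # 9
men_no = 3 * women_no                                # 27
men_yes, women_yes = men - men_no, women - women_no  # 15 and 29
assert men_yes + women_yes == with_voucher

# b) A 15% discount leaves 85% of the bill, so divide by 0.85 to reverse it.
bill_before = 27.20 / 0.85                           # £32.00

print(men_yes, women_yes, men_no, women_no, round(bill_before, 2))
```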
1.11. Ensembles: Gradient boosting, random forests, bagging, voting, stacking# Ensemble methods combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator. Two very famous examples of ensemble methods are gradient-boosted trees and random forests. More generally, ensemble models can be applied to any base learner beyond trees, in averaging methods such as Bagging methods, model stacking, or Voting, or in boosting, as AdaBoost. 1.11.1. Gradient-boosted trees# Gradient Tree Boosting or Gradient Boosted Decision Trees (GBDT) is a generalization of boosting to arbitrary differentiable loss functions, see the seminal work of [Friedman2001]. GBDT is an excellent model for both regression and classification, in particular for tabular data. 1.11.1.1. Histogram-Based Gradient Boosting# Scikit-learn 0.21 introduced two new implementations of gradient-boosted trees, namely HistGradientBoostingClassifier and HistGradientBoostingRegressor, inspired by LightGBM (see [LightGBM]). These histogram-based estimators can be orders of magnitude faster than GradientBoostingClassifier and GradientBoostingRegressor when the number of samples is larger than tens of thousands of samples. They also have built-in support for missing values, which avoids the need for an imputer. These fast estimators first bin the input samples X into integer-valued bins (typically 256 bins), which tremendously reduces the number of splitting points to consider, and allows the algorithm to leverage integer-based data structures (histograms) instead of relying on sorted continuous values when building the trees. The API of these estimators is slightly different, and some of the features from GradientBoostingClassifier and GradientBoostingRegressor are not yet supported, for instance some loss functions. 1.11.1.1.1.
Usage# Most of the parameters are unchanged from GradientBoostingClassifier and GradientBoostingRegressor. One exception is the max_iter parameter that replaces n_estimators, and controls the number of iterations of the boosting process: >>> from sklearn.ensemble import HistGradientBoostingClassifier >>> from sklearn.datasets import make_hastie_10_2 >>> X, y = make_hastie_10_2(random_state=0) >>> X_train, X_test = X[:2000], X[2000:] >>> y_train, y_test = y[:2000], y[2000:] >>> clf = HistGradientBoostingClassifier(max_iter=100).fit(X_train, y_train) >>> clf.score(X_test, y_test) Available losses for regression are: • ‘squared_error’, which is the default loss; • ‘absolute_error’, which is less sensitive to outliers than the squared error; • ‘gamma’, which is well suited to model strictly positive outcomes; • ‘poisson’, which is well suited to model counts and frequencies; • ‘quantile’, which allows for estimating a conditional quantile that can later be used to obtain prediction intervals. For classification, ‘log_loss’ is the only option. For binary classification it uses the binary log loss, also known as binomial deviance or binary cross-entropy. For n_classes >= 3, it uses the multi-class log loss function, with multinomial deviance and categorical cross-entropy as alternative names. The appropriate loss version is selected based on y passed to fit. The size of the trees can be controlled through the max_leaf_nodes, max_depth, and min_samples_leaf parameters. The number of bins used to bin the data is controlled with the max_bins parameter. Using less bins acts as a form of regularization. It is generally recommended to use as many bins as possible (255), which is the default. 
The l2_regularization parameter acts as a regularizer for the loss function, and corresponds to \(\lambda\) in the following expression (see equation (2) in [XGBoost]): \[\mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \frac12 \sum_k \lambda ||w_k||^2\] Details on l2 regularization# It is important to notice that the loss term \(l(\hat{y}_i, y_i)\) describes only half of the actual loss function except for the pinball loss and absolute error. The index \(k\) refers to the k-th tree in the ensemble of trees. In the case of regression and binary classification, gradient boosting models grow one tree per iteration, then \(k\) runs up to max_iter. In the case of multiclass classification problems, the maximal value of the index \(k\) is n_classes \(\times\) max_iter. If \(T_k\) denotes the number of leaves in the k-th tree, then \(w_k\) is a vector of length \(T_k\), which contains the leaf values of the form w = -sum_gradient / (sum_hessian + l2_regularization) (see equation (5) in [XGBoost]). The leaf values \(w_k\) are derived by dividing the sum of the gradients of the loss function by the combined sum of hessians. Adding the regularization to the denominator penalizes the leaves with small hessians (flat regions), resulting in smaller updates. Those \(w_k\) values contribute then to the model’s prediction for a given input that ends up in the corresponding leaf. The final prediction is the sum of the base prediction and the contributions from each tree. The result of that sum is then transformed by the inverse link function depending on the choice of the loss function (see Mathematical formulation). 
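The leaf-value formula above can be checked numerically. A minimal sketch, assuming the (halved) squared-error loss so that each sample contributes gradient prediction - y and hessian 1 (the function and variable names are ours, not scikit-learn's internals):

```python
# Numeric sketch of w = -sum_gradient / (sum_hessian + l2_regularization)
# for the halved squared-error loss, where gradient_i = prediction_i - y_i
# and hessian_i = 1 for every sample.

def leaf_value(residuals, l2_regularization=0.0):
    # residuals = prediction - y for the samples falling into this leaf
    sum_gradient = sum(residuals)
    sum_hessian = float(len(residuals))  # each hessian is 1
    return -sum_gradient / (sum_hessian + l2_regularization)

residuals = [0.4, 0.6, 1.0]  # current predictions overshoot the targets
print(leaf_value(residuals))                          # -2.0/3: step toward y
print(leaf_value(residuals, l2_regularization=3.0))   # -2.0/6: damped update
```

Raising `l2_regularization` inflates the denominator, which is exactly the "smaller updates for leaves with small hessians" effect described above.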
Notice that the original paper [XGBoost] introduces a term \(\gamma\sum_k T_k\) that penalizes the number of leaves (making it a smooth version of max_leaf_nodes) not presented here as it is not implemented in scikit-learn; whereas \(\lambda\) penalizes the magnitude of the individual tree predictions before being rescaled by the learning rate, see Shrinkage via learning rate. Note that early-stopping is enabled by default if the number of samples is larger than 10,000. The early-stopping behaviour is controlled via the early_stopping, scoring, validation_fraction, n_iter_no_change, and tol parameters. It is possible to early-stop using an arbitrary scorer, or just the training or validation loss. Note that for technical reasons, using a callable as a scorer is significantly slower than using the loss. By default, early-stopping is performed if there are at least 10,000 samples in the training set, using the validation loss. 1.11.1.1.2. Missing values support# HistGradientBoostingClassifier and HistGradientBoostingRegressor have built-in support for missing values (NaNs). During training, the tree grower learns at each split point whether samples with missing values should go to the left or right child, based on the potential gain. When predicting, samples with missing values are assigned to the left or right child accordingly: >>> from sklearn.ensemble import HistGradientBoostingClassifier >>> import numpy as np >>> X = np.array([0, 1, 2, np.nan]).reshape(-1, 1) >>> y = [0, 0, 1, 1] >>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y) >>> gbdt.predict(X) array([0, 0, 1, 1]) When the missingness pattern is predictive, the splits can be performed on whether the feature value is missing or not: >>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1) >>> y = [0, 1, 0, 0, 1] >>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1, ... max_depth=2, ... learning_rate=1, ...
max_iter=1).fit(X, y) >>> gbdt.predict(X) array([0, 1, 0, 0, 1]) If no missing values were encountered for a given feature during training, then samples with missing values are mapped to whichever child has the most samples. 1.11.1.1.3. Sample weight support# HistGradientBoostingClassifier and HistGradientBoostingRegressor support sample weights during fit. The following toy example demonstrates that samples with a sample weight of zero are ignored: >>> X = [[1, 0], ... [1, 0], ... [1, 0], ... [0, 1]] >>> y = [0, 0, 1, 0] >>> # ignore the first 2 training samples by setting their weight to 0 >>> sample_weight = [0, 0, 1, 1] >>> gb = HistGradientBoostingClassifier(min_samples_leaf=1) >>> gb.fit(X, y, sample_weight=sample_weight) >>> gb.predict([[1, 0]]) >>> gb.predict_proba([[1, 0]])[0, 1] As you can see, the [1, 0] is comfortably classified as 1 since the first two samples are ignored due to their sample weights. Implementation detail: taking sample weights into account amounts to multiplying the gradients (and the hessians) by the sample weights. Note that the binning stage (specifically the quantiles computation) does not take the weights into account. 1.11.1.1.4. Categorical Features Support# HistGradientBoostingClassifier and HistGradientBoostingRegressor have native support for categorical features: they can consider splits on non-ordered, categorical data. For datasets with categorical features, using the native categorical support is often better than relying on one-hot encoding (OneHotEncoder), because one-hot encoding requires more tree depth to achieve equivalent splits. It is also usually better to rely on the native categorical support rather than to treat categorical features as continuous (ordinal), which happens for ordinal-encoded categorical data, since categories are nominal quantities where order does not matter. 
To enable categorical support, a boolean mask can be passed to the categorical_features parameter, indicating which feature is categorical. In the following, the first feature will be treated as categorical and the second feature as numerical: >>> gbdt = HistGradientBoostingClassifier(categorical_features=[True, False]) Equivalently, one can pass a list of integers indicating the indices of the categorical features: >>> gbdt = HistGradientBoostingClassifier(categorical_features=[0]) When the input is a DataFrame, it is also possible to pass a list of column names: >>> gbdt = HistGradientBoostingClassifier(categorical_features=["site", "manufacturer"]) Finally, when the input is a DataFrame we can use categorical_features="from_dtype" in which case all columns with a categorical dtype will be treated as categorical features. The cardinality of each categorical feature must be less than the max_bins parameter. For an example using histogram-based gradient boosting on categorical features, see Categorical Feature Support in Gradient Boosting. If there are missing values during training, the missing values will be treated as a proper category. If there are no missing values during training, then at prediction time, missing values are mapped to the child node that has the most samples (just like for continuous features). When predicting, categories that were not seen during fit time will be treated as missing values. Split finding with categorical features# The canonical way of considering categorical splits in a tree is to consider all of the \(2^{K - 1} - 1\) partitions, where \(K\) is the number of categories. This can quickly become prohibitive when \(K\) is large. Fortunately, since gradient boosting trees are always regression trees (even for classification problems), there exists a faster strategy that can yield equivalent splits. First, the categories of a feature are sorted according to the variance of the target, for each category k.
Once the categories are sorted, one can consider continuous partitions, i.e. treat the categories as if they were ordered continuous values (see Fisher [Fisher1958] for a formal proof). As a result, only \(K - 1\) splits need to be considered instead of \(2^{K - 1} - 1\). The initial sorting is a \(\mathcal{O}(K \log(K))\) operation, leading to a total complexity of \(\mathcal{O}(K \log(K) + K)\), instead of \(\mathcal{O}(2^K)\). 1.11.1.1.5. Monotonic Constraints# Depending on the problem at hand, you may have prior knowledge indicating that a given feature should in general have a positive (or negative) effect on the target value. For example, all else being equal, a higher credit score should increase the probability of getting approved for a loan. Monotonic constraints allow you to incorporate such prior knowledge into the model. For a predictor \(F\) with two features: • a monotonic increase constraint is a constraint of the form: \[x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2)\] • a monotonic decrease constraint is a constraint of the form: \[x_1 \leq x_1' \implies F(x_1, x_2) \geq F(x_1', x_2)\] You can specify a monotonic constraint on each feature using the monotonic_cst parameter. For each feature, a value of 0 indicates no constraint, while 1 and -1 indicate a monotonic increase and monotonic decrease constraint, respectively: >>> from sklearn.ensemble import HistGradientBoostingRegressor ... # monotonic increase, monotonic decrease, and no constraint on the 3 features >>> gbdt = HistGradientBoostingRegressor(monotonic_cst=[1, -1, 0]) In a binary classification context, imposing a monotonic increase (decrease) constraint means that higher values of the feature are supposed to have a positive (negative) effect on the probability of samples to belong to the positive class. Nevertheless, monotonic constraints only marginally constrain feature effects on the output.
For instance, monotonic increase and decrease constraints cannot be used to enforce the following modelling constraint: \[x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2')\] Also, monotonic constraints are not supported for multiclass classification. Since categories are unordered quantities, it is not possible to enforce monotonic constraints on categorical features. 1.11.1.1.6. Interaction constraints# A priori, the histogram gradient boosted trees are allowed to use any feature to split a node into child nodes. This creates so-called interactions between features, i.e. usage of different features as split along a branch. Sometimes, one wants to restrict the possible interactions, see [Mayer2022]. This can be done by the parameter interaction_cst, where one can specify the indices of features that are allowed to interact. For instance, with 3 features in total, interaction_cst=[{0}, {1}, {2}] forbids all interactions. The constraints [{0, 1}, {1, 2}] specifies two groups of possibly interacting features. Features 0 and 1 may interact with each other, as well as features 1 and 2. But note that features 0 and 2 are forbidden to interact. The following depicts a tree and the possible splits of the tree:

   1      <- Both constraint groups could be applied from now on
  / \
 1   2    <- Left split still fulfills both constraint groups.
/ \ / \      Right split at feature 2 has only group {1, 2} from now on.

LightGBM uses the same logic for overlapping groups. Note that features not listed in interaction_cst are automatically assigned an interaction group for themselves. With again 3 features, this means that [{0}] is equivalent to [{0}, {1, 2}]. 1.11.1.1.7. Low-level parallelism# HistGradientBoostingClassifier and HistGradientBoostingRegressor use OpenMP for parallelization through Cython. For more details on how to control the number of threads, please refer to our Parallelism notes.
The following parts are parallelized: • mapping samples from real values to integer-valued bins (finding the bin thresholds is however sequential) • building histograms is parallelized over features • finding the best split point at a node is parallelized over features • during fit, mapping samples into the left and right children is parallelized over samples • gradient and hessians computations are parallelized over samples • predicting is parallelized over samples 1.11.1.1.8. Why it’s faster# The bottleneck of a gradient boosting procedure is building the decision trees. Building a traditional decision tree (as in the other GBDTs GradientBoostingClassifier and GradientBoostingRegressor) requires sorting the samples at each node (for each feature). Sorting is needed so that the potential gain of a split point can be computed efficiently. Splitting a single node has thus a complexity of \(\mathcal{O}(n_\text{features} \times n \log(n))\) where \(n\) is the number of samples at the node. HistGradientBoostingClassifier and HistGradientBoostingRegressor, in contrast, do not require sorting the feature values and instead use a data-structure called a histogram, where the samples are implicitly ordered. Building a histogram has a \(\mathcal{O}(n)\) complexity, so the node splitting procedure has a \(\mathcal{O}(n_\text{features} \times n)\) complexity, much smaller than the previous one. In addition, instead of considering \(n\) split points, we consider only max_bins split points, which might be much smaller. In order to build histograms, the input data X needs to be binned into integer-valued bins. This binning procedure does require sorting the feature values, but it only happens once at the very beginning of the boosting process (not at each node, like in GradientBoostingClassifier and GradientBoostingRegressor). Finally, many parts of the implementation of HistGradientBoostingClassifier and HistGradientBoostingRegressor are parallelized. 
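The binning-plus-histogram idea described above can be sketched in pure Python. This is illustrative only: the real implementation is Cython with OpenMP, computes bin thresholds from quantiles rather than the equal-width bins used here, and uses hessian-aware gains; all names below are ours.

```python
# Sketch: accumulate per-bin gradient statistics once (O(n)), then evaluate
# every candidate split from the histogram alone (O(max_bins)), instead of
# sorting the samples at every node.

def bin_feature(values, n_bins=4):
    """Map real values to integer bins (equal-width here, for simplicity)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def best_split(binned, gradients, n_bins=4):
    """Pick the bin boundary maximizing a standard gain (unit hessians assumed)."""
    grad_sum = [0.0] * n_bins
    count = [0] * n_bins
    for b, g in zip(binned, gradients):   # one O(n) pass builds the histogram
        grad_sum[b] += g
        count[b] += 1
    total_g, total_n = sum(grad_sum), sum(count)
    best = (float("-inf"), None)
    left_g = left_n = 0.0
    for split in range(n_bins - 1):       # O(max_bins) scan over boundaries
        left_g += grad_sum[split]
        left_n += count[split]
        right_g, right_n = total_g - left_g, total_n - left_n
        if left_n == 0 or right_n == 0:
            continue
        gain = left_g**2 / left_n + right_g**2 / right_n - total_g**2 / total_n
        best = max(best, (gain, split))
    return best

x = [0.1, 0.2, 0.3, 2.5, 2.6, 2.7]
g = [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]     # residual-like gradients
# Any boundary between the two clusters yields the same (maximal) gain of 6.0.
print(best_split(bin_feature(x), g))
```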
1.11.1.2. GradientBoostingClassifier and GradientBoostingRegressor# The usage and the parameters of GradientBoostingClassifier and GradientBoostingRegressor are described below. The two most important parameters of these estimators are n_estimators and learning_rate. GradientBoostingClassifier supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners: >>> from sklearn.datasets import make_hastie_10_2 >>> from sklearn.ensemble import GradientBoostingClassifier >>> X, y = make_hastie_10_2(random_state=0) >>> X_train, X_test = X[:2000], X[2000:] >>> y_train, y_test = y[:2000], y[2000:] >>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, ... max_depth=1, random_state=0).fit(X_train, y_train) >>> clf.score(X_test, y_test) The number of weak learners (i.e. regression trees) is controlled by the parameter n_estimators; the size of each tree can be controlled either by setting the tree depth via max_depth or by setting the number of leaf nodes via max_leaf_nodes. The learning_rate is a hyper-parameter in the range (0.0, 1.0] that controls overfitting via shrinkage. Classification with more than 2 classes requires the induction of n_classes regression trees at each iteration; thus, the total number of induced trees equals n_classes * n_estimators. For datasets with a large number of classes we strongly recommend using HistGradientBoostingClassifier as an alternative to GradientBoostingClassifier.
GradientBoostingRegressor supports a number of different loss functions for regression which can be specified via the argument loss; the default loss function for regression is squared error ('squared_error'): >>> import numpy as np >>> from sklearn.metrics import mean_squared_error >>> from sklearn.datasets import make_friedman1 >>> from sklearn.ensemble import GradientBoostingRegressor >>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0) >>> X_train, X_test = X[:200], X[200:] >>> y_train, y_test = y[:200], y[200:] >>> est = GradientBoostingRegressor( ... n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, ... loss='squared_error' ... ).fit(X_train, y_train) >>> mean_squared_error(y_test, est.predict(X_test)) The figure below shows the results of applying GradientBoostingRegressor with least squares loss and 500 base learners to the diabetes dataset (sklearn.datasets.load_diabetes). The plot shows the train and test error at each iteration. The train error at each iteration is stored in the train_score_ attribute of the gradient boosting model. The test error at each iteration can be obtained via the staged_predict method which returns a generator that yields the predictions at each stage. Plots like these can be used to determine the optimal number of trees (i.e. n_estimators) by early stopping. 1.11.1.2.1. Fitting additional weak-learners# Both GradientBoostingRegressor and GradientBoostingClassifier support warm_start=True which allows you to add more estimators to an already fitted model. >>> import numpy as np >>> from sklearn.metrics import mean_squared_error >>> from sklearn.datasets import make_friedman1 >>> from sklearn.ensemble import GradientBoostingRegressor >>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0) >>> X_train, X_test = X[:200], X[200:] >>> y_train, y_test = y[:200], y[200:] >>> est = GradientBoostingRegressor( ... n_estimators=100, learning_rate=0.1, max_depth=1, random_state=0, ... loss='squared_error' ...
) >>> est = est.fit(X_train, y_train) # fit with 100 trees >>> mean_squared_error(y_test, est.predict(X_test)) >>> _ = est.set_params(n_estimators=200, warm_start=True) # set warm_start and increase num of trees >>> _ = est.fit(X_train, y_train) # fit additional 100 trees to est >>> mean_squared_error(y_test, est.predict(X_test)) 1.11.1.2.2. Controlling the tree size# The size of the regression tree base learners defines the level of variable interactions that can be captured by the gradient boosting model. In general, a tree of depth h can capture interactions of order h. There are two ways in which the size of the individual regression trees can be controlled. If you specify max_depth=h then complete binary trees of depth h will be grown. Such trees will have (at most) 2**h leaf nodes and 2**h - 1 split nodes. Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter max_leaf_nodes. In this case, trees will be grown using best-first search where nodes with the highest improvement in impurity will be expanded first. A tree with max_leaf_nodes=k has k - 1 split nodes and thus can model interactions of up to order max_leaf_nodes - 1. We found that max_leaf_nodes=k gives comparable results to max_depth=k-1 but is significantly faster to train at the expense of a slightly higher training error. The parameter max_leaf_nodes corresponds to the variable J in the chapter on gradient boosting in [Friedman2001] and is related to the parameter interaction.depth in R’s gbm package where max_leaf_nodes == interaction.depth + 1. 1.11.1.2.3. Mathematical formulation# We first present GBRT for regression, and then detail the classification case. GBRT regressors are additive models whose prediction \(\hat{y}_i\) for a given input \(x_i\) is of the following form: \[\hat{y}_i = F_M(x_i) = \sum_{m=1}^{M} h_m(x_i)\] where the \(h_m\) are estimators called weak learners in the context of boosting.
Gradient Tree Boosting uses decision tree regressors of fixed size as weak learners. The constant M corresponds to the n_estimators parameter. Similar to other boosting algorithms, a GBRT is built in a greedy fashion: \[F_m(x) = F_{m-1}(x) + h_m(x),\] where the newly added tree \(h_m\) is fitted in order to minimize a sum of losses \(L_m\), given the previous ensemble \(F_{m-1}\): \[h_m = \arg\min_{h} L_m = \arg\min_{h} \sum_{i=1}^{n} l(y_i, F_{m-1}(x_i) + h(x_i)),\] where \(l(y_i, F(x_i))\) is defined by the loss parameter, detailed in the next section. By default, the initial model \(F_{0}\) is chosen as the constant that minimizes the loss: for a least-squares loss, this is the empirical mean of the target values. The initial model can also be specified via the init argument. Using a first-order Taylor approximation, the value of \(l\) can be approximated as follows: \[l(y_i, F_{m-1}(x_i) + h_m(x_i)) \approx l(y_i, F_{m-1}(x_i)) + h_m(x_i) \left[ \frac{\partial l(y_i, F(x_i))}{\partial F(x_i)} \right]_{F=F_{m - 1}}.\] Briefly, a first-order Taylor approximation says that \(l(z) \approx l(a) + (z - a) \frac{\partial l}{\partial z}(a)\). Here, \(z\) corresponds to \(F_{m - 1}(x_i) + h_m(x_i)\), and \(a\) corresponds to \(F_{m-1}(x_i)\) The quantity \(\left[ \frac{\partial l(y_i, F(x_i))}{\partial F(x_i)} \right]_{F=F_{m - 1}}\) is the derivative of the loss with respect to its second parameter, evaluated at \(F_{m-1}(x)\). It is easy to compute for any given \(F_{m - 1}(x_i)\) in a closed form since the loss is differentiable. We will denote it by \(g_i\). Removing the constant terms, we have: \[h_m \approx \arg\min_{h} \sum_{i=1}^{n} h(x_i) g_i\] This is minimized if \(h(x_i)\) is fitted to predict a value that is proportional to the negative gradient \(-g_i\). Therefore, at each iteration, the estimator \(h_m\) is fitted to predict the negative gradients of the samples. The gradients are updated at each iteration. 
This can be considered as some kind of gradient descent in a functional space. For some losses, e.g. 'absolute_error' where the gradients are \(\pm 1\), the values predicted by a fitted \(h_m\) are not accurate enough: the tree can only output integer values. As a result, the leaves values of the tree \(h_m\) are modified once the tree is fitted, such that the leaves values minimize the loss \(L_m\). The update is loss-dependent: for the absolute error loss, the value of a leaf is updated to the median of the samples in that leaf. Gradient boosting for classification is very similar to the regression case. However, the sum of the trees \(F_M(x_i) = \sum_m h_m(x_i)\) is not homogeneous to a prediction: it cannot be a class, since the trees predict continuous values. The mapping from the value \(F_M(x_i)\) to a class or a probability is loss-dependent. For the log-loss, the probability that \(x_i\) belongs to the positive class is modeled as \(p(y_i = 1 | x_i) = \sigma(F_M(x_i))\) where \(\sigma\) is the sigmoid or expit function. For multiclass classification, K trees (for K classes) are built at each of the \(M\) iterations. The probability that \(x_i\) belongs to class k is modeled as a softmax of the \(F_{M,k}(x_i)\) Note that even for a classification task, the \(h_m\) sub-estimator is still a regressor, not a classifier. This is because the sub-estimators are trained to predict (negative) gradients, which are always continuous quantities. 1.11.1.2.4. Loss Functions# The following loss functions are supported and can be specified using the parameter loss: • Squared error ('squared_error'): The natural choice for regression due to its superior computational properties. The initial model is given by the mean of the target values. • Absolute error ('absolute_error'): A robust loss function for regression. The initial model is given by the median of the target values. 
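The recursion above can be condensed into a from-scratch sketch for the squared-error loss, where the negative gradients are plain residuals and the weak learners are depth-1 stumps. This is illustrative code in our own names, not scikit-learn's implementation; it also includes the learning-rate factor \(\nu\) discussed under Shrinkage via learning rate.

```python
# F_0 is the target mean (the constant minimizing squared error); each h_m is
# a stump fitted to the residuals y - F_{m-1}(x), added with a shrinkage nu.

def fit_stump(x, residuals):
    """Best single-threshold split, predicting the mean residual on each side."""
    best = None
    for t in sorted(set(x))[:-1]:
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi, t=t, lmean=lmean, rmean=rmean: lmean if xi <= t else rmean

def fit_gbrt(x, y, n_estimators=50, learning_rate=0.1):
    f0 = sum(y) / len(y)
    stumps, pred = [], [f0] * len(x)
    for _ in range(n_estimators):
        residuals = [yi - pi for yi, pi in zip(y, pred)]   # negative gradients
        h = fit_stump(x, residuals)
        stumps.append(h)
        pred = [pi + learning_rate * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + learning_rate * sum(h(xi) for h in stumps)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]        # a step function
model = fit_gbrt(x, y)
print(round(model(0.5), 2), round(model(4.5), 2))  # approaches 0.0 and 1.0
```

With shrinkage 0.1 each round removes 10% of the remaining residual, so after 50 rounds the residual has decayed by a factor \(0.9^{50} \approx 0.005\).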
• Huber ('huber'): Another robust loss function that combines least squares and least absolute deviation; use alpha to control the sensitivity with regards to outliers (see [Friedman2001] for more details). • Quantile ('quantile'): A loss function for quantile regression. Use 0 < alpha < 1 to specify the quantile. This loss function can be used to create prediction intervals (see Prediction Intervals for Gradient Boosting Regression). • Binary log-loss ('log-loss'): The binomial negative log-likelihood loss function for binary classification. It provides probability estimates. The initial model is given by the log odds-ratio. • Multi-class log-loss ('log-loss'): The multinomial negative log-likelihood loss function for multi-class classification with n_classes mutually exclusive classes. It provides probability estimates. The initial model is given by the prior probability of each class. At each iteration n_classes regression trees have to be constructed which makes GBRT rather inefficient for data sets with a large number of classes. • Exponential loss ('exponential'): The same loss function as AdaBoostClassifier. Less robust to mislabeled examples than 'log-loss'; can only be used for binary classification. 1.11.1.2.5. Shrinkage via learning rate# [Friedman2001] proposed a simple regularization strategy that scales the contribution of each weak learner by a constant factor \(\nu\): \[F_m(x) = F_{m-1}(x) + \nu h_m(x)\] The parameter \(\nu\) is also called the learning rate because it scales the step length of the gradient descent procedure; it can be set via the learning_rate parameter. The parameter learning_rate strongly interacts with the parameter n_estimators, the number of weak learners to fit. Smaller values of learning_rate require larger numbers of weak learners to maintain a constant training error. Empirical evidence suggests that small values of learning_rate favor better test error. [HTF] recommend to set the learning rate to a small constant (e.g.
learning_rate <= 0.1) and choose n_estimators large enough that early stopping applies; see Early stopping in Gradient Boosting for a more detailed discussion of the interaction between learning_rate and n_estimators, and see [R2007]. 1.11.1.2.6. Subsampling# [Friedman2002] proposed stochastic gradient boosting, which combines gradient boosting with bootstrap averaging (bagging). At each iteration the base learner is trained on a fraction subsample of the available training data. The subsample is drawn without replacement. A typical value of subsample is 0.5. The figure below illustrates the effect of shrinkage and subsampling on the goodness-of-fit of the model. We can clearly see that shrinkage outperforms no-shrinkage. Subsampling with shrinkage can further increase the accuracy of the model. Subsampling without shrinkage, on the other hand, does poorly. Another strategy to reduce the variance is by subsampling the features, analogous to the random splits in RandomForestClassifier. The number of subsampled features can be controlled via the max_features parameter. Using a small max_features value can significantly decrease the runtime. Stochastic gradient boosting makes it possible to compute out-of-bag estimates of the test deviance by computing the improvement in deviance on the examples that are not included in the bootstrap sample (i.e. the out-of-bag examples). The improvements are stored in the attribute oob_improvement_. oob_improvement_[i] holds the improvement in terms of the loss on the OOB samples if you add the i-th stage to the current predictions. Out-of-bag estimates can be used for model selection, for example to determine the optimal number of iterations. OOB estimates are usually very pessimistic; thus we recommend using cross-validation instead, and only using OOB if cross-validation is too time consuming. 1.11.1.2.7. Interpretation with feature importance# Individual decision trees can be interpreted easily by simply visualizing the tree structure.
Gradient boosting models, however, comprise hundreds of regression trees, thus they cannot be easily interpreted by visual inspection of the individual trees. Fortunately, a number of techniques have been proposed to summarize and interpret gradient boosting models. Often features do not contribute equally to predict the target response; in many situations the majority of the features are in fact irrelevant. When interpreting a model, the first question usually is: what are those important features and how do they contribute to predicting the target response? Individual decision trees intrinsically perform feature selection by selecting appropriate split points. This information can be used to measure the importance of each feature; the basic idea is: the more often a feature is used in the split points of a tree the more important that feature is. This notion of importance can be extended to decision tree ensembles by simply averaging the impurity-based feature importance of each tree (see Feature importance evaluation for more details). The feature importance scores of a fitted gradient boosting model can be accessed via the feature_importances_ property: >>> from sklearn.datasets import make_hastie_10_2 >>> from sklearn.ensemble import GradientBoostingClassifier >>> X, y = make_hastie_10_2(random_state=0) >>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, ... max_depth=1, random_state=0).fit(X, y) >>> clf.feature_importances_ array([0.10..., 0.10..., 0.11..., ... Note that this computation of feature importance is based on entropy, and it is distinct from sklearn.inspection.permutation_importance which is based on permutation of the features. 1.11.2. Random forests and other randomized tree ensembles# The sklearn.ensemble module includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques [B1998] specifically designed for trees.
This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers. Like other classifiers, forest classifiers have to be fitted with two arrays: a sparse or dense array X of shape (n_samples, n_features) holding the training samples, and an array Y of shape (n_samples,) holding the target values (class labels) for the training samples: >>> from sklearn.ensemble import RandomForestClassifier >>> X = [[0, 0], [1, 1]] >>> Y = [0, 1] >>> clf = RandomForestClassifier(n_estimators=10) >>> clf = clf.fit(X, Y) Like decision trees, forests of trees also extend to multi-output problems (if Y is an array of shape (n_samples, n_outputs)). 1.11.2.1. Random Forests# In random forests (see RandomForestClassifier and RandomForestRegressor classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. Furthermore, when splitting each node during the construction of a tree, the best split is found through an exhaustive search of the feature values of either all input features or a random subset of size max_features. (See the parameter tuning guidelines for more details.) The purpose of these two sources of randomness is to decrease the variance of the forest estimator. Indeed, individual decision trees typically exhibit high variance and tend to overfit. The injected randomness in forests yields decision trees with somewhat decoupled prediction errors. By taking an average of those predictions, some errors can cancel out. Random forests achieve a reduced variance by combining diverse trees, sometimes at the cost of a slight increase in bias. In practice the variance reduction is often significant hence yielding an overall better model.
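The variance-reduction effect described above can be checked directly by cross-validating a single fully grown tree against a forest of such trees. The sketch below uses a synthetic dataset and illustrative parameter choices (these values are assumptions, not from the text above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

# A single deep tree: high variance, tends to overfit.
tree = DecisionTreeClassifier(random_state=0)
tree_score = cross_val_score(tree, X, y, cv=5).mean()

# A forest of such trees: bootstrap samples plus random feature subsets
# decorrelate the trees, so averaging cancels part of the error.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest_score = cross_val_score(forest, X, y, cv=5).mean()

print(f"single tree: {tree_score:.3f}  forest: {forest_score:.3f}")
```

On data like this, the forest typically scores noticeably higher than the lone tree, with the gap shrinking as the single tree is regularized.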
In contrast to the original publication [B2001], the scikit-learn implementation combines classifiers by averaging their probabilistic prediction, instead of letting each classifier vote for a single class. A competitive alternative to random forests are Histogram-Based Gradient Boosting (HGBT) models: • Building trees: Random forests typically rely on deep trees (that overfit individually), which use substantial computational resources, as they require several splittings and evaluations of candidate splits. Boosting models build shallow trees (that underfit individually) which are faster to fit and predict. • Sequential boosting: In HGBT, the decision trees are built sequentially, where each tree is trained to correct the errors made by the previous ones. This allows them to iteratively improve the model’s performance using relatively few trees. In contrast, random forests use a majority vote to predict the outcome, which can require a larger number of trees to achieve the same level of accuracy. • Efficient binning: HGBT uses an efficient binning algorithm that can handle large datasets with a high number of features. The binning algorithm can pre-process the data to speed up the subsequent tree construction (see Why it’s faster). In contrast, the scikit-learn implementation of random forests does not use binning and relies on exact splitting, which can be computationally expensive. Overall, the computational cost of HGBT versus RF depends on the specific characteristics of the dataset and the modeling task. It’s a good idea to try both models and compare their performance and computational efficiency on your specific problem to determine which model is the best fit. 1.11.2.2. Extremely Randomized Trees# In extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor classes), randomness goes one step further in the way splits are computed.
As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature and the best of these randomly-generated thresholds is picked as the splitting rule. This usually makes it possible to reduce the variance of the model a bit more, at the expense of a slightly greater increase in bias: >>> from sklearn.model_selection import cross_val_score >>> from sklearn.datasets import make_blobs >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.ensemble import ExtraTreesClassifier >>> from sklearn.tree import DecisionTreeClassifier >>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100, ... random_state=0) >>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2, ... random_state=0) >>> scores = cross_val_score(clf, X, y, cv=5) >>> scores.mean() >>> clf = RandomForestClassifier(n_estimators=10, max_depth=None, ... min_samples_split=2, random_state=0) >>> scores = cross_val_score(clf, X, y, cv=5) >>> scores.mean() >>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None, ... min_samples_split=2, random_state=0) >>> scores = cross_val_score(clf, X, y, cv=5) >>> scores.mean() > 0.999 1.11.2.3. Parameters# The main parameters to adjust when using these methods are n_estimators and max_features. The former is the number of trees in the forest. The larger the better, but also the longer it will take to compute. In addition, note that results will stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node. The lower the greater the reduction of variance, but also the greater the increase in bias.
Empirical good default values are max_features=1.0 or equivalently max_features=None (always considering all features instead of a random subset) for regression problems, and max_features="sqrt" (using a random subset of size sqrt(n_features)) for classification tasks (where n_features is the number of features in the data). The default value of max_features=1.0 is equivalent to bagged trees and more randomness can be achieved by setting smaller values (e.g. 0.3 is a typical default in the literature). Good results are often achieved when setting max_depth=None in combination with min_samples_split=2 (i.e., when fully developing the trees). Bear in mind though that these values are usually not optimal, and might result in models that consume a lot of RAM. The best parameter values should always be cross-validated. In addition, note that in random forests, bootstrap samples are used by default (bootstrap=True) while the default strategy for extra-trees is to use the whole dataset (bootstrap=False). When using bootstrap sampling the generalization error can be estimated on the left out or out-of-bag samples. This can be enabled by setting oob_score=True. The size of the model with the default parameters is \(O( M * N * log (N) )\), where \(M\) is the number of trees and \(N\) is the number of samples. In order to reduce the size of the model, you can change these parameters: min_samples_split, max_leaf_nodes, max_depth and min_samples_leaf. 1.11.2.4. Parallelization# Finally, this module also features the parallel construction of the trees and the parallel computation of the predictions through the n_jobs parameter. If n_jobs=k then computations are partitioned into k jobs, and run on k cores of the machine. If n_jobs=-1 then all cores available on the machine are used. Note that because of inter-process communication overhead, the speedup might not be linear (i.e., using k jobs will unfortunately not be k times as fast). 
Significant speedup can still be achieved though when building a large number of trees, or when building a single tree requires a fair amount of time (e.g., on large datasets). • L. Breiman, “Random Forests”, Machine Learning, 45(1), 5-32, 2001. • L. Breiman, “Arcing Classifiers”, Annals of Statistics, 1998. • P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees”, Machine Learning, 63(1), 3-42, 2006. 1.11.2.5. Feature importance evaluation# The relative rank (i.e. depth) of a feature used as a decision node in a tree can be used to assess the relative importance of that feature with respect to the predictability of the target variable. Features used at the top of the tree contribute to the final prediction decision of a larger fraction of the input samples. The expected fraction of the samples they contribute to can thus be used as an estimate of the relative importance of the features. In scikit-learn, the fraction of samples a feature contributes to is combined with the decrease in impurity from splitting them to create a normalized estimate of the predictive power of that feature. By averaging the estimates of predictive ability over several randomized trees one can reduce the variance of such an estimate and use it for feature selection. This is known as the mean decrease in impurity, or MDI. Refer to [L2014] for more information on MDI and feature importance evaluation with Random Forests. The impurity-based feature importances computed on tree-based models suffer from two flaws that can lead to misleading conclusions. First, they are computed on statistics derived from the training dataset and therefore do not necessarily inform us on which features are most important to make good predictions on a held-out dataset. Secondly, they favor high cardinality features, that is, features with many unique values. Permutation feature importance is an alternative to impurity-based feature importance that does not suffer from these flaws.
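The two approaches can be compared directly on a fitted forest. The sketch below (synthetic data and parameter values are illustrative assumptions) computes both the impurity-based MDI scores and the permutation importances for the same model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Impurity-based (MDI) importances: derived from training-time splits,
# normalized to sum to 1.
mdi = clf.feature_importances_

# Permutation importances: drop in score when one feature is shuffled.
# Measured here on the training data for brevity; use held-out data in
# practice to avoid the training-set bias discussed above.
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(mdi.round(2), perm.importances_mean.round(2))
```

The two rankings often agree on clearly informative features but can diverge on high-cardinality or correlated ones, which is exactly the failure mode described in the text.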
These two methods of obtaining feature importance are explored in: Permutation Importance vs Random Forest Feature Importance (MDI). In practice those estimates are stored as an attribute named feature_importances_ on the fitted model. This is an array with shape (n_features,) whose values are positive and sum to 1.0. The higher the value, the more important the contribution of the matching feature to the prediction function. 1.11.2.6. Totally Random Trees Embedding# RandomTreesEmbedding implements an unsupervised transformation of the data. Using a forest of completely random trees, RandomTreesEmbedding encodes the data by the indices of the leaves a data point ends up in. This index is then encoded in a one-of-K manner, leading to a high dimensional, sparse binary coding. This coding can be computed very efficiently and can then be used as a basis for other learning tasks. The size and sparsity of the code can be influenced by choosing the number of trees and the maximum depth per tree. For each tree in the ensemble, the coding contains one entry of one. The size of the coding is at most n_estimators * 2 ** max_depth, the maximum number of leaves in the forest. As neighboring data points are more likely to lie within the same leaf of a tree, the transformation performs an implicit, non-parametric density estimation. See also Manifold learning techniques can also be useful to derive non-linear representations of feature space, though these approaches focus on dimensionality reduction. 1.11.2.7. Fitting additional trees# RandomForest, Extra-Trees and RandomTreesEmbedding estimators all support warm_start=True which allows you to add more trees to an already fitted model.
>>> from sklearn.datasets import make_classification >>> from sklearn.ensemble import RandomForestClassifier >>> X, y = make_classification(n_samples=100, random_state=1) >>> clf = RandomForestClassifier(n_estimators=10) >>> clf = clf.fit(X, y) # fit with 10 trees >>> len(clf.estimators_) >>> # set warm_start and increase num of estimators >>> _ = clf.set_params(n_estimators=20, warm_start=True) >>> _ = clf.fit(X, y) # fit additional 10 trees >>> len(clf.estimators_) When random_state is also set, the internal random state is also preserved between fit calls. This means that training a model once with n estimators is the same as building the model iteratively via multiple fit calls, where the final number of estimators is equal to n. >>> clf = RandomForestClassifier(n_estimators=20) # set `n_estimators` to 10 + 10 >>> _ = clf.fit(X, y) # fit `estimators_` will be the same as `clf` above Note that this differs from the usual behavior of random_state in that it does not result in the same result across different calls. 1.11.3. Bagging meta-estimator# In ensemble algorithms, bagging methods form a class of algorithms which build several instances of a black-box estimator on random subsets of the original training set and then aggregate their individual predictions to form a final prediction. These methods are used as a way to reduce the variance of a base estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it. In many cases, bagging methods constitute a very simple way to improve with respect to a single model, without making it necessary to adapt the underlying base algorithm. As they provide a way to reduce overfitting, bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees). 
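The point that bagging works best with strong, low-bias models can be sketched by bagging fully developed trees versus bagging decision stumps on the same data (the dataset and parameters below are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)

# Bagging fully developed (strong, high-variance) trees...
deep_bag = BaggingClassifier(DecisionTreeClassifier(max_depth=None),
                             n_estimators=50, random_state=0)

# ...versus bagging decision stumps (weak, high-bias learners), where
# averaging cannot remove the shared bias.
stump_bag = BaggingClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=50, random_state=0)

deep_score = cross_val_score(deep_bag, X, y, cv=5).mean()
stump_score = cross_val_score(stump_bag, X, y, cv=5).mean()
print(f"bagged deep trees: {deep_score:.3f}  bagged stumps: {stump_score:.3f}")
```

Bagging reduces variance but not bias, so the deep-tree ensemble benefits far more than the stump ensemble; boosting is the better fit for weak learners.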
Bagging methods come in many flavours but mostly differ from each other by the way they draw random subsets of the training set. In scikit-learn, bagging methods are offered as a unified BaggingClassifier meta-estimator (resp. BaggingRegressor), taking as input a user-specified estimator along with parameters specifying the strategy to draw random subsets. In particular, max_samples and max_features control the size of the subsets (in terms of samples and features), while bootstrap and bootstrap_features control whether samples and features are drawn with or without replacement. When using a subset of the available samples the generalization accuracy can be estimated with the out-of-bag samples by setting oob_score=True. As an example, the snippet below illustrates how to instantiate a bagging ensemble of KNeighborsClassifier estimators, each built on random subsets of 50% of the samples and 50% of the features: >>> from sklearn.ensemble import BaggingClassifier >>> from sklearn.neighbors import KNeighborsClassifier >>> bagging = BaggingClassifier(KNeighborsClassifier(), ... max_samples=0.5, max_features=0.5) • L. Breiman, “Pasting small votes for classification in large databases and on-line”, Machine Learning, 36(1), 85-103, 1999. • L. Breiman, “Bagging predictors”, Machine Learning, 24(2), 123-140, 1996. • T. Ho, “The random subspace method for constructing decision forests”, Pattern Analysis and Machine Intelligence, 20(8), 832-844, 1998. • G. Louppe and P. Geurts, “Ensembles on Random Patches”, Machine Learning and Knowledge Discovery in Databases, 346-361, 2012. 1.11.4. Voting Classifier# The idea behind the VotingClassifier is to combine conceptually different machine learning classifiers and use a majority vote or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well performing models in order to balance out their individual weaknesses. 1.11.4.1.
Majority Class Labels (Majority/Hard Voting)# In majority voting, the predicted class label for a particular sample is the class label that represents the majority (mode) of the class labels predicted by each individual classifier. E.g., if the prediction for a given sample is • classifier 1 -> class 1 • classifier 2 -> class 1 • classifier 3 -> class 2 the VotingClassifier (with voting='hard') would classify the sample as “class 1” based on the majority class label. In the case of a tie, the VotingClassifier will select the class based on the ascending sort order. E.g., in the following scenario • classifier 1 -> class 2 • classifier 2 -> class 1 the class label 1 will be assigned to the sample. 1.11.4.2. Usage# The following example shows how to fit the majority rule classifier: >>> from sklearn import datasets >>> from sklearn.model_selection import cross_val_score >>> from sklearn.linear_model import LogisticRegression >>> from sklearn.naive_bayes import GaussianNB >>> from sklearn.ensemble import RandomForestClassifier >>> from sklearn.ensemble import VotingClassifier >>> iris = datasets.load_iris() >>> X, y = iris.data[:, 1:3], iris.target >>> clf1 = LogisticRegression(random_state=1) >>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1) >>> clf3 = GaussianNB() >>> eclf = VotingClassifier( ... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='hard') >>> for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']): ... scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5) ... print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label)) Accuracy: 0.95 (+/- 0.04) [Logistic Regression] Accuracy: 0.94 (+/- 0.04) [Random Forest] Accuracy: 0.91 (+/- 0.04) [naive Bayes] Accuracy: 0.95 (+/- 0.04) [Ensemble] 1.11.4.3.
Weighted Average Probabilities (Soft Voting)# In contrast to majority voting (hard voting), soft voting returns the class label as argmax of the sum of predicted probabilities. Specific weights can be assigned to each classifier via the weights parameter. When weights are provided, the predicted class probabilities for each classifier are collected, multiplied by the classifier weight, and averaged. The final class label is then derived from the class label with the highest average probability. To illustrate this with a simple example, let’s assume we have 3 classifiers and a 3-class classification problem where we assign equal weights to all classifiers: w1=1, w2=1, w3=1. The weighted average probabilities for a sample would then be calculated as follows:

classifier        class 1    class 2    class 3
classifier 1      w1 * 0.2   w1 * 0.5   w1 * 0.3
classifier 2      w2 * 0.6   w2 * 0.3   w2 * 0.1
classifier 3      w3 * 0.3   w3 * 0.4   w3 * 0.3
weighted average  0.37       0.4        0.23

Here, the predicted class label is 2, since it has the highest average probability. The following example illustrates how the decision regions may change when a soft VotingClassifier is used based on a linear Support Vector Machine, a Decision Tree, and a K-nearest neighbor classifier: >>> from sklearn import datasets >>> from sklearn.tree import DecisionTreeClassifier >>> from sklearn.neighbors import KNeighborsClassifier >>> from sklearn.svm import SVC >>> from itertools import product >>> from sklearn.ensemble import VotingClassifier >>> # Loading some example data >>> iris = datasets.load_iris() >>> X = iris.data[:, [0, 2]] >>> y = iris.target >>> # Training classifiers >>> clf1 = DecisionTreeClassifier(max_depth=4) >>> clf2 = KNeighborsClassifier(n_neighbors=7) >>> clf3 = SVC(kernel='rbf', probability=True) >>> eclf = VotingClassifier(estimators=[('dt', clf1), ('knn', clf2), ('svc', clf3)], ...
voting='soft', weights=[2, 1, 2]) >>> clf1 = clf1.fit(X, y) >>> clf2 = clf2.fit(X, y) >>> clf3 = clf3.fit(X, y) >>> eclf = eclf.fit(X, y) 1.11.4.4. Usage# In order to predict the class labels based on the predicted class probabilities (the scikit-learn estimators in the VotingClassifier must support the predict_proba method): >>> eclf = VotingClassifier( ... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='soft' ... ) Optionally, weights can be provided for the individual classifiers: >>> eclf = VotingClassifier( ... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='soft', weights=[2,5,1] ... ) Using the VotingClassifier with GridSearchCV# The VotingClassifier can also be used together with GridSearchCV in order to tune the hyperparameters of the individual estimators: >>> from sklearn.model_selection import GridSearchCV >>> clf1 = LogisticRegression(random_state=1) >>> clf2 = RandomForestClassifier(random_state=1) >>> clf3 = GaussianNB() >>> eclf = VotingClassifier( ... estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], ... voting='soft' ... ) >>> params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]} >>> grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5) >>> grid = grid.fit(iris.data, iris.target) 1.11.5. Voting Regressor# The idea behind the VotingRegressor is to combine conceptually different machine learning regressors and return the average predicted values. Such a regressor can be useful for a set of equally well performing models in order to balance out their individual weaknesses. 1.11.5.1.
Usage# The following example shows how to fit the VotingRegressor: >>> from sklearn.datasets import load_diabetes >>> from sklearn.ensemble import GradientBoostingRegressor >>> from sklearn.ensemble import RandomForestRegressor >>> from sklearn.linear_model import LinearRegression >>> from sklearn.ensemble import VotingRegressor >>> # Loading some example data >>> X, y = load_diabetes(return_X_y=True) >>> # Training classifiers >>> reg1 = GradientBoostingRegressor(random_state=1) >>> reg2 = RandomForestRegressor(random_state=1) >>> reg3 = LinearRegression() >>> ereg = VotingRegressor(estimators=[('gb', reg1), ('rf', reg2), ('lr', reg3)]) >>> ereg = ereg.fit(X, y) 1.11.6. Stacked generalization# Stacked generalization is a method for combining estimators to reduce their biases [W1992] [HTF]. More precisely, the predictions of each individual estimator are stacked together and used as input to a final estimator to compute the prediction. This final estimator is trained through cross-validation. The StackingClassifier and StackingRegressor provide such strategies which can be applied to classification and regression problems. The estimators parameter corresponds to the list of the estimators which are stacked together in parallel on the input data. It should be given as a list of names and estimators: >>> from sklearn.linear_model import RidgeCV, LassoCV >>> from sklearn.neighbors import KNeighborsRegressor >>> estimators = [('ridge', RidgeCV()), ... ('lasso', LassoCV(random_state=42)), ... ('knr', KNeighborsRegressor(n_neighbors=20, ... metric='euclidean'))] The final_estimator will use the predictions of the estimators as input. It needs to be a classifier or a regressor when using StackingClassifier or StackingRegressor, respectively: >>> from sklearn.ensemble import GradientBoostingRegressor >>> from sklearn.ensemble import StackingRegressor >>> final_estimator = GradientBoostingRegressor( ... 
n_estimators=25, subsample=0.5, min_samples_leaf=25, max_features=1, ... random_state=42) >>> reg = StackingRegressor( ... estimators=estimators, ... final_estimator=final_estimator) To train the estimators and final_estimator, the fit method needs to be called on the training data: >>> from sklearn.datasets import load_diabetes >>> X, y = load_diabetes(return_X_y=True) >>> from sklearn.model_selection import train_test_split >>> X_train, X_test, y_train, y_test = train_test_split(X, y, ... random_state=42) >>> reg.fit(X_train, y_train) During training, the estimators are fitted on the whole training data X_train. They will be used when calling predict or predict_proba. To generalize and avoid over-fitting, the final_estimator is trained on out-of-sample predictions using sklearn.model_selection.cross_val_predict internally. For StackingClassifier, note that the output of the estimators is controlled by the parameter stack_method, which is called for each estimator. This parameter is either a string naming an estimator method, or 'auto', which will automatically identify an available method, tested in the order of preference: predict_proba, decision_function and predict. A StackingRegressor and StackingClassifier can be used as any other regressor or classifier, exposing a predict, predict_proba, or decision_function method, e.g.: >>> y_pred = reg.predict(X_test) >>> from sklearn.metrics import r2_score >>> print('R2 score: {:.2f}'.format(r2_score(y_test, y_pred))) R2 score: 0.53 Note that it is also possible to get the output of the stacked estimators using the transform method: >>> reg.transform(X_test[:5]) array([[142..., 138..., 146...], [179..., 182..., 151...], [139..., 132..., 158...], [286..., 292..., 225...], [126..., 124..., 164...]]) In practice, a stacking predictor predicts as well as the best predictor of the base layer and even sometimes outperforms it by combining the different strengths of these predictors.
However, training a stacking predictor is computationally expensive. For StackingClassifier, when using stack_method_='predict_proba', the first column is dropped when the problem is a binary classification problem. Indeed, both probability columns predicted by each estimator are perfectly collinear. Multiple stacking layers can be achieved by assigning final_estimator to a StackingClassifier or StackingRegressor: >>> final_layer_rfr = RandomForestRegressor( ... n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42) >>> final_layer_gbr = GradientBoostingRegressor( ... n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42) >>> final_layer = StackingRegressor( ... estimators=[('rf', final_layer_rfr), ... ('gbrt', final_layer_gbr)], ... final_estimator=RidgeCV() ... ) >>> multi_layer_regressor = StackingRegressor( ... estimators=[('ridge', RidgeCV()), ... ('lasso', LassoCV(random_state=42)), ... ('knr', KNeighborsRegressor(n_neighbors=20, ... metric='euclidean'))], ... final_estimator=final_layer ... ) >>> multi_layer_regressor.fit(X_train, y_train) >>> print('R2 score: {:.2f}' ... .format(multi_layer_regressor.score(X_test, y_test))) R2 score: 0.53 • [W1992] D. H. Wolpert, “Stacked generalization”, Neural Networks, 5(2), 241-259, 1992. 1.11.7. AdaBoost# The module sklearn.ensemble includes the popular boosting algorithm AdaBoost, introduced in 1995 by Freund and Schapire [FS1995]. The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. The data modifications at each so-called boosting iteration consist of applying weights \(w_1\), \(w_2\), …, \(w_N\) to each of the training samples.
Initially, those weights are all set to \(w_i = 1/N\), so that the first step simply trains a weak learner on the original data. For each successive iteration, the sample weights are individually modified and the learning algorithm is reapplied to the reweighted data. At a given step, those training examples that were incorrectly predicted by the boosted model induced at the previous step have their weights increased, whereas the weights are decreased for those that were predicted correctly. As iterations proceed, examples that are difficult to predict receive ever-increasing influence. Each subsequent weak learner is thereby forced to concentrate on the examples that are missed by the previous ones in the sequence [HTF]. AdaBoost can be used both for classification and regression problems: 1.11.7.1. Usage# The following example shows how to fit an AdaBoost classifier with 100 weak learners: >>> from sklearn.model_selection import cross_val_score >>> from sklearn.datasets import load_iris >>> from sklearn.ensemble import AdaBoostClassifier >>> X, y = load_iris(return_X_y=True) >>> clf = AdaBoostClassifier(n_estimators=100) >>> scores = cross_val_score(clf, X, y, cv=5) >>> scores.mean() The number of weak learners is controlled by the parameter n_estimators. The learning_rate parameter controls the contribution of the weak learners in the final combination. By default, weak learners are decision stumps. Different weak learners can be specified through the estimator parameter. The main parameters to tune to obtain good results are n_estimators and the complexity of the base estimators (e.g., its depth max_depth or minimum required number of samples to consider a split min_samples_split). • Multi-class AdaBoosted Decision Trees shows the performance of AdaBoost on a multi-class problem. • Two-class AdaBoost shows the decision boundary and decision function values for a non-linearly separable two-class problem using AdaBoost-SAMME. 
• Decision Tree Regression with AdaBoost demonstrates regression with the AdaBoost.R2 algorithm.

[FS1995] Y. Freund and R. Schapire, “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting”, 1997.

J. Zhu, H. Zou, S. Rosset, T. Hastie. “Multi-class AdaBoost”, 2009.

H. Drucker. “Improving Regressors using Boosting Techniques”, 1997.

[HTF] (1,2,3) T. Hastie, R. Tibshirani and J. Friedman, “Elements of Statistical Learning Ed. 2”, Springer, 2009.
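The sample-reweighting scheme described in the AdaBoost section above can be sketched in a few lines of Python. This is a hedged illustration of one discrete AdaBoost-style boosting step for labels in {-1, +1}; it is not scikit-learn's actual implementation (which uses the SAMME formulation), and `boost_step` is my own name:

```python
import numpy as np

def boost_step(w, y_true, y_pred):
    # weighted error of the current weak learner
    miss = (y_true != y_pred)
    err = np.sum(w * miss) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / err)   # weight given to this learner
    # increase weights of misclassified samples, decrease the rest
    w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))
    return w / w.sum(), alpha

y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])            # one mistake, at index 1
w0 = np.full(4, 0.25)                       # w_i = 1/N initially
w1, alpha = boost_step(w0, y, pred)
# the misclassified sample's weight grows relative to the others
assert w1[1] > w1[0]
```

Iterating this step is what forces each subsequent weak learner to focus on the examples the previous ones missed.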
{"url":"https://scikit-learn.org/dev/modules/ensemble.html?ref=blog.collegefootballdata.com","timestamp":"2024-11-11T05:10:18Z","content_type":"text/html","content_length":"236864","record_id":"<urn:uuid:1ee58cd0-5a7c-44a4-8f9d-ad3466f945c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00565.warc.gz"}
Is it possible to pay for help with implementing Neural Networks for predicting traffic accidents in transportation? | Do My Programming Homework

Is it possible to pay for help with implementing Neural Networks for predicting traffic accidents in transportation?

The information presented in this proposal is intended to help improve the capabilities and flexibility of ICT systems, both in terms of data collection and integration. The proposed ICT system will need a set of ICT systems (n) that can be managed and used by a computing platform (n) that can be remotely monitored and analyzed. The scope of this proposal is to propose the concept of a basic ICT system that can be easily managed and deployed in conjunction with other modern ICT systems (e.g., a financial system), as well as the design of novel networked data and data visualization modules for further integration of ICT systems. The goal of this approach is to help integrate models and concepts from several current data-driven ICT schemes and to provide a framework for generalizing model-based ICTs to new data sources and computational challenges. These new data and computational challenges will be included as parts of this proposal. We agree that data analytics can be used to improve ICT capability and efficiency, especially in non-linear data analytics. In this experiment, we will provide additional details of the proposed ICT architecture. In addition, we will discuss development and testing of general tools that can be easily developed for designing data analytics systems. We are sure the study methods and system design methods required to implement this type of study will be widely adopted by the ICT community. [Appendices 6 and 7]

Is it possible to pay for help with implementing Neural Networks for predicting traffic accidents in transportation?

Getting the right guidance from a trusted traffic engineer will help you make your decision.
Neural Network machines are very robust because they can predict traffic speed with low accuracy based on probability structure. This is one of the important features of Machine Learning machines. In this paper, we will focus on neural network systems since they are a different kind of system from classical ones. We will first provide a picture of the hardware to be carried out of neural Network machines but only once we reach the ground truth, so that most of the research is not affected. First we want to introduce the concept called the $p$-Algorithm. The algorithm is known as a deep reinforcement learning (DRL) algorithm. The idea of deep reinforcement learning is to minimize the variation between $p$ and $p’$ as an operation of the network. It finds new members of a certain probability density function based on three-dimensional structure. It eventually learns a deterministic function that we will use for the prediction. According to the neural network, we assume the intensity of detection inside the network is from the average intensity of a neuron connected to the target neuron according to the probability density for the target neuron: $$\sum_{t\in\mathcal{N}}\lambda_{tt}\times\mu_{tt}^{p,\beta;r}(\lambda;\mu_{t}^{p’,\beta;r})={\sum_{t\in\mathcal{N}}}\lambda_{tt}P_{t}$$ We further ignore the noise intensity between the units in the network, as in previous works that deal with linear models. Assuming that we are able to extract a feature by selecting the most weighted feature, we will find a representative feature of a target neuron, so that a CNN with different weighting may generate a predicted feature density function. The model we will work on is a

Is it possible to pay for help with implementing Neural Network models for predicting traffic accidents in transportation?
In this discussion, we break down the following processes that work well for different scenarios:

1. Model-to-input mapping. A model for predicting a discrete set of driver inputs to a supervised perceptron will return an item of the mapping represented by the hidden state of that set. These models will also return the corresponding item of the perceptron. Thus, in this case, for the case of predicting each person’s own output values for a time interval, we can propose the Neural Network, based on a simple representation, with the input to the perceptron generated by the learner.

#Model-To-Input
# -------------
i__1 = I(…, 0)
i__2 = i__1 + 1**2 + i__2
i__3 = I(…, 0)
i__4 = i__1 + 1**2 + i__4
i__5 = I(…, 0)
#model-to-initialization
i__1 = I(…, 0) + 1**2  #= 1 term time
# -------------

2. Parameter prediction for prediction of activity or traffic.
3. Estimator to estimate performance.
4. Stochastic variant for estimation.
5. Estimation of parameter value or probability.

#Model-to-input, i.e., training module
im = g1_generate(…, im)
#im = g1_generate(…, im) + … + im
outputs = im.eval()
{"url":"https://domyprogramminghomework.net/is-it-possible-to-pay-for-help-with-implementing-neural-networks-for-predicting-traffic-accidents-in-transportation","timestamp":"2024-11-10T21:39:23Z","content_type":"text/html","content_length":"119764","record_id":"<urn:uuid:c48fb682-e1c5-4a46-bd00-fa986e371aa6>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00672.warc.gz"}
Simone Brivio

Aug 27, 2024

Abstract: In this work, we present the novel mathematical framework of latent dynamics models (LDMs) for reduced order modeling of parameterized nonlinear time-dependent PDEs. Our framework casts this latter task as a nonlinear dimensionality reduction problem, while constraining the latent state to evolve according to an (unknown) dynamical system. A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution. We analyze the impact of using an explicit Runge-Kutta scheme in the time-discrete setting, resulting in the $\Delta\text{LDM}$ formulation, and further explore the learnable setting, $\Delta\text{LDM}_\theta$, where deep neural networks approximate the discrete LDM components, while providing a bounded approximation error with respect to the FOM. Moreover, we extend the concept of parameterized Neural ODE - recently proposed as a possible way to build data-driven dynamical systems with varying input parameters - to be a convolutional architecture, where the input parameters information is injected by means of an affine modulation mechanism, while designing a convolutional autoencoder neural network able to retain spatial coherence, thus enhancing interpretability at the latent level. Numerical experiments, including the Burgers' and the advection-reaction-diffusion equations, demonstrate the framework's ability to obtain, in a multi-query context, a time-continuous approximation of the FOM solution, thus being able to query the LDM approximation at any given time instance while retaining a prescribed level of accuracy. Our findings highlight the remarkable potential of the proposed LDMs, representing a mathematically rigorous framework to enhance the accuracy and approximation capabilities of reduced order modeling for time-dependent parameterized PDEs.

* 43 pages
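The encode/evolve/decode idea behind reduced order models of this kind can be caricatured in a few lines. The following is purely an illustrative sketch with a random linear "encoder", its pseudo-inverse as "decoder", and a forward-Euler latent step; it is not the authors' architecture, and all names in it are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(2, 8)) / np.sqrt(8)    # toy linear encoder: R^8 -> R^2
D = np.linalg.pinv(E)                       # toy decoder: pseudo-inverse of E
A = np.array([[-0.1, 1.0], [-1.0, -0.1]])   # latent dynamics dz/dt = A z

def rollout(u0, dt=0.01, steps=100):
    z = E @ u0                              # compress the full state
    for _ in range(steps):
        z = z + dt * (A @ z)                # explicit (Euler) step in latent space
    return D @ z                            # decode back to the full space

u0 = rng.normal(size=8)
u_T = rollout(u0)
assert u_T.shape == (8,)                    # evolved state lives in the full space
```

In the paper, the linear maps are replaced by a convolutional autoencoder and the latent step by a learned, parameter-conditioned dynamical system; the division of labor is the same.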
{"url":"https://www.catalyzex.com/author/Simone%20Brivio","timestamp":"2024-11-13T10:37:35Z","content_type":"text/html","content_length":"84625","record_id":"<urn:uuid:8cd98cdd-e907-4288-ab06-58218494d500>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00373.warc.gz"}
One, Two, Many, Lots: Investigating the Start of Many-Body Physics Two papers with a similar theme crossed my social media feeds in the last couple of days. You might think this is just a weird coincidence, but I'm choosing to take it as a sign to write about them for the blog. So, what are these papers, and what's the theme? One is the final publication of some results I saw at DAMOP and alluded to back in June, and the other is from this post by Doug Natelson. Both look at the transition from few-body to many-body physics. And this is interesting, why? I mean, isn't it obvious that you just add some more bodies? OK, I guess that does need a little more explaining. The papers in question get at one of the fundamental difficulties of dealing with real systems in physics, namely that there are generally only two problems we know how to solve exactly: two particles, and an infinite number of particles. Ummm... I can see how two particles would be easy, but "an infinite number" sounds impossible, dude. It might seem that way, but one of the great things about calculus is that it provides a lot of powerful mathematical tools for dealing with quantities that might seem to be infinite. If you let the number of particles in a system become infinite, lots of parameters that start out discrete and discontinuous become smooth continuous functions, and you can use calculus to get nice answers in the form of equations you can actually write down and solve. But you never really have an infinite number of particles, do you? Right, that's the problem. The game is to assert that you're in the "thermodynamic limit," where you have so many particles that it might as well be an infinite number. And for most measurable quantities of stuff, that's a pretty good approximation-- you're talking about 100,000,000,000,000,000,000 particles on up-- and the equations you get by assuming an infinite number work very well. It's interesting to ask, though, where that kicks in. 
That is, how many particles do you need to have before you can start using the infinite-limit answers to describe the properties. This is a hard problem to address, because the really nice mathematical tools don't work for intermediate numbers, and then you need to do things like simulating huge numbers of particles on a supercomputer to try to figure out the answer. These papers report a couple of new studies that take on this regime, one experimental and the other theoretical. How do you do this experimentally? Isn't that hard to do? Well, that's why it's published in Science (arxiv version). This is a relatively straightforward extension of the paper we talked about back in June, where they showed an ability to prepare a specific number of atoms on demand in a microscopic trap. They use that technique here to prepare samples of zero to five atoms in the same state, and stick in one "impurity" atom in a different internal state. Why do they do that? Having the "impurity" gives them a single atom they can probe to see the effect of all the others. This is a great system for them to study, because the energy of that one "impurity" atom will depend on its interaction with the others, in a way that's generally very complicated. They can clearly resolve and measure the shift, though, and see it increasing as they increase the number of atoms, as shown in the "featured image" up top, which I'll reproduce here: And how does this show where many-body physics kicks in? Well, there are two cases you can solve exactly: one is the case of a single "majority" atom interacting with the single "impurity" atom, in which case you just have two atoms occasionally knocking into each other, and can solve it exactly. The other case is the limit of an infinite number of "majority" atoms, in which case they provide a continuous background field modifying the energy levels of the impurity. So, they just measure the shift for different majority numbers and compare to those two limits? 
Exactly. When they do, they get a graph that looks like this: This shows the difference between the energy predicted for a single majority atom and the energy measured in the experiment, or predicted by other models. The orange line represents the infinite-number limit, and the green is a supercomputer simulation for the two-majority-atom case. And the subtle gradations in color of the data points? Match the colors in the first figure, to identify different numbers of majority atoms. Or, if you're not good at matching colors, you can just count-- there are three sets of data points under slightly different conditions, and in each series the points move up and to the left as you increase the number of majority atoms. So... It looks like they hit the orange line at about four majority atoms? Exactly. The conclusion of this experiment is that, for the purposes of the particular shift they're measuring, four is effectively the same as an infinite number. That's smaller than I would've expected. Again, that's why it's in Science. So, what's the other paper? The other paper, pointed out by Doug, is a theory paper looking at thermodynamic properties of an ideal Bose gas (with a bit more detail in this New Journal of Physics article). What they're studying isn't an energy shift, but a phase transition. Is this one of those cool Star Trek things where you can walk through walls and stuff? Ah, no. That's comic-book science. This is a phase transition in the sense of moving from gas to liquid or liquid to solid. That's not nearly as much fun. No, but it's real. In this case, they looked at the behavior of a simulated "box" full of bosons, one of the classic toy models of thermodynamics. They calculated the size of the box for a bunch of different temperatures at a given pressure, using different numbers of particles. Above a particular number, they find that there's a discontinuity in the volume at a particular temperature. And this is interesting... why? 
Well, because that's what happens when something boils. As you increase the temperature of a liquid, the volume doesn't change very much until the boiling point, then bam, it shoots way up as the liquid changes to a gas. When they do their simulation for small numbers of particles, they don't see that sharp change, just a slow increase up to the "gas" volume. Above a particular number, though, a discontinuity appears, which they interpret as the onset of the collective phase transition. The number isn't 4, is it? Um, no. It's 7616. That's kind of random, isn't it? Well, so is 4, in the grand scheme of things... Before you ask, though, these aren't inconsistent results, just different systems. In one case, you're experimentally looking at a one-dimensional trap of fermions, measuring an energy shift due to collisional interactions between particles, while in the other you're looking theoretically at a three-dimensional gas of bosons, measuring thermodynamic properties. They're not really all that similar. So why are you lumping them together in a single post, then? Well, because they both happened to pass my awareness threshold at the same time, more or less. And also because they're addressing the same thematic question, namely "At what point can you start thinking about moderate numbers of particles in the same terms you would use for macroscopic samples?" That has a little bit of a dorm-room stoner sort of sound to it, you know. Yeah, it does, and that's what's cool about these. Fifteen years ago, this sort of question would've been impossible to discuss outside of physicist bull sessions, tilting toward philosophy, possibly with the aid of mind-altering substances of some sort. Now, though, thanks to increases in computing power and developments in cold atom technologies, we can start to not only frame these questions in a sensible way, but get actual answers to them. And that's pretty awesome stuff. A. N. Wenz, G. Zürn, S. Murmann, I. Brouzos, T. Lompe, & S.
Jochim (2013). From Few to Many: Observing the Formation of a Fermi Sea One Atom at a Time. Science 342, 6157, pp. 457-460 (2013). arXiv: 1307.3443v2

Jeong-Hyuck Park (2013). How many is different? Answer from ideal Bose gas. IC-MSQUARE conference proceedings. arXiv: 1310.5580v1

I would expect, from other critical phenomena, that there would be dimensional scaling. Have the theorists done their same study in other dimensions?

Have the theorists done their same study in other dimensions?

I don't know. I don't think so, based on a quick scan of their introduction, but I might've missed something.

Critical exponents are discussed in the New Journal of Physics article. What I don't exactly understand yet is how they achieve a discontinuity in what they claim is a finite system.
Generally, you expect any discontinuity associated with a phase transition (as strictly defined in the thermodynamic limit) to be smoothed out in a finite system. The larger the number of particles, for example, the sharper the heat capacity, etc., look, but they are strictly speaking not divergent/discontinuous until "N = \infty" (as the partition function is a sum of analytic functions, it can only develop a non-analyticity if you sum an infinite number of terms). But these authors claim they are getting a genuine non-analyticity in a finite-particle system. I guess I'll have to read the paper carefully to figure out how that's supposed to work out.
{"url":"https://scienceblogs.com/principles/2013/10/29/one-two-many-lots-investigating-the-start-of-many-body-physics","timestamp":"2024-11-07T18:30:57Z","content_type":"text/html","content_length":"53764","record_id":"<urn:uuid:1ce500a0-8b28-4536-b2a0-d084e129d7b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00215.warc.gz"}
python – Format numbers in django templates

Sure! In Django templates, you can format numbers using the built-in `floatformat` template filter. This filter allows you to control the decimal places and rounding of a number in your template. Here are some examples of how you can format numbers in Django templates:

1. Format a number with default settings:

{{ my_number|floatformat }}

With no argument, `floatformat` rounds to one decimal place and drops the decimal part entirely when it is zero: `34.23234` becomes `34.2`, while `34.00000` becomes `34`.

2. Specify the number of decimal places to display:

{{ my_number|floatformat:3 }}

This will display `my_number` with exactly 3 decimal places, so `34.26000` becomes `34.260` and `34.00000` becomes `34.000`.

3. Use a negative argument to hide a zero decimal part:

{{ my_number|floatformat:"-3" }}

This behaves like `floatformat:3`, except that the decimal places are shown only when the number actually has a decimal part: `34.26000` becomes `34.260`, but `34.00000` becomes `34`. (The sign does not force rounding up or down.)

4. Remove decimal places:

{{ my_number|floatformat:0 }}

This will display `my_number` rounded to the nearest integer, with no decimal places.

Note that Django has no built-in `neg` or `mul` filters, and `add` performs addition rather than prepending a sign. To negate a number, scale it, or render it as a percentage, do the arithmetic in the view (or with the `widthratio` template tag) and then format the result with `floatformat`.

Now let’s go through the code step by step for the first example:

1. In your Django template, you have a variable called `my_number`.
2. To format this number, you use the `floatformat` template filter, which is applied to `my_number` using the `|` pipe character.
3. With no argument, `floatformat` rounds the number to one decimal place, dropping the decimal part when it is zero. The formatted number is output in the HTML.
Using this approach, you can easily format numbers in Django templates by using the `floatformat` filter and specifying the desired format options.
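If Django isn't at hand, the behavior of `floatformat`'s positive and negative arguments can be approximated in plain Python. This is a simplified sketch of the documented semantics, not Django's actual implementation (which uses Decimal arithmetic), and `floatformat_sketch` is my own name:

```python
def floatformat_sketch(value, arg=-1):
    """Approximate Django's floatformat: positive arg always shows that many
    decimals; negative arg shows them only when the decimal part is non-zero."""
    places = abs(int(arg))
    if int(arg) < 0 and float(value) == int(float(value)):
        return str(int(float(value)))        # whole number: drop the decimals
    return f"{float(value):.{places}f}"      # otherwise round to `places`

assert floatformat_sketch(34.23234) == "34.2"     # default arg is -1
assert floatformat_sketch(34.0) == "34"
assert floatformat_sketch(34.26, 3) == "34.260"
assert floatformat_sketch(34.0, 3) == "34.000"
assert floatformat_sketch(34.0, -3) == "34"
```

The sketch makes the positive/negative distinction concrete: only the negative form suppresses a zero decimal part.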
{"url":"https://pythonkb.com/python-format-numbers-in-django-templates/","timestamp":"2024-11-04T05:51:58Z","content_type":"text/html","content_length":"71507","record_id":"<urn:uuid:6afe69b9-521d-4bb5-a8af-376dbf22a04b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00405.warc.gz"}
Show Your Work!

Whether you’re taking the SAT, the ACT, or Algebra II, it’s a good idea to always show your work. Writing things down helps you think! Some students seem to think the proper way to do a math problem is to devise a multi-step plan and then follow it through step by step. If they can’t think of Step 3, they won’t even bother writing down Step 1. People who are good at math don’t do it that way. They write down Step 1 and then start thinking about Step 2. Sometimes I don’t even know what Step 1 is, but I write down something. I’ll even copy down equations given in the problem. It might seem like a waste of time, but it gets the wheels turning. When you think about it, algebra is pretty simple. There are two basic rules:

• Combine like terms.
• If you do something to one side of an equation, you have to do it to the other side.

That’s the gist of it. So if you don’t know where to begin on an algebra problem, start by combining like terms. You’ll probably get somewhere. Geometry is pretty simple too—at least on the SAT and ACT. The basic rules are:

• A line has 180°.
• A triangle has 180°.
• Vertical angles are equal.

Most angle problems on the SAT and ACT can be solved with these three rules. So if you’re stuck on a geometry problem, start labeling angles. Another reason to show your work is that doing so will help you identify your mistakes. Think about it: if you miss a math problem because you forgot to distribute a minus sign, but you didn’t bother to write anything down, you’ll never know what went wrong. The best way to study for any standardized test is to practice frequently and review any problems you miss. You can’t learn from your mistakes if you don’t know what they are, so it’s important to have a record of your thought process. Keep your work neat and organized. Try to make it look like the demonstration problems in an algebra textbook. Anyone else ought to be able to look at your work and know what you were thinking.
Showing each step may seem tedious, but it’s the fastest way to become better at math. After a while, you’ll see results. Your skills and confidence will improve. Then you can start doing some of the steps in your head, but in the beginning, it’s important to write down everything!
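As a concrete illustration (my example, not from the original article), the two algebra rules alone are enough to solve a typical equation, one written-down step at a time:

```
3x + 5 = 2x + 9
3x - 2x = 9 - 5     (do the same thing to both sides)
x = 4               (combine like terms)
```

Checking by substitution: 3(4) + 5 = 17 and 2(4) + 9 = 17, so the work holds up.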
{"url":"https://www.morethanateacher.com/show-your-work/","timestamp":"2024-11-04T15:00:58Z","content_type":"text/html","content_length":"106379","record_id":"<urn:uuid:6ab4b7a6-3a5d-4acf-83f7-76ef1011b83f>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00510.warc.gz"}
Shares | AI Math Solver 📢

Shares

Find the smallest possible values of A and B for the number 645A2879B8 to be divisible by both 8 and 9.

November 11, 2024

To find the smallest possible values for A and B in a 10-digit number divisible by both 8 and 9, the last three digits (9B8) must form a number divisible by 8, and the sum of all digits must be divisible by 9. This leads to B=2 and A=3 as the smallest possible values.

Find the value of 2^(m+n)

November 9, 2024

To find 2^(m+n), solve for m and n in the equations 2^m = 1 and 2^n = 16, then substitute the values into the expression.

Evaluating Zero to the Power of Zero

November 7, 2024

The expression 0^0 is considered an indeterminate form in calculus because different approaches to evaluating it yield different potential values. While some approaches suggest a value of 1 and others suggest 0, the limit does not have a single, well-defined value. In some contexts, like combinatorics, it is defined as 1 for convenience.
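The first answer (A = 3, B = 2) is easy to verify by brute force; this short check is my addition, not part of the original post:

```python
# Search A ascending, then B, so the first hit gives the smallest values.
def smallest_ab():
    for a in range(10):
        for b in range(10):
            n = int(f"645{a}2879{b}8")
            if n % 8 == 0 and n % 9 == 0:   # divisible by both 8 and 9
                return a, b
    return None

assert smallest_ab() == (3, 2)   # 6453287928 = 8 * 9 * 89629277 / ... checks out
```

The digit sum of 6453287928 is 54 (divisible by 9) and its last three digits, 928, equal 8 × 116, confirming the reasoning in the post.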
{"url":"https://www.aimathsolve.com/shares","timestamp":"2024-11-12T21:40:21Z","content_type":"text/html","content_length":"43747","record_id":"<urn:uuid:417e5bfa-78f5-46fd-8223-f2223c450db1>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00524.warc.gz"}
Additional Exercises to 2nd Edition of Axler

The notation used will not be exactly the same as Axler. For instance, we will denote the set $\{a_1, \ldots, a_n\}$ as $\{a_i\}$ out of laziness. Axler does many things right. It jumps straight into the main protagonists of linear algebra — vector spaces — without boring a reader with the details of a field or a vector space. This is all sensible for a first read, but it is useful to eventually learn what a field and vector space actually are.

Field axioms

Definition. A field is a set of elements $F$, including two special distinct elements $0$ and $1$, with two binary operators $+$ and $\cdot$ such that

□ Commutativity:
☆ $a + b = b + a$
☆ $a\cdot b = b\cdot a$
□ Associativity:
☆ $(a + b) + c = a + (b + c)$
☆ $(a\cdot b) \cdot c = a \cdot (b \cdot c)$
□ Distributivity: $a\cdot (b + c) = a \cdot b + a \cdot c$
□ Identity:
☆ $a + 0 = a$
☆ $a \cdot 1 = a$
□ Inverse:
☆ For all $a$, there exists an element $b$ such that $a + b = 0$.
☆ For all $a \neq 0$, there exists an element $b$ such that $a \cdot b = 1$.

for all elements $a$, $b$, $c \in F$. Even though the field is really the triple $(F, +, \cdot)$, we will simply refer to the field as $F$ for brevity. Typically we omit $\cdot$ when it is clear, e.g. we will write $a\cdot b$ as $ab$. Examples of fields include $\mathbb{Q}$ and $\mathbb{R}$, as well as the integers modulo $p$, i.e. $\mathbb{F}_p$. Note that $\mathbb{Z}$ is not a field. A couple of results you should try to prove:

• Prove that if $a + b = a + c$, then $b = c$.
□ Note from above that the additive inverse is unique, i.e. if $a + b = a + c = 0$, then $b = c$.
• Prove that if $ab = ac$, then either $a = 0$ or $b = c$.
□ Note from above that the multiplicative inverse of any non-zero element is unique, i.e. if $ab = ac = 1$ and $a \neq 0$, then $b = c$.
• Prove that $0a = 0$.

Vector space axioms

Definition.
A vector space consists of a set $V$ of vectors — including a special vector $0$ — over a field $F$ of scalars, and two binary operators $+ \colon V \times V \to V$ and $\cdot \colon F\times V \to V$ which satisfy

□ Commutativity: $u + v = v + u$
□ Associativity:
☆ $(u + v) + w = u + (v + w)$
☆ $(a \cdot b) \cdot v = a \cdot (b \cdot v)$
□ Distributivity:
☆ $(a + b) \cdot v = a \cdot v + b \cdot v$
☆ $a \cdot (u + v) = a \cdot u + a \cdot v$
□ Identity:
☆ $v + 0 = v$
☆ $1 \cdot v = v$
□ Inverse: For all $v$, there exists an element $w$ such that $v + w = 0$.

for all $u$, $v$, $w \in V$ and $a$, $b \in F$. Note that $+$ can either represent the binary operator $+ \colon F \times F \to F$ or $+ \colon V \times V \to V$. Similarly, $\cdot$ can either represent the binary operator $\cdot \colon F \times F \to F$ or $\cdot \colon F \times V \to V$. In the interest of conciseness, we will not explicitly differentiate the two. You will have to infer. Furthermore, we denote the vector space as $V$ for brevity, even though it really also involves a field and two binary operators. A result you should try to prove:

• Prove that $0\cdot v = 0$.

Existence of direct complement for infinite vector spaces

Given a finite-dimensional vector space $V$ and a subspace $U$ of $V$, there exists some subspace $W$ of $V$ such that $U \oplus W = V$. (We can prove this by explicitly extending a basis of $U$.) However, what if $V$ is infinite-dimensional? Then our explicit extension of a basis doesn’t work, because infinite-dimensional vector spaces cannot have a finite basis. In fact, the existence of this direct complement is guaranteed not through a proof but through the Axiom of Choice. (This statement being true, in other words, is equivalent to AoC.)

Chapter 2

• Suppose that $\{v_1, \ldots, v_n\}$ forms a basis of a vector space $V$. Prove that
□ replacing any element $v_i$ with $kv_i$, where $k$ is a non-zero scalar, or
□ replacing any element $v_i$ with $v_i+v_j$, where $j \neq i$,
creates another basis of $V$.
• Prove that every vector in a vector space has a unique representation in any basis.
More concretely, given $\vec{v}$ in a vector space $V$ and a basis $\{\vec{b_i}\}$ of $V$, show that there is only one choice of scalars $\{k_i\}$ such that $\vec{v}=\sum k_i\vec{b_i}$.
• Prove that if $U$ is a subspace of $V$ and $\dim U = \dim V$, then $U = V$.

Chapter 3

• Suppose $T$ is invertible. Show that if $\{v_i\}$ is a basis of $V$, then so is $\{Tv_i\}$.
• Prove that for any invertible linear map $T$, $kT$ is also invertible for all non-zero scalars $k$.
• Prove that for any non-invertible linear map $T$, $kT$ is also non-invertible for all scalars $k$.
• Suppose $T \colon V\to V$ is a linear map. Show that there exists some map $S\colon V \to V$ such that $TST = T$.

Matrix exercises (feel free to skip)

• Show that the matrix of $P^{-1}TP$ with respect to a basis $B$ is the matrix of $T$ with respect to the basis $P$. (Here “the basis $P$” means the basis of vectors $\{Pb_1, \ldots, Pb_n\}$, where $\{b_1, \ldots, b_n\}$ is the basis $B$.)
• Show that the product of two square upper-triangular matrices $A$ and $B$ is an upper-triangular square matrix $C$. Also, show that the $i$th entry on the diagonal of $C$ is the product of the $i$th entries on the diagonals of $A$ and $B$.

Chapter 5

• Suppose that $\{U_i\}$ is a collection of invariant subspaces of $V$ under a linear map $T$. Show that $\sum U_i$ is also invariant under $T$.
• (CMU 21341 Final, Spring 2011) Let $V$ be a finite-dimensional vector space over $\mathbb{C}$. Suppose $S, T\in \mathcal{L}(V, V)$ are such that $ST=TS$. Let $\lambda \in \mathbb{C}$ be an eigenvalue of $S$. Show that there exists $\mu \in \mathbb{C}$ and $v\in V$ with $v \neq 0$, such that both $Sv=\lambda v$ and $Tv = \mu v$.

Chapter 6

• Prove the Extended Triangle Inequality: $||\sum v_i|| \leq \sum ||v_i||$, with equality if and only if all $v_i$ are multiples of each other.
• Prove that for any inner product space $V$ over the real or complex numbers, $||v_1+\cdots+v_n||^2 \leq n(||v_1||^2+\cdots+||v_n||^2)$ (where the $v_i$ are elements of $V$).

Chapter 7

Theorem 7.25 states that any normal operator can be expressed as a block diagonal matrix with blocks of size $1$ or $2$ (and the size $2$ blocks are scalar multiples of the rotation matrix). This statement isn’t terrible (knowing the explicit representation of a linear map is useful, I guess), but there’s a much more natural way to state it. Return to the Spectral Theorem: (normal/self-adjoint) operators in ($\mathbb{C}$/$\mathbb{R}$) have an orthonormal basis of eigenvectors. A (somewhat contrived) reformulation of the Spectral Theorem is that $T$ can be decomposed into invariant orthogonal subspaces of $\text{dim } 1$. And obviously every operator on a subspace of $\text{dim } 1$ is self-adjoint (and thus normal), so we can say $T$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ iff it is (normal/self-adjoint) in ($\mathbb{C}$/$\mathbb{R}$). So the equivalent reformulation of 7.25 would be: $T$ can be decomposed into normal invariant orthogonal subspaces of dimension $1$ or $2$ iff it is normal in $\mathbb{R}$.

• (Corollary to 7.6) Show that $\text{null } T = \text{null } T^\star$ if $T$ is normal. Also, show that the converse does not hold.
• Show that $\text{null } T = \text{null } \sqrt{T^\star T}$.
• (Generalization of uniqueness of polar decomposition) If $R_1$ and $R_2$ are positive operators such that $||R_1v|| = ||R_2v||$ for all $v$, show that $R_1=R_2$.

Chapter 8

Let’s restate 8.5 and 8.9 in their full forms, which Axler alludes to later in the chapter.
• (Higher-powered 8.5) There exists some non-negative integer $m$ such that $\text{null } T^k \subsetneq \text{null } T^{k+1}$ for $k < m$ and $\text{null } T^k = \text{null } T^{k+1}$ for $k \geq m$.
• (Higher-powered 8.9) There exists some non-negative integer $m$ such that $\text{range } T^k \supsetneq \text{range } T^{k+1}$ for $k < m$ and $\text{range } T^k = \text{range } T^{k+1}$ for $k \geq m$.
• Consider a vector space $V$ and a linear map $T\colon V\to V$. Given two invariant subspaces $U_1$, $U_2$ whose intersection consists only of the $0$ vector, show that $\text{char}(U_1 + U_2) = \text{char}\, U_1 \cdot \text{char}\, U_2$ and $\text{minpoly}(U_1 + U_2) = \text{lcm}(\text{minpoly}\, U_1, \text{minpoly}\, U_2)$.
• Suppose $X$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ with multiplicities $m_1, \ldots, m_n$. Then show $X^k$ has eigenvalues $\lambda_1^k, \ldots, \lambda_n^k$ with multiplicities $m_1, \ldots, m_n$ for all $k \geq 1$.
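A LaTeX sketch of the key step behind the higher-powered 8.5 above: once consecutive null spaces coincide, they coincide forever.

```latex
% Claim: if null T^m = null T^{m+1}, then null T^k = null T^{k+1} for all k >= m.
% Proof sketch. Fix k >= m and take v in null T^{k+1}. Then
\[
  T^{m+1}\!\left(T^{\,k-m} v\right) = T^{k+1} v = 0
  \quad\Longrightarrow\quad
  T^{\,k-m} v \in \operatorname{null} T^{m+1} = \operatorname{null} T^{m},
\]
% so T^k v = T^m (T^{k-m} v) = 0, i.e. v lies in null T^k.
% The reverse inclusion null T^k \subseteq null T^{k+1} always holds,
% since T^k v = 0 implies T^{k+1} v = T(T^k v) = 0.
```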
Acceleration of free falling object on planet X
• Thread starter 1irishman • Start date

In summary: on planet X, an object is thrown vertically upward with an initial velocity of 5 m/s and returns to the point of release after 3 s. A first attempt gave -1.6 m/s^2, but the book's answer of -3.3 m/s^2 is derived using the time taken for the object to reach its maximum height, which is 1.5 s.

Homework Statement
While on planet X, an object is thrown vertically upward with an initial velocity of 5 m/s. If this object returns to the point of release in 3 s, what is the acceleration of a free falling object on this planet?

Homework Equations
The Attempt at a Solution
= -1.6 m/s^2
The answer is -3.3 m/s^2 in the book... I don't understand how they arrived at that?

Hi 1irishman, welcome to PF. vf is zero when the object reaches the maximum height. What is the time taken by the object to reach that point?

Thank you rl... I see now.

FAQ: Acceleration of free falling object on planet X

1. What is the acceleration of a free falling object on planet X?
The acceleration of a free falling object on planet X would depend on the gravitational force of the planet. It can be calculated using the formula a = GM/r^2, where G is the gravitational constant, M is the mass of the planet, and r is the distance between the object and the center of the planet.

2. How does the acceleration of a free falling object on planet X compare to that on Earth?
The acceleration of a free falling object on planet X would vary depending on the mass and size of the planet. If the planet is larger and more massive than Earth, the acceleration would be higher; if the planet is smaller and less massive, the acceleration would be lower.

3. Can the acceleration of a free falling object on planet X change?
Yes, the acceleration of a free falling object on planet X can change depending on the altitude of the object.
As the object gets closer to or farther from the planet's surface, the acceleration also changes.

4. Does air resistance affect the acceleration of a free falling object on planet X?
Air resistance can affect the acceleration of a free falling object on planet X, but it depends on the density of the planet's atmosphere. If the planet has a denser atmosphere, air resistance has a greater impact on the acceleration of the object.

5. How is the acceleration of a free falling object on planet X measured?
The acceleration of a free falling object on planet X can be measured using a device called an accelerometer. This device measures the change in velocity of the object over time, which can then be used to calculate the acceleration.
An assessment of dynamic pressure in buried tanks subjected to seismic loadings

The seismic performance of storage tanks is a matter of special importance, extending beyond the economic value of the tanks and their contents. The seismic behavior of tanks buried in soil has therefore attracted the attention of many researchers. Because of the interaction between soil and structure during an earthquake, dynamic analysis of such structures is very important and must be considered. In this paper, accounting for the interaction effect of soil and structure under time-history records, the seismic behavior of buried concrete tanks is analyzed using the finite element software ABAQUS. Both circular and cubic tanks are studied, and the impact of parameters such as the earthquake record and the quantity of water in the tank on the pressure on the tank wall is examined.

1. Introduction

Earthquakes repeatedly bring huge losses to human society and are among the most serious natural disasters facing humanity. Earthquakes occur worldwide about 500 million times every year, of which about 100 to 200 are of magnitude 6 and above and about 18 are of magnitude 7 and above. Earthquake loads have usually been considered in the design of buried tanks, but the analyses have been based on simplified theory developed for retaining walls with the wall top at the ground surface and the foundation assumed to be rigid. When the structure is constructed below the surface in deep soil layers, these simplified methods have obvious limitations, so the earthquake force is an important factor in the design of such structures. Over the past years this problem has led many researchers to analyze such structures under seismic forces. Jacobson [1] was among the first researchers to carry out studies on the dynamic behavior of water tanks, in 1949.
A decade later, Housner [2] studied a simple dynamical model for buried rectangular and cylindrical tanks. In the same years, Lysmer [3] proposed a theory of viscous absorbing boundaries around a bounded domain for modeling the semi-infinite environment when analyzing buried tanks. The main problem with absorbing boundaries, however, is that the analysis is limited to the frequency domain. Lysmer therefore continued his research on absorbing boundaries: in 1969 Lysmer and Kuhlemeyer [3] and in 1972 Lysmer and Wass [4] provided additional models based on the theory of absorbing boundaries. Research in this area did not stop there, and other researchers such as Kausel [5] presented work in this field. In 1992, Haroun [6] used and extended Housner's studies [2] for flexible reservoirs. He offered an empirical model for the reservoir according to soil-structure interaction effects and explained those effects with several experiments on sloshing in the water, collecting valuable information on soil-structure interaction from previous researchers and their experiments. Evaluation of soil-structure interaction is thus considered an important factor in the analysis of buried tanks. Appropriate methods of dynamic analysis of structures with regard to soil-structure interaction effects are based on the finite element method [7]. The Finite Element Method (FEM) is the most widely used numerical method in structural analysis and can include nonlinear effects of both material and geometry [8]. Most research on soil-structure interaction based on the finite element method analyzes two-dimensional tank models in various software packages, although in some cases three-dimensional models are also considered [9-10]. In another research work, Lin et al.
[11] rebuilt the computer model of condensate storage tanks in Taiwan using the SAP 2000 program in conjunction with the lumped-mass stick model, and evaluated the soil-structure interaction by employing the SASSI 2000 program. The differences between the results with soil-structure interaction and with the spring model were compared via natural frequency and response spectrum curves.

2. Seismic analysis of buried tanks

The seismic behavior of buried tanks depends on the surrounding environment. In general, a complete dynamic model must account for the surrounding structures, the wave propagation conditions in the tank environment, and the interface between the tank and its surroundings. In other words, buried tanks and the surrounding soil interact during an earthquake. This problem is very important for bulky and heavy structures such as tanks. One of the key factors in the seismic analysis of buried tanks is therefore soil-structure interaction.

3. Simplified fluid-structure interaction

The seismic analysis of liquid storage tanks is complicated due to fluid-structure interaction of the system; therefore, complex actions must be taken into account. First of all, the contained liquid interacts with the tank wall. Seismic energy is transferred from the ground to the fluid through the motion of the tank. A portion of the liquid accelerates with the tank, acting as an added mass; the remaining liquid is assumed to slosh. Sloshing occurs in the upper part of the liquid, which does not displace laterally with the tank wall, generating seismic waves. In an effort to simplify the analysis, Haroun and Housner [12] developed a three-degree-of-freedom model of a ground-supported cylindrical tank that takes tank wall flexibility into account. Following is a review of this model. The contained liquid is considered incompressible and inviscid, with irrotational flow.
During base excitation, the entire tank liquid mass ($m$) vibrates in three distinct patterns: the sloshing or convective mass (${m}_{c}$) (the top liquid mass, which changes the free liquid surface), the impulsive mass (${m}_{i}$) (the intermediate liquid mass vibrating along with the tank wall) and the rigid mass (${m}_{r}$) (the lower liquid mass, which moves rigidly with the tank wall). While there are several modes in which the sloshing and impulsive masses vibrate, the response can be predicted by considering only the first sloshing mode and the first impulsive mode. Therefore, the continuous liquid with a flexible tank can be modeled as three lumped masses (see Fig. 1) [12].

Fig. 1. Mechanical analog proposed by Haroun and Housner for a flexible cylindrical tank [12]

The convective and impulsive masses are connected to the tank by corresponding equivalent springs. The total liquid mass and the associated natural frequencies of the tank liquid are expressed as:

$m=\pi {R}^{2}H{\rho }_{w},$

${\omega }_{i}=\frac{P}{H}\sqrt{\frac{E}{{\rho }_{s}}},$

${\omega }_{c}=\sqrt{1.84\left(\frac{g}{R}\right)\text{tanh}\left(1.84S\right)},$

where $H$ is the liquid height, $R$ is the tank radius, $S=H/R$ is the aspect ratio (ratio of liquid height to tank radius); the non-dimensional parameters ${Y}_{c}$, ${Y}_{i}$ and ${Y}_{r}$ are the mass ratios associated with the convective, impulsive and rigid masses of the tank liquid, respectively; ${\rho }_{w}$ is the mass density of the liquid; ${\omega }_{c}$ and ${\omega }_{i}$ are the convective and impulsive frequencies, respectively; $E$ and ${\rho }_{s}$ are the modulus of elasticity and density of the tank wall, respectively; $g$ is the gravitational acceleration and $P$ is a non-dimensional parameter associated with the frequency of the impulsive mass.
The parameters ${Y}_{c}$, ${Y}_{i}$, ${Y}_{r}$ and $P$ are functions of the aspect ratio $S$ of the tank, given by the following relations [13]:

$\left\{\begin{array}{c}{Y}_{c}\\ {Y}_{i}\\ {Y}_{r}\\ P\end{array}\right\}=\left[\begin{array}{ccccc}1.01327& -0.8757& 0.35708& 0.06692& 0.00439\\ -0.15467& 1.21716& -0.62839& 0.14434& -0.0125\\ -0.01599& 0.86356& -0.30941& 0.04083& 0\\ 0.037085& 0.084302& -0.05088& 0.012523& -0.0012\end{array}\right]\left\{\begin{array}{c}1\\ S\\ {S}^{2}\\ {S}^{3}\\ {S}^{4}\end{array}\right\},$

$\left\{\begin{array}{c}{\mu }_{c}\\ {\mu }_{i}\\ {\mu }_{r}\end{array}\right\}=\left[\begin{array}{cccccc}0.52410& -0.10792& 0.33958& -0.19357& 0.04791& -0.0045\\ 0.44086& -0.11972& 0.16752& -0.06089& 0.00751& 0\\ 0.44233& 0.08445& 0.07916& -0.02677& 0.00326& 0\end{array}\right]\left\{\begin{array}{c}1\\ S\\ {S}^{2}\\ {S}^{3}\\ {S}^{4}\\ {S}^{5}\end{array}\right\}.$

The effective heights ${H}_{c}$, ${H}_{i}$ and ${H}_{r}$ are likewise expressed as fractions of the liquid height $H$. The equivalent stiffness and damping of the convective and impulsive masses are expressed as:

${K}_{c}={m}_{c}{\omega }_{c}^{2},$ ${K}_{i}={m}_{i}{\omega }_{i}^{2},$ ${C}_{c}=2{\xi }_{c}{m}_{c}{\omega }_{c},$ ${C}_{i}=2{\xi }_{i}{m}_{i}{\omega }_{i},$

where ${\xi }_{c}$ and ${\xi }_{i}$ are the damping ratios of the convective and impulsive masses, respectively [15].

4. Earthquake force and its components

In general, every structure buried in soil is affected by six components of ground motion during an earthquake: two lateral components, one vertical component and three torsional components. The horizontal components of ground motion exert hydrodynamic pressures on the tank wall.
Hydrodynamic pressures include convective (sloshing) and impulsive components. They may cause shear forces, bending moments, axial stresses and shear stresses in the tank wall. The seismic behavior of the structure is complicated, which calls for accurate modeling of the buried tank's seismic response. Buried-tank seismic analysis therefore needs a seismic risk assessment so that the performance of the structure during an earthquake can be correctly evaluated; such an assessment provides ground motion parameters, records and spectral requirements.

5. Modeling of the soil-structure interaction effect based on the finite element method

An important part of modeling in buried tank analysis is the interaction between soil and structure. In general, two contact-solution methods exist in Abaqus: the penalty function method and the kinematic contact method. In the penalty function method there is no limitation on the penetration of the two surfaces into each other, whereas in the kinematic contact method the penetration of the two surfaces is zero. In this research the kinematic contact method, with a tangential behavior parameter, is used to model sliding between the tank wall and the soil. This contact formulation in Abaqus can model friction between the tank and the soil as well as the no-tension assumption for the soil, and it is able to account for soil-structure interaction. Homogeneous solid elements are used to model the tank, and homogeneous solid elements are likewise used to model the soil and water with the corresponding material parameters. The solid element is a six-faced, eight-node element with three degrees of freedom at each node. This model includes nonlinear behavior and can consider a yield criterion for the soil based on the stress-strain yield surface. The Drucker-Prager model has been used for the soil behavior. The 3D finite element models in Abaqus are presented in the following figures.
The aim of this study is finite element modeling with estimation of damage to concrete tanks. In this research, the ABAQUS software is used for finite element modeling of the tank. The next step is to build the geometry of the structure, the tank and the foundation from elements matching the available specifications. Boundary conditions are introduced to consider the interaction of the soil and the tank, together with the applied loadings. To define the crack criterion in the concrete tank, the concrete damaged plasticity (CDP) model is used [14]. In this model, a strain criterion is determined for tensile and compressive stresses and a damage parameter is obtained for each strain. To model concrete, soil and water in ABAQUS, the parameters of Tables 1-3 must be defined. These parameters are taken in accordance with the software defaults. Fig. 3 demonstrates the model meshing.

Fig. 2. a) Circular tank 3D model and b) cubic tank 3D model
Fig. 3. Meshed model of the tank in ABAQUS software

Table 1. Parameters used in the concrete definition [16]
Dilation angle: 36°; eccentricity: 0.1; ${\sigma b}_{0}/{\sigma c}_{0}$: 0.16; $K$: 0.66; viscous parameter: 0; Poisson coefficient: 0.2; modulus of elasticity: 26.5 GPa; concrete density: 7850 kg/m^3

Compression behavior (yield stress (MPa) / inelastic strain / damage parameter):
20.2 / 7.47E-05 / 0
30 / 9.88E-05 / 0
40.3 / 0.000154123 / 0
50 / 0.000761538 / 0
40.2 / 0.002557559 / 0.195402
20.2 / 0.005675431 / 0.596382
5.3 / 0.011733119 / 0.894865

Tensile behavior (yield stress (MPa) / inelastic strain / damage parameter):
2.8 / 3.33E-05 / 0
1.9 / 0.000160427 / 0.406411
0.86 / 0.000279763 / 0.69638
0.23 / 0.000684593 / 0.920389
0.056 / 0.00108673 / 0.980093

Table 2. Mechanical properties of soil
Poisson's ratio: 0.3; elasticity: 73000 kPa; cohesion: 0 kN/m^2; friction angle: 38; density: 1900 kN/m^3

Table 3. Mechanical properties of water (EOS, ${U}_{s}-{U}_{p}$)
Viscosity: 0.001 Pa.s; ${c}_{0}$: 1450; $s$: 0; ${\Gamma }_{0}$: 0; density: 996 kg/m^3

6.
Loading

In this study, only the weight load and the forces due to the interaction of the fluid, the soil and the tank body are considered. Records from the Tabas (1978), Northridge (1994) and Helena (1935) earthquakes are used for the seismic excitation of this case-study tank. Fig. 4 to Fig. 6 show the parameters of these earthquakes in the $X$ direction.

Fig. 4. Horizontal accelerogram parameters of the Helena earthquake in the X direction
Fig. 5. Horizontal accelerogram parameters of the Northridge earthquake in the X direction
Fig. 6. Horizontal accelerogram parameters of the Tabas earthquake in the X direction

Table 4. Accelerogram characteristics used in the analysis
Northridge: California, USA, 1994; magnitude 6.7 (Richter); maximum acceleration 0.231 g
Helena: Montana, USA, 1935; magnitude 6.0; maximum acceleration 0.173 g
Tabas: Tabas, Iran, 1978; magnitude 7.35; maximum acceleration 0.852 g

In the analyses, the model is exposed to the horizontal accelerogram. To apply the loading, the earthquake acceleration is applied to the tank floor, which is connected to the ground, as in a real earthquake.

7. Helena earthquake loading

The Helena earthquake is applied to the tank in the $X$ direction; the results are reported and compared below. The stress in the tank wall is at a maximum at the connection of the tank to the soil and in the wall section in contact with the water, and the disturbance produced in the wall increases the internal forces of the structure; these forces are applied to the soil cumulatively. The stresses produced in the tank are due to these forces; they arise in the soil and water and are transferred to the whole tank. Fig. 7 and Fig. 8 show the model displacement and pressure stress contours due to the Helena earthquake in the horizontal direction.

Fig. 7. Displacement contour of a) circular tank, b) cubic tank, c) water in circular tank, d) water in cubic tank
Fig. 8. Pressure stress contours of the circular tank and the cubic tank

After subjecting the models to the accelerogram, time-displacement and time-pressure stress diagrams can be obtained, as in Fig. 9 and Fig. 10.

Fig. 9. Time-displacement diagrams of the circular tank and the cubic tank
Fig. 10. Time-pressure stress diagrams of the circular tank and the cubic tank

8. Models comparison

The buried tank model is subjected to different loading cases. Tables 5-7 show the results of these analyses. Table 5 compares tanks with full and half-full water, and Tables 6 and 7 compare fully buried and half-buried tanks.

Table 5. Wall pressure stress and maximum displacement in the studied models (full / half full)
Circular tank, Helena: wall pressure 577 / 59 kPa; displacement 76 / 50 mm
Circular tank, Northridge: wall pressure 1026 / 183 kPa; displacement 112 / 78 mm
Circular tank, Tabas: wall pressure 1780 / 782 kPa; displacement 175 / 143 mm
Cubic tank, Helena: wall pressure 544 / 113 kPa; displacement 96 / 85 mm
Cubic tank, Northridge: wall pressure 1108 / 590 kPa; displacement 130 / 101 mm
Cubic tank, Tabas: wall pressure 1690 / 925 kPa; displacement 196 / 149 mm

Table 6. Wall pressure stress (kPa) in fully buried vs half-buried models (fully buried / half buried / difference)
Circular tank, Helena: 34 / 59 / 42 %
Circular tank, Northridge: 126 / 183 / 31 %
Circular tank, Tabas: 645 / 782 / 18 %
Cubic tank, Helena: 79 / 113 / 30 %
Cubic tank, Northridge: 451 / 590 / 24 %
Cubic tank, Tabas: 810 / 925 / 12 %

Table 7. Maximum displacement (mm) in fully buried vs half-buried models (fully buried / half buried / difference)
Circular tank, Helena: 42 / 50 / -16 %
Circular tank, Northridge: 65 / 78 / -17 %
Circular tank, Tabas: 113 / 143 / -21 %
Cubic tank, Helena: 77 / 85 / -9 %
Cubic tank, Northridge: 89 / 101 / -12 %
Cubic tank, Tabas: 153 / 149 / +3 %

9. Conclusions

From the tank models studied, the largest displacement occurred during the Tabas earthquake and the smallest during the Helena earthquake. The quantity of water has a great effect on the wall pressure stress: the full tank shows roughly 2-10 times more pressure stress than the half-full tank. This increases the structural stresses around the tank wall in the full tank.
On the other hand, in the half-full tank the stress variations in the concrete tank wall also increase. With soil surrounding the tank, the horizontal displacement of the wall of the half-full tank increases, and the time over which the tank motion decays also increases. Thus the water level in the tank and soil-structure interaction are both very important considerations. Models fully buried in soil displace less than the same models half buried; for example, in the circular tank under the Tabas earthquake the maximum displacement decreases by 21 % when the tank is fully buried. The circular tank models show less displacement than the cubic models, and the pressure stress in the walls of the cubic tanks is roughly 10 % to 100 % higher than in the corresponding circular models.

References
• Jacobson L. S. Impulsive hydrodynamics of fluid inside a cylindrical tank and of fluid surrounding a cylindrical pier. Bulletin of the Seismological Society of America, Vol. 39, 1949, p. 189-204.
• Housner G. W. The dynamic behavior of water tanks. Bulletin of the Seismological Society of America, Vol. 53, Issue 2, 1963, p. 381-387.
• Lysmer J., Kuhlemeyer R. Finite dynamic model for infinite media. Journal of the Engineering Mechanics Division, ASCE, Vol. 95, Issue EM4, 1969, p. 859-877.
• Lysmer J., Wass G. Shear waves in plane infinite structures. Journal of the Engineering Mechanics Division, ASCE, Vol. 98, Issue EM1, 1972, p. 85-105.
• Kausel E. Local transmitting boundaries. Journal of Engineering Mechanics, ASCE, Vol. 114, Issue 6, 1988, p. 1011-1027.
• Haroun M. A. Dynamic analyses of liquid storage tanks. PhD Thesis, California Institute of Technology, Pasadena, 1979.
• Zienkiewicz O. C., Taylor R. L. The Finite Element Method. McGraw-Hill, New York, Vol. 2, 2000.
• Bonet J., Wood R. R. Nonlinear Continuum Mechanics for Finite Element Analysis. Cambridge University Press, 1997.
• Kramer S. L. Geotechnical Earthquake Engineering. Prentice Hall, New Jersey, 1996.
• Helwany S.
Applied Soil Mechanics with ABAQUS Applications. John Wiley & Sons, Canada, 2007.
• Lin W. T., Hsieh M. H., Wu Y. Ch., Huang Ch. Ch. Seismic analysis of the condensate storage tank in a nuclear power plant. Journal of Vibroengineering, Vol. 14, Issue 3, 2012, p. 1021-1031.
• Haroun M. A., Housner G. W. Seismic design of liquid storage tanks. Journal of the Technical Councils of ASCE, Vol. 107, Issue 1, 1981, p. 191-207.
• Seleemah A. A., El-Sharkawy M. Seismic response of base isolated liquid storage ground tanks. Ain Shams Engineering Journal, Vol. 2, 2011, p. 33-42.
• Jankowiak T., Lodygowski T. Identification of parameters of concrete damage plasticity constitutive model. Foundations of Civil and Environmental Engineering, Vol. 6, 2005.

About this article: received 05 September 2013. Keywords: seismic analysis, buried tanks, finite element method. Copyright © 2014 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
CS2208b Assignment 5

QUESTION 1 (100 marks)
Recursion is a method where the solution to a problem depends on solutions to smaller instances of the same problem. A function is considered recursive if it calls itself. The following function computes x^n recursively, where x is an integer number and n is a non-negative integer number.

int power(int x, unsigned int n)
{
    int y;
    if (n == 0)
        return 1;
    if (n & 1)
        return x * power(x, n - 1);
    y = power(x, n >> 1);
    return y * y;
}

Draw a detailed flowchart and write an ARM assembly program to calculate x^n using the above recursive function, where n is passed by value through the stack to the function and the returned value is stored in the stack just above the parameter. No other registers may be modified by the power function. Once control is completely returned from the function (i.e., after calculating x^n), the returned value must be stored in a local variable (called result) in the main function. Your code should be highly optimized, i.e., use as few instructions as possible. You should utilize a big enough stack so that you can calculate x^n for various n values. How many stack frames are needed to calculate x^n, when n = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12?
Delete middle element of Queue

In this article, we discuss how to delete the middle element of a queue without using other data structures. We have explained both the iterative and the recursive approach.

Table of contents:
1. Introduction to Queue
2. Problem statement: Delete the middle element from queue
3. Iterative Approach
4. Recursive Approach
5. Time and Space Complexity Analysis

Introduction to Queue
A queue is an abstract linear data structure that stores elements in sequential order. It uses the FIFO (first in, first out) approach for accessing elements. Picture a line of people in a bank, all waiting to be served by the teller: a new person joins the queue from the end, and the first person in the queue is the first to leave it (first come, first served).

Common types of queues:
1. Simple queue. Just as described above, insertions and deletions take place at the back and front respectively. Applications include process scheduling, I/O buffers, disk scheduling.
2. Circular queue. The first node is connected to the last node; also known as a ring buffer. Insertions take place at the front and deletions at the end of the queue. Applications include memory management, CPU scheduling, computer-controlled traffic systems.
3. Priority queue. Nodes in the queue have an assigned priority, so the node with the highest/lowest priority is the first to be removed. They can be ascending or descending priority queues. Applications include CPU-scheduling algorithms, Prim's algorithm, heap sort.
4. Double-ended queue (deque, pronounced 'deck'). Insertions and deletions take place at both the front and the end of the queue. Applications include saving browser history, stack implementation.

Queue operations:
1. enqueue(): add an element to the end of the queue.
2. dequeue(): remove an element from the front of the queue.
3. peek()/front(): returns the element at the front of the queue.
4.
rear(): returns the last element of the queue.
5. isFull(): returns boolean true or false based on the capacity of the queue.
6. isEmpty(): returns boolean true if the queue is empty, false otherwise.

Problem statement: Delete the middle element from queue
Input: queue[] = [1, 2, 3, 4, 5], n = 5, mid = 3.
Output: [1, 2, 4, 5]
The queue size is an odd number, so the middle element is 3.
Input: queue[] = [1, 2, 3, 4, 5, 6, 7, 8], n = 8, mid = 4.
Output: [1, 2, 3, 5, 6, 7, 8]
The queue size is an even number, so there are two elements in the middle, 4 and 5; we remove the first of them.

Iterative Approach
We initialize a curr variable to store the current position in the iteration. While curr is not equal to the size of the queue, we pop elements from the queue and push them back, except in the case where curr is the middle index.
1. Initialize a curr variable to store the current position.
2. Loop through the queue while it has unprocessed elements; store the front element before popping it from the queue.
3. Push the popped elements back onto the queue, excluding the middle element (index n/2, or n/2 - 1 when n is even).
4. The queue now has all its previous elements excluding the middle element.

#include <iostream>
#include <queue>
using std::queue;
using std::cout;
using std::endl;

class RemoveMid{
public:
    // iterative approach
    void removeMidIter(queue<int> &q){
        int n = q.size(), curr = 0;
        // if queue size is even, remove the first of the two middle elements
        int x = ((n % 2) == 0) ? (n/2 - 1) : (n/2);
        // pop from the main queue and push back, except where curr == x
        while(curr != n){
            int i = q.front();
            q.pop();
            if(curr != x)
                q.push(i);
            curr += 1;
        }
    }

    // helper to print queue
    void printQueue(queue<int> q){
        while(!q.empty()){
            int s = q.front();
            q.pop();
            cout << s << " ";
        }
        cout << endl;
    }
};

int main(){
    RemoveMid rm;
    queue<int> q;
    for(int i = 1; i <= 7; i++)
        q.push(i);
    rm.removeMidIter(q);
    rm.printQueue(q);
    return 0;
}

Recursive Approach
We use a recursive approach in which we pop elements from the queue and push them back, this time skipping the middle element. The algorithm terminates when the base case (empty queue or all elements processed) is triggered.
1. Initialize a curr variable to store the current position.
2.
Store the front element of the queue in a variable, then pop the queue.
3. Check the middle-element condition: if curr is not equal to the mid index, push the front element back onto the queue.
4. Recursively repeat the above steps; the recursion stops when the queue is empty or curr is equal to the queue size.
5. Finally we have a queue without the middle element.

#include <iostream>
#include <queue>
using std::queue;
using std::cout;
using std::endl;

class RemoveMid{
public:
    // recursive approach
    void removeMidRec(queue<int> &q, int n, int curr = 0){
        // base case (queue is empty or all elements processed)
        if(q.empty() || curr == n)
            return;
        // if queue size is even, remove the first of the two middle elements
        int x = ((n % 2) == 0) ? (n/2 - 1) : (n/2);
        // remove the front element
        int i = q.front();
        q.pop();
        // add the element back to the queue unless it is the middle element
        if(curr != x)
            q.push(i);
        removeMidRec(q, n, curr + 1);
    }

    // helper to print queue
    void printQueue(queue<int> q){
        while(!q.empty()){
            int s = q.front();
            q.pop();
            cout << s << " ";
        }
        cout << endl;
    }
};

int main(){
    RemoveMid rm;
    queue<int> q;
    for(int i = 1; i <= 7; i++)
        q.push(i);
    rm.removeMidRec(q, q.size());
    rm.printQueue(q);
    return 0;
}

What happens
Given the queue q = [1, 2, 3, 4, 5], with n = 5 the mid index x is 2 and the front element in each call is i:
1st call: curr = 0, i = 1, queue = [2, 3, 4, 5, 1]
2nd call: curr = 1, i = 2, queue = [3, 4, 5, 1, 2]
3rd call: curr = 2, i = 3, queue = [4, 5, 1, 2]; curr equals the mid index, so i is not pushed back
4th call: curr = 3, i = 4, queue = [5, 1, 2, 4]
5th call: curr = 4, i = 5, queue = [1, 2, 4, 5]
6th call: curr = 5 = n, so the base case is triggered and the algorithm terminates.

Time and Space Complexity Analysis
All queue operations take constant time, and traversing a queue of size n takes O(n) time. We need no extra space because we reuse the same queue, so the auxiliary space complexity is constant, O(1). (The recursive version additionally uses O(n) call-stack space.)

1. Can you think of another approach to solve this problem?
2. Implement a queue using stacks and a stack using a queue.
Removing Silly Mistakes in the AQE & GL Test

One of the biggest questions I regularly get asked is around how to remove silly mistakes. Of course it is a great question and it is very important because, as you may have seen, silly mistakes can affect the overall test score considerably. Whilst it is often hoped that the child will grow out of this and on the big day they will magically disappear, I'd prefer to have a little more control and structure to increase the chances of that being the case. So, I hope this process helps you in your quest to remove those pesky, silly mistakes. But I didn't dream this up overnight, and I want to give credit for this learning, which has been taken from Anish Passi, who has shared his conceptual and test-taking expertise with students over the last decade. I have simply tweaked the main concepts and made them applicable to AQE / GL students.

Before launching into the process, Anish recommends that everyone realises two small points:

1. We all make silly mistakes. He goes further to suggest that if you feel you do not make silly mistakes while practicing, you're either not practicing or not looking hard enough.
2. Making silly mistakes is a systemic problem. What he means by this is that many times we brush off silly mistakes as one-off cases. ("Surely I'll not repeat such a mistake again.") If silly mistakes were "one-off", wouldn't you be done with them by now?

The 5-Stage Process To Removing Silly Mistakes
1. Identify The Silly Mistake
2. Classify Into Categories
3. Figure Out Root Cause
4. Eliminate Silly Mistakes
5. Taking Countermeasures

Step 1: Identify The Silly Mistake

This first stage requires that a log is maintained detailing all the silly mistakes when they occur. It is important that this activity takes place over a good period of time before the date of the test, so that trends can be identified and remedial action put in place.
This log can be on a piece of paper or a computerised log, but it is important that it exists and is used regularly over the prep time.

Step 2: Classify Into Specific Categories

All silly mistakes can primarily be classified into two broad types:
1. Reading / Misunderstanding The Question
2. Solving / Calculating The Answer

N.B. If you find that the child did not know how to approach the question, this is not a silly mistake; this is a knowledge gap which can be plugged with further revision and focus on that topic.

Then do a deeper dive into each mistake and record the specifics. For example, your log may now say:

Silly Mistake 1:
• Calculating
• Mistake – 2 x 3 = 5

Silly Mistake 2:
• Reading
• Mistake – Read 7 as a 1 in working out.

Step 3: Figure Out Root Cause

This step is very important and will help you understand what you really need to eliminate, and step 4 then becomes very easy, so take time to carefully consider why each silly mistake occurred. This should also be noted on your log; below is our continued example:

Silly Mistake 1:
• Calculating
• Mistake – 2 x 3 = 5
• No evidence of rough work; done in head

Silly Mistake 2:
• Reading
• Mistake – Read 7 as a 1 in working out
• Working out was small and messy and squeezed into a tiny gap

Step 4: Eliminate Silly Mistakes

Once you have identified the root cause behind the silly mistake, figuring out ways to eliminate the root cause is comparatively easy. Some examples are expanded below to give you a flavour of this process:

• Root Cause 1: Six silly mistakes in the log show that this is because there is no evidence of rough work and calculations are being done in the head.
• Solution 1: Make sure that all calculations are done on paper, as this will also support the ability to quickly check for silly mistakes at the end (there is no time to re-read the whole question, but you can check calculations).
• Root Cause 2: Ten silly mistakes were due to messy rough work and not being able to read the numbers properly, regularly mixing up the 7 and 1, and the 6 and 0.
• Solution 2: Stop being messy. Ok, you are not likely to change your handwriting overnight, but make a conscious effort to be neater as it is clearly costing you important marks. Don't try to squeeze everything into one rough work sheet; ask for more paper. Space out your calculations and write in a logical fashion down the page, not sporadically and randomly all over the place.

Step 5: Taking Countermeasures

Countermeasures are ways in which you can ensure that you don't make the silly mistake in the first place, or they can be a sanity check at the end. With the root causes identified for your child, you will want to design a number of countermeasures that can be the final step in making sure these mistakes are eliminated. So, some examples:

• If your child made the mistake of 7 x 9 = 56, a sanity check could be 'two odd numbers multiplied will produce an odd number answer', so this mistake would have been caught.
• If your child was working out the average of 5 numbers which were 6.3, 7.4, 8.3, 7.8 and 5.3 and they worked out the answer to be 4.5, a sanity check could simply be 'does that make sense', 'is that likely to be right' (an average must lie between the smallest and largest values, so it cannot be below 5.3).
• If your child always got lost in the calculations and answered the question he thought he should be asked, rather than what he was being asked, then a countermeasure could be 'underline the actual question' or 'quickly write the actual question on the rough work paper'. This could also be a root cause solution.

Each child is unique, and so this analysis needs to happen for each individual child; one size will not fit all. But I hope that I have armed you with a tool that you can take forward with the objective of removing silly mistakes, rather than simply hoping the big day will be any different from any other time.
Computational Thinking in Mathematics Education

Professor (Queen's University)

Computational Thinking + Math Projects
• Biology: modeling behaviour, inclusive fitness, evolutionary stability.
• Mathematics education: curriculum development, both at the high school and university level.
• On Virgil: my opening lecture to Mathematics 120, For the Learning of Mathematics 1 (1980) 49-52. Zen in the art of archery and teaching, Journal of Literary Theory 4 (1983), 73-81.
• The Randolph Lecture: Calculus, where are we going? Seaway Section, MAA, Oswego, NY, Nov. 1990.
• Care, not Speed, IEEE Canadian Review No. 11, Fall, 1991.
• Calculus, The Analysis of Functions, Wall & Emerson, Toronto, 1992. 480 pp.
• Royal Canadian Institute Lecture: The calculus revolution: right revolution–wrong subject, Toronto, Feb. 1994.
• Complex Variables: the poetry and music of W.J. Barnes, Quarry Press, 1994. 131 pp. (with D. Helwig).
• Panelist: The Power of Teaching and Learning, SFU Centre for University Teaching, Vancouver.
• The MacClement Lecture, Mathematics and Poetry, Queen's University Fac. of Education, March, 1995.
• Small Napkins, Facts and Arguments, Globe and Mail, February 6, 1997.
• Plenary Speaker, CMESG: The High School Math Curriculum. Thunder Bay, May 25, 1997.
• Plenary Speaker, Changing the Culture Conference: Post-modernism and High School Math Education. PIMS, Simon Fraser University, Feb. 20, 1998.
• "Training" our students. Canadian J. of Math, Science and Technology 1 (2000) 110-116 (with Nathalie Sinclair).
• Regular Lecturer, Reinventing the Teacher, International Congress of Mathematics Education, Tokyo, August 2000 (with Nathalie Sinclair).
• Speaker: Inquiry and design, Canadian Math Society Winter Meeting, Ottawa, December 2002.
• Working Group co-leader: Rethinking math thinking in secondary math classes. CMS Math Education Forum, Montreal, May 16-18, 2003.
• CMS Winter Meeting, December 2003, Vancouver. Education Session: The tyranny of reality.
• The teacher as artist: a letter to my colleagues. MAA Focus 24, May/June 2004: 8-9.
• Review of the CUPM Curriculum Guidelines 2004. MAA Online, July 2004.
• What the history of art can teach us about teaching. MAA Seaway Banquet address, April 2005.
• Banquet Speaker, "Whitehead's stage of romance", CUMC Annual Meeting, Kingston, July 13, 2005.
• Keynote Speaker: Workshop on preparation of students for the preparatory-year program at the King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, Sept. 2006.
• The Adrien Pouliot Prize Lecture: The structure of a mathematics curriculum. CMS Winter Meeting, Toronto, Dec. 2006.
• Canadian Club Luncheon Speaker: What's wrong with the high school curriculum? December 2006.
• Kingston Rotary Club Breakfast Speaker: An interesting photographic problem. January 2007.
• Keynote address: Will the real Q please step forward? 9th annual symposium on innovative teaching, Simon Fraser University, May 2007.
• Mathematical Lens. Mathematics Teacher 101 (2007): 179-182.
• "Last Lecture," Queen's MiniU, May 2011: God is also a mathematician.

Awards
• ASUS Teaching Award 1986
• MAA Distinguished Teaching Award, Seaway Section 1992
• 3M Teaching Fellowship 1994
• Golden Apple, Faculty of Applied Science, Queen's 1995
• OCUFA Teaching Award 2003
• CMS Adrien Pouliot Award for Mathematics Education 2006
A special matrix type and solver, designed for finite volume solutions of scalar equations.

Original source file: fvMatrices.H
In class: Foam::fvMatrix

Face addressing is used to make all matrix assembly and solution loops vectorise.

Definition in file fvMatrices.H.
Observation of b→dγ and determination of |V_td/V_ts|

We report the observation of the flavor-changing neutral current process b→dγ using a sample of 386×10⁶ B meson pairs accumulated by the Belle detector at the KEKB e⁺e⁻ collider. We measure branching fractions for the exclusive modes B⁻→ρ⁻γ, B̄⁰→ρ⁰γ, and B̄⁰→ωγ. Assuming that these three modes are related by isospin, we find B(B̄→(ρ,ω)γ) = (1.32 +0.34/−0.31 (stat) +0.10/−0.09 (syst)) × 10⁻⁶ with a significance of 5.1σ. This result is used to determine the ratio of Cabibbo-Kobayashi-Maskawa matrix elements |V_td/V_ts| to be 0.199 +0.026/−0.025 (exp) +0.018/−0.015 (theor).

ASJC Scopus subject areas
• General Physics and Astronomy
As a noun, calculation is (mathematics, uncountable) the act or process of calculating. As a letter, x is the twenty-fourth letter of the alphabet. As a symbol, x is the voiceless velar fricative.

As nouns, the difference between calculation and arithmetics is that calculation is (mathematics, uncountable) the act or process of calculating.

As nouns, the difference between calculation and competition is that calculation is (mathematics, uncountable) the act or process of calculating, while competition is competition.

As nouns, the difference between tabulation and calculation is that tabulation is the act or process of tabulating, while calculation is (mathematics, uncountable) the act or process of calculating.

As nouns, the difference between calculation and consideration is that calculation is (mathematics, uncountable) the act or process of calculating, while consideration is the process of considering.

As a noun, calculation is (mathematics, uncountable) the act or process of calculating. As an adjective, undefined is lacking a definition or value.

As nouns, the difference between calculation and simulation is that calculation is (mathematics, uncountable) the act or process of calculating, while simulation is simulation.

As a noun, calculation is (mathematics, uncountable) the act or process of calculating. As a verb determine is

As nouns, the difference between calculation and check is that calculation is (mathematics, uncountable) the act or process of calculating, while check is (chess) a situation in which the king is directly threatened by an opposing piece, or can be (textiles, usually pluralized) a pattern made up of a grid of squares of alternating colors; a checkered pattern. As a verb, check is to inspect; to examine.

As an adjective, mathematical is of, or relating to, mathematics. As a noun, calculation is (mathematics, uncountable) the act or process of calculating.
Understanding Proficiency provides resources that guide educators in analyzing student work on performance tasks in order to develop a deeper understanding of the Common Core State Standards in mathematics.

What you'll find:
• Examples of student responses to Smarter Balanced mathematics performance tasks^* — administered, scored and annotated by teachers — for all score levels and for all tested grades (grades 3–8 and high school)
• Case studies that provide analysis of individual students' work across all of the questions in the performance tasks, including samples from English learners (EL)
• Professional development activities to support educators in leveraging these resources for their own learning

^*All mathematics performance tasks come from the Smarter Balanced Practice Test released in February 2017.

Select a grade level to explore grade-specific resources.

About Smarter Balanced Math Performance Tasks

In order to gather evidence of college- and career-readiness, Smarter Balanced developed four "claims" that focus on students' ability to understand and do math, as well as apply mathematical knowledge. Claims are broad statements of an assessment system's learning outcomes. The Smarter Balanced Claims in mathematics cover both the content standards and the standards for mathematical practice within the Common Core State Standards. The performance task portion of the Smarter Balanced assessment is specifically designed to measure and gather evidence of students' ability to apply mathematical knowledge to real-life situations in which they are required to problem solve, communicate their reasoning, analyze information and data, and model with mathematics. (The computer adaptive portion of the test focuses on grade-level mathematical content but it also includes the mathematical practices.)
Because the purpose of the performance task portion of the test is to assess evidence of Claims 2, 3, and 4 (shown below), the tasks primarily rely on content skills studied in prior grade levels.

Smarter Balanced Mathematics Claims
1. Concepts & Procedures: Students can explain and apply mathematical concepts and carry out mathematical procedures with precision and fluency.
2. Problem Solving: Students can frame and solve a range of complex problems in pure and applied mathematics.
3. Communicating & Reasoning: Students can clearly and precisely construct viable arguments to support their own reasoning and to critique the reasoning of others.
4. Data Analysis & Modeling: Students can analyze complex, real-world scenarios and can use mathematical models to interpret and solve problems.

Every performance task begins with a "Stimulus" that sets the context for the task and provides some of the data or parameters needed to complete the task. Every performance task includes four to six items, or questions, which are all connected to the Stimulus. The first one or two items of a task are intended as entry-level questions to prompt students to review and connect information provided within the Stimulus. Usually, student responses to these first items are machine-scored. Subsequent items are more complex, requiring synthesis of new information, analysis of results, modeling, explanation, and/or justification. Student responses to these more complex items are usually human-scored, using an item-specific rubric. Some tasks have "dependent items," meaning a response to one item depends on a response to a previous item. The scoring rules for dependent items instruct scorers to follow through with the student work from earlier items on which subsequent items depend: If the student incorrectly answers the earlier item, but then uses that response correctly in the dependent item that follows, the student would earn full credit for the dependent item.
In this manner, the student is not penalized twice for the same mistake, and is rewarded for the mathematical thinking and problem-solving demonstrated in the dependent item.

• Stimulus
  □ Item 1
  □ Item 2
  □ Item 3
  □ Item 4 (also connects to Item 5)
  □ Item 5 (also connects to Item 4)

The diagram above represents a task in which Item 5 depends on Item 4. The performance tasks for Grades 5, 6, and 8 all provide examples of scoring dependencies.
A sample of nitrogen gas has a volume of 32.4 L at 20°C. The gas is heated to 220°C at constant pressure. What is the final volume of nitrogen? | Socratic

1 Answer

The volume of nitrogen gas at 493 K will be 54.5 L.

This is an example of Charles' law, which states that the volume of a gas held at constant pressure is directly proportional to the Kelvin temperature. The equation for Charles' law is

$V_1/T_1 = V_2/T_2$

$V_1 = 32.4\ \text{L}$
$T_1 = 20^\circ\text{C} + 273.15 = 293\ \text{K}$
$T_2 = 220^\circ\text{C} + 273.15 = 493\ \text{K}$
$V_2 = \ ?$

Rearrange the equation to isolate $V_2$ and solve:

$V_2 = \dfrac{V_1 T_2}{T_1} = \dfrac{32.4\ \text{L} \times 493\ \cancel{\text{K}}}{293\ \cancel{\text{K}}} = 54.5\ \text{L}$
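Not part of the original answer, but as a quick numeric sanity check, the same rearranged formula can be evaluated in a few lines of JavaScript (using the unrounded Kelvin temperatures):

```javascript
// Charles' law at constant pressure: V1/T1 = V2/T2, so V2 = V1 * T2 / T1
const V1 = 32.4;        // initial volume in L
const T1 = 20 + 273.15; // initial temperature in K
const T2 = 220 + 273.15; // final temperature in K

const V2 = V1 * T2 / T1;
console.log(V2.toFixed(1) + " L"); // prints "54.5 L"
```

Whether the temperatures are rounded to 293 K and 493 K (as in the worked solution) or kept unrounded, the result agrees to one decimal place.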
Undergrad Resources | Mathematics

Current mathematics majors are here to help answer any questions you may have about the department, different programs, or research. Check out the Peer Advisors page for this quarter's peer advisor contact information and office hours.

On Campus Resources

The Stanford University Mathematical Organization (SUMO) is the by-students-for-students math club dedicated to promoting mathematics on the Stanford campus and elsewhere by hosting various events and providing tutoring resources. They host various events that are open to everyone: math majors and non-math majors, graduate and undergraduate.

SWIMM is a mentoring program that aims to reduce the gender gap in mathematics. Undergraduates interested in math and computational science are paired with graduate student mentors from math, statistics, and ICME. SWIMM also hosts events, including study halls, dinners, and talks.

During the summer, the department offers the Stanford Undergraduate Research Institute in Mathematics (SURIM) where students can work on a research project either one-on-one with a faculty member or in a group mentored by a graduate student. Please visit the SURIM site for information on the application, program dates, eligibility, and funding information.

Directed Reading Program (DRP)

Students interested in independent reading with a graduate student mentor may wish to participate in the Directed Reading Program. The Directed Reading Program is a program of Stanford's Graduate Mathematics Outreach Organization in which undergraduate students (of any major) interested in independently reading some mathematics outside of their official coursework are paired for a quarter with math graduate students for weekly guidance and discussions. At the end of the quarter, participants gather for a colloquium in which each participant gives a short talk about their reading. The program began in winter quarter 2017.
With instructor and departmental approval, students may also enroll in Math 199 to do reading courses with faculty members, which requires approaching individual professors directly.

External Organizations

American Mathematical Society (AMS): Offers many resources on their student page, including information about math programs, graduate school, research, careers, and more.

Stanford Career Education: Stanford Career Education (CareerEd) provides online and in-person career planning resources including job/internship searches (for both academic and non-academic searches), resume, cover letter and CV writing, interviewing and job talk preparation, self-assessment, networking, and more.

Mathematical Association of America (MAA): Offers many resources on their Professional Development & Resources page and their Community Events page, including math meetings, research careers, links to other organizations, and more.
Towards the Solution of Mixed-Integer Nonlinear Optimization Problems using Simultaneous Convexification

Liers F, Martin A, Merkert M, Mertens N, Michaels D (2020)

Publication Language: English
Publication Type: Other publication type
Publication year: 2020
Pages Range: 36
URI: https://opus4.kobv.de/opus4-trr154/frontdoor/index/index/searchtype/latest/docId/303/start/0/rows/10

Solving mixed-integer nonlinear optimization problems (MINLPs) to global optimality is extremely challenging. An important step for enabling their solution consists in the design of convex relaxations of the feasible set. Known solution approaches based on spatial branch-and-bound become more effective the tighter the used relaxations are. Relaxations are commonly established by convex underestimators, where each constraint function is considered separately. Instead, a considerably tighter relaxation can be found via so-called simultaneous convexification, where convex underestimators are derived for more than one constraint at a time. In this work, we present a global solution approach for solving mixed-integer nonlinear problems that uses simultaneous convexification. We introduce a separation method for the convex hull of constrained sets. It relies on determining the convex envelope of linear combinations of the constraints and on solving a nonsmooth convex problem. In particular, we apply the method to quadratic absolute value functions and derive their convex envelopes. The practicality of the proposed solution approach is demonstrated on several test instances from gas network optimization, where the method outperforms standard approaches that use separate convex relaxations.

How to cite

Liers, F., Martin, A., Merkert, M., Mertens, N., & Michaels, D. (2020). Towards the Solution of Mixed-Integer Nonlinear Optimization Problems using Simultaneous Convexification.

Liers, Frauke, et al.
Towards the Solution of Mixed-Integer Nonlinear Optimization Problems using Simultaneous Convexification. 2020.
Discrete Random Variables (4 of 5)

Learning Outcomes
• Use probability distributions for discrete and continuous random variables to estimate probabilities and identify unusual events.

Standard Deviation for a Discrete Random Variable

The mean of a discrete random variable gives us a measure of the long-run average, but it gives us no information at all about how much variability to expect. For example, earlier we found that the average cafeteria wait time at Rushmore Community College was 14 minutes. Put in terms of our random variable, this means over the long run, if we continued to keep track of wait times for students entering the cafeteria, their times would average 14 minutes. Some students would get their food in less than 14 minutes, and some would have to wait longer.

Is that all we need to know? Suppose on the one hand the average time was 14 minutes, but we knew that it was most likely that times would range from 8 to 20 minutes. Compare that to a situation where again the average time was 14 minutes, but it was most likely that times would range only from 13 to 15 minutes. That would give us a different picture of what the problem at the cafeteria might be.

What we need is a measure of how much variability to expect in a random variable X over the long run. The standard deviation is that measure. Just as we need both the mean and standard deviation to get a full picture of the shape of a data set, we need both the mean and standard deviation of a random variable to understand its likely long-term behavior.

In Summarizing Data Graphically and Numerically, we used the following formula to compute the standard deviation of a data set:

[latex]s=\sqrt{\frac{\sum {(x-\stackrel{¯}{x})}^{2}}{n-1}}[/latex]

As you may recall, the most important part of this formula is the term inside the square root, which we call the average of the squares of the deviations from the mean. As we will see, the formula for the standard deviation for a discrete random variable has a lot in common with this formula.
Here is the formula for the standard deviation of a discrete random variable:

[latex]{\sigma}_{x}=\sqrt{\sum {(x-{\mathrm{μ}}_{x})}^{2}⋅P(x)}[/latex]

Note that [latex]P(x)[/latex] represents the probability of x, where x is a value of the random variable X. And [latex]{\mathrm{μ}}_{x}[/latex] again stands for the mean of X. Again, we focus on the term inside the square root:

[latex]\sum {(x-{\mathrm{μ}}_{x})}^{2}⋅P(x)[/latex]

The term [latex](x-{\mathrm{μ}}_{x})[/latex] here represents the deviation of each value of the random variable X from the mean [latex]{\mathrm{μ}}_{x}[/latex], just as the term [latex](x-\stackrel{¯}{x})[/latex] represents the deviation of each observation of the data set from the mean [latex]\stackrel{¯}{x}[/latex]. In both cases, we proceed to sum the squares of these deviations. In the case of a data set, we divide by n − 1 to find the average squared deviation. However, in the case of a discrete random variable, we again use a weighted average. Why? Because we don't want to give undue weight to values of X that are unlikely to occur. So those values of X, even if far from the mean [latex]{\mathrm{μ}}_{x}[/latex], will not contribute much to the standard deviation if their probability is low. On the other hand, values of X with large probabilities will count more in our calculation of the standard deviation of X.

Cafeteria Wait Times

Let's revisit the problem about wait times in the cafeteria at Rushmore Community College. Recall the following probability distribution.

X = Time (minutes)   5      10     15     20     25
P(X)                 0.15   0.26   0.31   0.20   0.08

On the previous page, we found that the average wait time is 14 minutes. Now we will compute the standard deviation of wait times and think a bit about what it tells us. We start by computing the squared deviations from the mean and weighting them by the probability.
For the first value of X, we have

[latex]{(5-14)}^{2}⋅(0.15)=12.15[/latex]

Performing the same operation on the next three values of X will give us

[latex]\begin{array}{l}{(10-14)}^{2}⋅(0.26)=4.16\\ {(15-14)}^{2}⋅(0.31)=0.31\\ {(20-14)}^{2}⋅(0.20)=7.2\end{array}[/latex]

and for the last value,

[latex]{(25-14)}^{2}⋅(0.08)=9.68[/latex]

The next step of the formula is to add up the weighted squared deviations from the mean, as follows:

[latex]12.15+4.16+0.31+7.2+9.68=33.5[/latex]

Finally, we take the square root to obtain the standard deviation of the wait time:

[latex]{\sigma}_{x}=\sqrt{33.5}≈5.8\text{ minutes}[/latex]

Recall that in Summarizing Data Graphically and Numerically we used the standard deviation of a quantitative data set to give a range of typical values. This range of typical values was formed by blocking off an interval 1 standard deviation to the right and left of the mean. In other words, the range of typical values was [latex]\left[\stackrel{¯}{x}-1⋅\mathrm{SD},\stackrel{¯}{x}+1⋅\mathrm{SD}\right][/latex]. Exactly the same thing can be done in the current context of random variables.
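As a side note (my own check, not part of the lesson), the weighted mean and standard deviation of the wait-time distribution can be verified with a short script; the (x, P(x)) pairs come from the probability table above:

```javascript
// (x, P(x)) pairs from the cafeteria wait-time distribution
const dist = [[5, 0.15], [10, 0.26], [15, 0.31], [20, 0.20], [25, 0.08]];

// weighted mean: sum of x * P(x)
const mean = dist.reduce((s, [x, p]) => s + x * p, 0);

// weighted average of squared deviations from the mean, then square root
const variance = dist.reduce((s, [x, p]) => s + (x - mean) ** 2 * p, 0);
const sd = Math.sqrt(variance);

console.log(mean.toFixed(0), sd.toFixed(1)); // prints: 14 5.8
```

Note that, unlike the data-set formula, nothing is divided by n − 1 here; the probabilities themselves do the weighting.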
Word Problems involving Addition and Subtraction of Fractions (examples, solutions, videos, worksheets, lesson plans)

Related Pages
Fraction Word Problems
More Fraction Word Problems
Harder Fraction Word Problems
Lesson Plans and Worksheets for Grade 4
More Lessons for Grade 4
Common Core For Grade 4

Common Core Standards: 4.NF.3a, 4.NF.3d, 4.NF.1, 4.MD.2
NYS Common Core Grade 4 Module 5, Lesson 19

Lesson 19 Concept Development

Problem 4: Use the RDW process to solve a word problem involving the subtraction of fractions.
Mrs. Jones had 1 4/8 pizzas left after a party. After giving some to Gary, she had 7/8 pizza left. What fraction of a pizza did she give Gary?
One way is to rewrite the mixed number as 12/8 and subtract. The other method subtracts from the whole and adds back the fractional part.

Lesson 19 Problem Set

Use the RDW process to solve.
1. Sue ran 9/10 mile on Monday and 7/10 mile on Tuesday. How many miles did Sue run in the 2 days?
2. Mr. Salazar cut his son's birthday cake into equal pieces. Mr. Salazar, Mrs. Salazar, and the birthday boy each ate 1 piece of cake. What fraction of the cake was left?
3. Maria spent 4/7 of her money on a book and saved the rest. What fraction of her money did Maria save?
4. Mrs. Jones had 1 4/8 pizzas left after a party. After giving some to Gary, she had 7/8 pizza left. What fraction of a pizza did she give Gary?
5. A baker had 2 pans of corn bread. He served 1 1/4 pans. What fraction of a pan was left?
6. Marius combined 4/8 gallon of lemonade, 3/8 gallon of cranberry juice, and 6/8 gallon of soda water to make a punch for a party. How many gallons of punch did he make in all?

Lesson 19 Homework

Use the RDW process to solve.
1. Isla walked 3/4 mile each way to and from school on Wednesday. How many miles did Isla walk that day?
2. Zach spent 2/3 hour reading on Friday and 1 1/3 hours reading on Saturday. How much more time did he read on Saturday than on Friday?
3. Mrs. Cashmore bought a large melon.
She cut a piece that weighed 1 1/8 pounds and gave it to her neighbor. The remaining piece of melon weighed 6/8 pound. How much did the whole melon weigh? 4. Ally’s little sister wanted to help her make some oatmeal cookies. First, she put 5/8 cup of oatmeal in the bowl. Next, she added another 5/8 cup of oatmeal. Finally, she added another 5/8 cup of oatmeal. How much oatmeal did she put in the bowl? 5. Marcia baked 2 pans of brownies. Her family ate 1 5/6 pans. What fraction of a pan of brownies was left? 6. Joanie wrote a letter that was 1 1/4 pages long. Katie wrote a letter that was 3/4 page shorter than Joanie’s letter. How long was Katie’s letter? Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
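The two subtraction strategies described in the Problem 4 discussion can be checked with Python's fractions module (a quick verification sketch, not part of the lesson itself):

```python
from fractions import Fraction

# Problem 4: Mrs. Jones started with 1 4/8 pizzas and ended with 7/8 pizza.
# Strategy 1: rename the mixed number 1 4/8 as 12/8, then subtract.
gave_away = Fraction(12, 8) - Fraction(7, 8)

# Strategy 2: subtract 7/8 from the whole (8/8), then add back the 4/8.
gave_away_2 = (Fraction(8, 8) - Fraction(7, 8)) + Fraction(4, 8)

print(gave_away)  # 5/8
```

Both strategies give the same answer: Mrs. Jones gave Gary 5/8 of a pizza.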
{"url":"https://www.onlinemathlearning.com/add-subtract-fractions-word-problems.html","timestamp":"2024-11-06T01:55:24Z","content_type":"text/html","content_length":"39367","record_id":"<urn:uuid:ead86411-cdc5-42d4-a8bb-cfe74fde7d18>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00207.warc.gz"}
Re: PDF Forms - Calculating the discount / JavaScript Hello my friends, I have a problem which I am hoping can be solved with your help :) I made a PDF form that calculates the net and gross price, where the customer can type in how many of each product they want to buy, and it all works fine (attachment). The problem is that I want to apply a discount (for example 10%) to the total price if the customer orders 2 or more products, and I'm stuck. Do you have any ideas how I can make this happen? I can write JavaScript code but I don't know how to implement it in the PDF. Is it possible to use the total net price as a variable? Any ideas will be very valuable, cheers!
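One possible sketch of the discount logic the post asks about, written as a plain JavaScript function. In Acrobat, roughly this logic would go in the total field's custom calculation script (Text Field Properties → Calculate → Custom calculation script), reading the inputs with this.getField(...) and assigning the result to event.value; the field names "Quantity" and "NetTotal" below are hypothetical and would need to match the actual form.

```javascript
// Discount rule: 10% off the net total when the customer orders 2+ products.
// Inside an Acrobat custom calculation script you would wire it up as, e.g.:
//   var qty = Number(this.getField("Quantity").value);  // hypothetical field
//   var net = Number(this.getField("NetTotal").value);  // hypothetical field
//   event.value = discountedTotal(net, qty);
function discountedTotal(netTotal, quantity) {
  var rate = quantity >= 2 ? 0.10 : 0;  // discount only for 2 or more products
  var total = netTotal * (1 - rate);
  return Math.round(total * 100) / 100; // round to cents
}

console.log(discountedTotal(100, 1)); // 100 (no discount)
console.log(discountedTotal(100, 3)); // 90 (10% off)
```

This answers the last question in the post: yes, the total net price field can be read as a variable inside the calculation script and reused in the discount formula.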
{"url":"https://community.adobe.com/t5/acrobat-sdk-discussions/pdf-forms-calculating-the-discount-javascript/m-p/11355236?attachment-id=44336","timestamp":"2024-11-13T00:24:10Z","content_type":"text/html","content_length":"882692","record_id":"<urn:uuid:6d042922-31e2-4034-a2db-d76851700718>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00103.warc.gz"}
Option Rho

What Is Rho

Rho measures an option's sensitivity to changes in interest rates – how much the option premium will change if the risk-free interest rate increases by one percentage point.

Consider a call option on a stock with 3 months left to expiration, which is currently trading at $2.35 (option premium). Its rho is 0.15 and the 3-month risk-free interest rate is 3%. The option's rho indicates that if the interest rate increases by one percentage point to 4%, the option premium should rise by $0.15 to $2.50. Conversely, if the interest rate declines by one percentage point to 2%, the option premium should decrease by $0.15 to $2.20. All this assumes that the other factors (the underlying stock's price, implied volatility, and time to expiration) remain the same.

Rho Units

Rho is the ratio of option price change (in dollars) to interest rate change (in percentage points). Therefore, its units are dollars per percentage point, although in practice units are rarely mentioned, as with the other Greeks.

Rho Values

There is no theoretical limit on the values rho can reach. It can be positive or negative, but it is usually a very small number. In most cases, interest rates have a far smaller effect on option prices than underlying price or volatility. Interest rate changes tend to be smaller and unfold over longer periods (a one percentage point change in the short-term risk-free interest rate is always a huge event in the markets, while the same change in implied volatility often happens in minutes or seconds on many underlyings). As a result, rho often receives less attention than the other main Greeks – delta, gamma, theta, and vega – and it is less understood by the typical option trader, also because it is harder to make universal rules about the effects of different factors (underlying price, time or volatility) on rho values. These effects depend on option type, underlying type, and the settlement procedures of both.
For example, rho of stock options behaves differently from futures options rho or FX options rho (currency options are in fact affected by two interest rates – domestic and foreign – and have two rho values, as discussed below).

Call Option Rho

The main benefit of holding a call option is the optionality: the right but not the obligation to buy the underlying. If the stock goes up, you make money, but if it goes down, you don't need to exercise the option and you are protected. You have a choice. That said, a call option has another benefit that is sometimes forgotten: it allows you to control the underlying (at least its upside) without paying for it. It delays payment and improves cash flow.

For example, consider a stock trading at $50 per share. You can buy 100 shares in the stock market and pay $5,000 immediately. Alternatively, you can buy a 3-month, $50 strike call option for $2 and pay only $200 (for one contract of 100 shares). In both cases, you control 100 shares of the stock and make money if the stock goes up. However, in the first case you pay $5,000 now, while in the second case you only pay $200 now (the option premium) and $5,000 (the option's strike price) later, if you exercise. The option effectively delays your payment for 3 months (from now to the option's expiration). It is like a 3-month loan of $5,000. Therefore, the option's time value must reflect not only the optionality, but also the value of the loan. Call options are more valuable with higher interest rates and have positive rho.

Put Option Rho

It is the opposite with put options. Let's say you already own 100 shares of the same stock. You are worried the stock will fall. You can sell the stock in the stock market and get $5,000 in cash. Alternatively, you can buy a 3-month, $50 strike put option. If the stock does fall, you can exercise the put later and get $5,000. Without the put, you receive $5,000 now (and you can put it in a bank for 3 months and earn interest); with the put you get it in 3 months.
The higher the interest rate, the more attractive it is to sell the stock and get the money earlier, rather than buy the put and get the money later. Put options are less valuable when interest rates are higher. They have negative rho.

Rho and Time to Expiration

From the above examples it should be obvious that the effect of interest rates on options is greater with longer time to expiration. In general, rho approaches zero as an option gets closer to expiration.

Rho of Futures Options

The reasoning in the above examples assumes that the underlying security settles like a stock – you must pay the full price in cash when buying, and you receive the full price in cash when selling. Not all underlyings settle like that. When you buy (go long) a futures contract, you don't pay anything (you only deposit a margin) and only your profits or losses are marked to market and settled daily. Therefore, the above logic of calls having positive rho and puts having negative rho is valid for stock options or currency options, but not for futures options or currency futures options. It is hard to make any general statements about futures option rho, as there are too many variables, such as the way the options themselves are settled, the marking-to-market process, or the option's strike. In general, futures options tend to be less sensitive to interest rates than stock or currency options, as the cash flow advantage of calls and disadvantage of puts explained above does not apply.

Rho of Currency Options

With foreign currency options, two interest rates are involved. For example, consider an option on euros, traded on a US exchange in dollars. This option's price will be affected by both the US (domestic) interest rate and the Eurozone (foreign) interest rate. Rho measures the effect of the domestic rate, which is similar to stock options.
Generally, a higher domestic interest rate makes foreign currency calls more valuable (positive rho) and puts less valuable (negative rho), because the domestic rate is the cost of financing. The effect of the foreign interest rate is measured by rho2 (sometimes called phi). It is like the effect dividend yield has on stock options. When you hold the underlying itself (a dividend-paying stock / euros), you earn that income; when you hold a call option instead, you don't. As a result, the higher the EUR interest rate, the less attractive call options are as an alternative to holding the euros directly. It is the opposite with puts. An increase in the foreign interest rate makes call options on that currency less valuable and put options more valuable.

Effect of Interest Rates on Underlying

It is important to understand that rho (and rho2) only measures the effect of the interest rate on the option price if the underlying price and all the other factors remain the same. In reality, interest rate changes often move the underlying price as well – particularly for currency options (interest rates are the main driver of exchange rates) and options on fixed income instruments (bond prices are closely related to yields), but also some stock options (e.g. bank stocks may react strongly to interest rate moves). Therefore, interest rate changes can affect option prices via two channels: directly, and indirectly by moving the underlying price. Rho only measures the direct effect, not the indirect one. The effect of underlying price changes on option premium is measured by delta. For some underlyings, such as bonds or currencies, the indirect effect is often much stronger than the direct one.

How to Calculate Rho

Mathematically, rho is the derivative of the option price with respect to the interest rate. If you are interested in the exact formulas, you can find them in Black-Scholes Greeks Formulas and Option Greeks Excel Formulas.
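As an illustration of those formulas, here is a minimal Python sketch of Black-Scholes rho for European options on a non-dividend-paying stock, expressed (as throughout this page) per one percentage point of interest rate. The specific inputs below are illustrative, not taken from the article:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_rho(S, K, T, r, sigma, option="call"):
    """Black-Scholes rho per 1 percentage point change in the risk-free rate."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if option == "call":
        rho = K * T * exp(-r * T) * N(d2)    # positive for calls
    else:
        rho = -K * T * exp(-r * T) * N(-d2)  # negative for puts
    return rho / 100                         # per percentage point, not per 1.00

# A 3-month at-the-money option, similar in spirit to the example above:
call_rho = bs_rho(50, 50, 0.25, 0.03, 0.30, "call")
put_rho = bs_rho(50, 50, 0.25, 0.03, 0.30, "put")
```

Note that the call rho comes out positive and the put rho negative, consistent with the cash-flow reasoning above; their difference equals K·T·e^(−rT)/100, which follows from put-call parity.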
• Rho measures how option premium will change if the risk-free interest rate increases by one percentage point. • Call options on most underlyings have positive rho; put options have negative rho. • Rho is generally greater (in absolute terms) with more time to expiration. • For many underlyings like currencies or bonds, interest rates may also affect underlying price, and thereby option prices. This indirect effect, though often greater than the direct effect, is not measured by rho.
{"url":"https://www.macroption.com/option-rho/","timestamp":"2024-11-03T06:21:04Z","content_type":"text/html","content_length":"23836","record_id":"<urn:uuid:2f03fb31-8e5f-4431-b623-5330a5f4be78>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00557.warc.gz"}
Identifying latent behavioural states in animal movement with M4, a nonparametric Bayesian method

First published: 20 October 2021
Handling Editor: Robert Freckleton

1. Understanding animal movement often relies upon telemetry and biologging devices. These data are frequently used to estimate latent behavioural states to help understand why animals move across the landscape. While there are a variety of methods that make behavioural inferences from biotelemetry data, some features of these methods (e.g. analysis of a single data stream, use of parametric distributions) may limit their generality to reliably discriminate among behavioural states.

2. To address some of the limitations of existing behavioural state estimation models, we introduce a nonparametric Bayesian framework called the mixed-membership method for movement (M4), which is available within the open-source bayesmove R package. This framework can analyse multiple data streams (e.g. step length, turning angle, acceleration) without relying on parametric distributions, which may capture complex behaviours more successfully than current methods. We tested our Bayesian framework using simulated trajectories and compared model performance against two segmentation methods (behavioural change point analysis (BCPA) and segclust2d), one machine learning method (expectation-maximization binary clustering (EMbC)) and one type of state-space model (hidden Markov model (HMM)). We also illustrated this Bayesian framework using movements of juvenile snail kites Rostrhamus sociabilis in Florida, USA.

3. The Bayesian framework estimated breakpoints more accurately than the other segmentation methods for tracks of different lengths. Likewise, the Bayesian framework provided more accurate estimates of behaviour than the other state estimation methods when simulations were generated from less frequently considered distributions (e.g. truncated normal, beta, uniform).
Three behavioural states were estimated from snail kite movements, which were labelled as ‘encamped’, ‘area-restricted search’ and ‘transit’. Changes in these behaviours over time were associated with known dispersal events from the nest site, as well as movements to and from possible breeding locations. 4. Our nonparametric Bayesian framework estimated behavioural states with comparable or superior accuracy compared to the other methods when step lengths and turning angles of simulations were generated from less frequently considered distributions. Since the most appropriate parametric distributions may not be obvious a priori, methods (such as M4) that are agnostic to the underlying distributions can provide powerful alternatives to address questions in movement ecology. Our understanding of animal movement has advanced considerably in recent decades with the emergence of the field of movement ecology (Fraser et al., 2018; Joo, Picardi, et al., 2020), which focuses on understanding where animals go, what they are doing and how they are influenced by their surrounding environment (Nathan et al., 2008). As telemetry and biologging devices continue to increase in their battery life, data resolution and affordability (Hussey et al., 2015; Kays et al., 2015), statistical methods that can efficiently analyse these large datasets will become ever more important (Patterson et al., 2017; Potts et al., 2018). To fully understand animal movement, it is necessary to account for behaviour since space and resource use are directly linked to an animal's internal state (Gurarie et al., 2016; Nathan et al., 2008). Since the direct observation of animal behaviour can be challenging in many situations, recorded tracks from biologging devices are increasingly used to infer potential behaviour by estimating latent states. These latent states can be estimated from a variety of data streams (i.e. 
time series of variables) such as step lengths, turning angles, ambient temperature and acceleration, among others (Edelhoff et al., 2016). State estimation is often performed using segmentation and clustering methods, as well as state-space models (SSMs). Segmentation methods partition tracks into segments by detecting shifts in the data stream(s), whereas clustering methods classify these segments (or the observations directly) into discrete states. Alternatively, SSMs estimate latent states per observation based on the transition probabilities among a given number of states (Edelhoff et al., 2016; Gurarie et al., 2016). While existing state estimation methods provide fast or powerful predictive capacity (Edelhoff et al., 2016; Patterson et al., 2017), they possess a number of limitations that can impact the inference made on behavioural states. For instance, segmentation methods commonly infer behaviour using only a single data stream such as persistence velocity or speed (Edelhoff et al., 2016; but see Patin et al., 2020). This can be problematic when underlying behaviours are complex and not well-represented by a single metric alone. Additionally, many segmentation methods, clustering methods and SSMs typically estimate behavioural states by fitting the data streams to parametric probability distributions (e.g. Edelhoff et al., 2016; Joo, Boone, et al., 2020; Patterson et al., 2017), such as Gaussian, gamma or wrapped Cauchy distributions. When the structure in the data streams is not well-captured by parametric distributions, this can often result in overestimation of the true number of states when information criteria are used due to model misspecification (Gurarie et al., 2016; Pohle et al., 2017). Furthermore, running SSMs and some clustering methods can be computationally costly: model runtime can take minutes to days depending on the type of model, sample size, number of estimated states and computer hardware. 
This is further exacerbated when model selection (e.g. determining the likely number of groups by fitting models with different numbers of groups) and multi-model inference are performed. Given the limitations posed by existing state estimation methods, there is a need to develop a framework that is based on as few parametric assumptions as possible while also being fast and flexible. Here, we introduce a new two-stage modelling framework called the mixed-membership method for movement (M4) that implements nonparametric Bayesian methods to: (a) jointly segment multiple data streams into relatively homogeneous units of behaviours; and (b) subsequently determine the likely number of behavioural states using a mixed-membership method where segments can be comprised of more than one behavioural state. Latent behavioural states are estimated for entire track segments (as opposed to individual observations) since this reflects our understanding that behaviour is inherently autocorrelated, especially when observations are sampled at short time intervals (Pohle et al., 2017; Potts et al., 2018). Additionally, track segments are expected to be characterized by multiple states (Patin et al., 2020; Pohle et al., 2017). This M4 model framework is available within the open-source R package bayesmove available on CRAN (Cullen & Valle, 2021). In this article, we describe the model structure and the Bayesian sampling methods used to estimate from the posterior distribution. We then demonstrate that M4 can successfully recover breakpoints and behavioural states based on simulated trajectories and compare our model's performance against two common segmentation methods (behavioural change point analysis (BCPA), Gurarie et al., 2009; segclust2d, Patin et al., 2020), one machine learning method (expectation-maximization binary clustering (EMbC), Garriga et al., 2016) and one type of SSM, a hidden Markov model (HMM). 
Finally, we illustrate our novel approach on the movements of an endangered raptor species, the Everglade snail kite Rostrhamus sociabilis, and interpret the results within the context of natal and breeding dispersal events.

2.1 Model structure

Most existing segmentation methods (e.g. BCPA, segclust2d, behavioural movement segmentation), some machine learning methods (e.g. EMbC) and most SSMs (e.g. HMMs, multistate random walks) experience one or more common limitations to behavioural state estimation. These limitations include the reliance on parametric distributions, analysis of only a single data stream, as well as reliance on information criteria to determine the most likely number of states.

2.1.1 Discretization of data streams

We address the problem of parametric distributions by providing an approach that relaxes parametric assumptions through the discretization of data streams (Figure 1a–c). Although data streams (e.g. step lengths and turning angles) are not typically discretized into bins, we expect that this may lead to more robust estimates in the face of parametric distribution uncertainty. This is because bins are estimated independently of one another and extreme values lose their influence when added to the first or last bins with the rest of the data. Therefore, the discretization of data streams is expected to increase model flexibility (John & Langley, 1995; Kitagawa, 1987). Selecting the number of bins and the binning method is relatively subjective, and therefore it is important that prior biological reasoning be used to inform these decisions. For example, discretization methods may include the use of equal bin widths or quantiles. However, the number of bins should be sufficient to characterize the shape of the density distribution. These assumptions during the discretization process are not unlike assumptions made for HMMs when selecting probability distributions to fit data streams, but require practitioners to make more decisions up front.
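The discretization step just described can be sketched as follows (a minimal illustration, not the bayesmove implementation, assuming quantile bins for right-skewed step lengths and equal-width bins for turning angles; the bin counts match those used later in the paper):

```python
import numpy as np

def discretize_streams(step_lengths, turning_angles):
    """Discretize two data streams into 1-based bin indices."""
    # Step lengths: five bins with upper limits at the 25th, 50th, 75th,
    # 90th and 100th quantiles (quantiles suit right-skewed distributions).
    sl_limits = np.quantile(step_lengths, [0.25, 0.50, 0.75, 0.90, 1.00])
    sl_bins = np.digitize(step_lengths, sl_limits[:-1], right=True) + 1

    # Turning angles: eight equal-width bins spanning [-pi, pi]
    # (equal widths suit circular distributions).
    ta_limits = np.linspace(-np.pi, np.pi, 9)
    ta_bins = np.digitize(turning_angles, ta_limits[1:-1], right=True) + 1
    return sl_bins, ta_bins

# Simulated right-skewed step lengths and circular turning angles:
rng = np.random.default_rng(1)
sl = rng.gamma(1.0, 2.0, 500)
ta = rng.uniform(-np.pi, np.pi, 500)
sl_bins, ta_bins = discretize_streams(sl, ta)
```

The discretized values (plus track IDs) are exactly the minimal inputs the text says are required to begin analysing the data.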
Based on a sensitivity analysis of binning methods used on a right-skewed data stream (i.e. step lengths), the use of quantiles resulted in greater discrimination of behavioural states than bins of equal widths (Appendix S1). However, data streams with circular distributions (e.g. turning angles) will likely be more interpretable when using bins of equal widths. At a minimum, discretized values for each data stream and the associated track IDs are required to begin analysing the data.

2.1.2 Segmentation

Similar to other segmentation methods, our model aims to divide tracks into segments by estimating a set of breakpoints. Through the use of joint probabilities within a Bayesian framework, we circumvent the limitation of analysing only a single data stream. Moreover, our model estimates the location and number of unknown breakpoints by implementing a Gibbs sampler within a reversible-jump Markov chain Monte Carlo (RJMCMC) algorithm. RJMCMC is a trans-dimensional algorithm that serves as a model-based approach to model selection by providing simultaneous inference on parameter values given a particular model, as well as model space (i.e. the collection of all possible models) (Green, 1995). In particular, we use a birth–death RJMCMC that allows the addition (i.e. birth), removal (i.e. death) or swap of proposed breakpoints, where model parameters are updated from the known posterior distribution using a Gibbs sampler (see Appendix for more details). We adopt this approach to perform unsupervised segmentation on each individual trajectory. In our framework, each potential model is defined by the number and position of its breakpoints, with each breakpoint restricted to an integer position within the track. Given a particular model, the observations at each time point for each data stream are assigned to one of the bins. Overall, the model is seeking breakpoints that define relatively homogeneous track segments.
The use of a categorical distribution to characterize track segments (as opposed to continuous distributions) is what makes this framework nonparametric. Details of the prior and the derivation of the full conditional distributions can be found in the Appendix. Since the posterior distribution spans models with different numbers and positions of breakpoints, we summarize it using the maximum a posteriori (MAP) estimate (i.e. the breakpoints of the model with the greatest log marginal likelihood) (Figure 1d). Although the MAP estimate does not account for uncertainty in breakpoint number and position, it appears to be in good agreement with estimates from the entire posterior distribution as described in Appendix S4. These MAP breakpoints are then used to define segments per individual track, which are subsequently clustered into latent behavioural states by a mixed-membership model.

2.1.3 Mixed-membership clustering

Although most existing state estimation methods assign a single discrete state to observations or track segments (e.g. Garriga et al., 2016; McClintock & Michelot, 2018; Patin et al., 2020; but see Jonsen et al., 2019), animal movement may not be entirely comprised of a single behaviour over a given sampling interval (Patin et al., 2020; Pohle et al., 2017). Latent Dirichlet allocation (LDA), a mixed-membership clustering method, can be used to classify each track segment as a mixture of multiple states (Hudon et al., 2021; Valle et al., 2014). For example, a proportion of observations within a given track segment might belong to state 1 while another proportion might belong to state 2 and so on. LDA is used to characterize track segments in terms of their behavioural state components, where each state corresponds to a distribution of discretized data streams. To do so, the model estimates the probability of observations from each track segment (rows) belonging to each latent state (columns) in one matrix.
Additionally, the model characterizes the latent states (rows) with the probability of observations belonging to each bin per discretized data stream (columns) in a second matrix. The track segments from all individual animals are analysed together since we assume that there is a common set of behaviours exhibited across the population. Although there may be some individual heterogeneity in movement patterns, the pooling of all individuals ensures that behavioural states are directly comparable and improves the inference on individuals with fewer observations (Jonsen, 2016). In this model, we assume that each observation of a track segment, from each data stream and for each individual, arises from a state-specific categorical distribution over the bins. Additionally, each track segment is associated with a vector of probabilities over the set of possible clusters (i.e. states) that sums to 1 and indicates the likelihood of assigning an observation at each time of that track segment to each behavioural state. This formulation assumes that each observation within a particular track segment must belong to a single behavioural state, but that track segments are comprised of multiple states. For our priors, we use a truncated stick-breaking prior on the state-assignment probabilities of each track segment. As a result, fewer and fewer observations will be assigned to higher-order states, enabling the model to identify the most likely number of behavioural states (Valle et al., in press; Valle et al., 2014). This is an improvement on existing state estimation methods in the sense that our model only needs to be run once, whereas several other common methods (e.g. HMMs, segclust2d and other clustering methods) are typically run multiple times with varying numbers of behavioural states to then determine the best model via information criteria (e.g. AIC or BIC). This LDA model is fitted using a Gibbs sampler and a complete description of the full conditional distribution can be found in Appendix S5. Similar to the segmentation model, convergence was assessed by inspecting trace plots of the log-likelihood. The posterior mean was then calculated for all parameters (Figure 1e–g).
This combination of results provides a straightforward approach to selecting the most likely number of behavioural states. A list of the primary functions to analyse data using the M4 framework within the bayesmove R package is included in Appendix S6.

2.2 Simulation study

We assessed the performance of M4 compared to other methods via simulations. We first evaluated the ability of our track segmentation method to detect true breakpoints and compared its results to those obtained by two segmentation methods (i.e. BCPA and segclust2d). We then evaluated the ability of our clustering method to estimate the true number of behavioural states and to properly assign behaviour proportions to track segments. For this component, we compared the results of our model to those obtained by a HMM (McClintock & Michelot, 2018) and two additional clustering methods (i.e. segclust2d and EMbC).

2.2.1 Generating simulated trajectories

We generated multiple three-state trajectories from a correlated random walk at regular time intervals, where five tracks were simulated at each of four durations (1,000, 5,000, 10,000 and 50,000 observations), resulting in a total of 20 tracks. Each track was comprised of 10, 50, 100 or 500 segments that each included 100 observations. Each of these segments included a dominant behavioural state (80% of observations), which was randomly assigned to each segment. The three behavioural states were parameterized to represent (a) little to no movement (‘encamped’), (b) slow and tortuous movement (‘area-restricted search’ or ARS), as well as (c) fast and directed movement (‘transit’). To assess model performance on tracks generated from different types of distributions, we generated two sets of tracks with 20 simulations in each.
In the first set of simulations, step lengths for each behaviour were drawn from a truncated normal distribution and turning angles were drawn from either a beta, uniform or truncated normal distribution (hereafter referred to as ‘uncommon distributions’) (Figure 2a). For the second set of simulations, step lengths for each behavioural state were generated from a gamma distribution and turning angles were drawn from a wrapped Cauchy distribution (hereafter referred to as ‘common distributions’) (Figure 2b). Both sets of simulations were designed to generally resemble each other in their step length and turning angle distributions. Additional comparisons of simulated tracks generated from a HMM can be found in Appendix S11.

2.2.2 Implementation of Bayesian M4 framework

Step lengths and turning angles were the data streams used to make inference on latent behavioural states. Step lengths were separated into five bins using the 25th, 50th, 75th, 90th and 100th quantiles as upper limits. Quantiles were used to discretize highly right-skewed step lengths as suggested by our sensitivity analysis (Appendix S1). Turning angles were discretized into eight bins of equal width. Each simulated track was analysed by the M4 segmentation model using a vague prior on the model hyperparameter (Appendix S7). We then assessed how well our model identified the true breakpoints for each simulation, where a threshold of ±10 observations was used to distinguish an accurate from an inaccurate estimated breakpoint. If no breakpoints were estimated within ±30 observations of a true breakpoint, then the model was considered to have ‘missed’ that breakpoint. Other thresholds were tested and all resulted in the same relative pattern of accuracy (Appendix S1). Estimated track segments were used as input for the LDA model of M4, which was run using 1,000 MCMC iterations, a burn-in of 500 iterations and vague priors on the hyperparameters (Appendix S7).
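The breakpoint-scoring rule just described (accurate within ±10 observations; ‘missed’ if no estimate falls within ±30) can be sketched as follows (a simplified illustration in which each true breakpoint is matched to the nearest estimated breakpoint):

```python
def score_breakpoints(true_bps, est_bps, acc_tol=10, miss_tol=30):
    """Classify each true breakpoint as accurate, inaccurate, or missed."""
    counts = {"accurate": 0, "inaccurate": 0, "missed": 0}
    for bp in true_bps:
        if not est_bps:
            counts["missed"] += 1
            continue
        nearest = min(est_bps, key=lambda e: abs(e - bp))
        dist = abs(nearest - bp)
        if dist <= acc_tol:
            counts["accurate"] += 1     # within +/-10 observations
        elif dist <= miss_tol:
            counts["inaccurate"] += 1   # within +/-30 but not +/-10
        else:
            counts["missed"] += 1       # nothing within +/-30
    return counts

# True breakpoints at 100, 200, 300; estimates at 95, 228, 500:
result = score_breakpoints([100, 200, 300], [95, 228, 500])
```

Here the estimate at 95 is accurate for the breakpoint at 100, the estimate at 228 is inaccurate for the breakpoint at 200, and the breakpoint at 300 is missed.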
The true number of states was estimated by calculating the arithmetic mean of behaviour proportions across all track segments and selecting the set of states that together represented ≥90% of all observations on average. Additionally, state-dependent distributions of step lengths and turning angles were inspected so that we only selected states that were also biologically interpretable. Since the LDA treats track segments as a combination of behavioural states, proportions of each state were estimated per track segment. The accuracy of state estimates was evaluated by two methods: (a) we calculated the percentage of observations where the dominant behaviour of each track segment was accurately classified, and (b) we calculated the root mean square error (RMSE) of the estimated behaviour proportions compared to the true behaviour proportions over all states and track segments.

2.2.3 Method comparison

We compared the performance of M4 on the simulated trajectories against BCPA, EMbC, HMM and segclust2d (see Appendix S8 for details regarding model properties and assumptions). All models were run using a 2.6 GHz i7 CPU with 16 GB RAM.

Segmentation models

The BCPA model performed segmentation based on persistence velocity (PV), which is a combination of velocity (V) and turning angle, implemented in the R package bcpa v1.1 (Gurarie, 2014). Parameters for BCPA were tuned to provide a close approximation of the true number and location of simulated breakpoints, with window size set to 80, sensitivity set to 2 and clusterwidth set to 30. Breakpoint accuracy was evaluated using the same method as for M4. The segclust2d model performed segmentation on step lengths and the absolute value of turning angles using the R package segclust2d v0.2.0 (Patin et al., 2019). This method models each data stream using a Gaussian distribution, so the absolute value of turning angles was used to accommodate this unimodal assumption.
Tuning parameters were chosen within the bounds of the simulated tracks, such that the maximum number of segments was set to 1.5 × s, where s is the true number of segments, the minimum observations per segment was set to 50 and the number of potential clusters (i.e. states) ranged from 2 to 4. Since the model was still analysing the longest simulations (50,000 observations) after 2 days, these tracks were omitted from the reported results for segclust2d. Breakpoint accuracy was assessed in the same manner as for M4.

Clustering models

The EMbC model was fitted to step lengths and the absolute value of turning angles using the R package EMbC v2.0.3 (Garriga et al., 2019). The absolute value of turning angles was used to achieve better discrimination among states given the use of a unimodal distribution like the Gaussian distribution. This model uses binary clustering to partition each of n data streams into a ‘low’ and ‘high’ class, resulting in a total of 2^n possible states. For our analysis, this resulted in four states estimated from a bivariate Gaussian distribution. To make these results comparable to the other models, both states with ‘high’ step lengths (and ‘low’ or ‘high’ turning angles) were merged into a single state to produce three states overall. State classification accuracy was assessed at the segment level so that results were directly comparable with the Bayesian M4 model. This was achieved by using true breakpoints to segment the time series of states estimated by the EMbC model and then calculating the proportion of these behaviours within each track segment. Additionally, the resulting state-dependent distributions of step lengths and turning angles were discretized using the bin limits defined for M4 to compare the accuracy of distribution shapes. Accuracy was measured by RMSE across bins of all states and data streams per simulation.
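The segment-level comparison described above (cutting a per-observation state sequence at the true breakpoints and computing the proportion of each state within each segment) can be sketched as follows (a minimal illustration, not the published analysis code):

```python
from collections import Counter

def segment_state_proportions(states, breakpoints, n_states):
    """Split a per-observation state sequence at the given breakpoints and
    return the proportion of each state (1..n_states) within each segment."""
    edges = [0] + sorted(breakpoints) + [len(states)]
    proportions = []
    for start, end in zip(edges[:-1], edges[1:]):
        seg = states[start:end]
        counts = Counter(seg)
        proportions.append(
            [counts.get(s, 0) / len(seg) for s in range(1, n_states + 1)]
        )
    return proportions

# Ten observations with one true breakpoint after the fifth observation:
props = segment_state_proportions([1, 1, 1, 2, 2, 3, 3, 3, 3, 2], [5], 3)
```

The first segment is 60% state 1 and 40% state 2; the second is 80% state 3 and 20% state 2, so the dominant state per segment (used for the classification accuracy metric) falls out directly from these proportions.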
A discrete-time HMM was also fitted to each of the simulated trajectories using the R package momentuHMM v1.5 (McClintock & Michelot, 2018). Step lengths were modelled using a gamma distribution, and turning angles were assumed to arise from a wrapped Cauchy distribution. The HMMs for each simulation were run using a range of two to four possible behavioural states (McClintock & Michelot, 2018; Michelot et al., 2016). The most likely number of states was selected using a combination of AIC and BIC, where the model with the lowest value was considered to be most likely. However, if the difference in AIC or BIC (ΔAIC/BIC) of the next best model was <10 (Burnham & Anderson, 2002), the more parsimonious model was chosen. Behaviour classification accuracy was assessed in the same manner as for EMbC. The segclust2d model clustered segments previously estimated by this method into K states. The number of likely states was estimated using BIC in the same manner as for HMM. The likely number of states (and associated breakpoints) were used to assign behavioural states to track segments, which were then compared to the other methods using the proportion of each state per estimated segment (which were all either 0 or 1). Additionally, the accuracy of the state-dependent distributions was evaluated in the same manner as for EMbC and HMM.

2.3 Snail kite case study

As part of a larger investigation on the effects of wetland management on wildlife, solar-powered GPS-GSM transmitters (Ecotone Telemetry) were attached to juvenile snail kites (n = 26) prior to fledging at Lakes Tohopekaliga, East Tohopekaliga and Kissimmee in central Florida during 2018 and 2019. Tagging of snail kites was conducted under US Geological Survey BBL Permit #23906. Subsequent movement of each individual resulted in a total of 40,720 observations (Figure 3). Locations were collected once per hour only during daylight at an accuracy of ±30 m.
As a result of the programmed duty cycle and time periods where GPS tags failed to transmit data, track time intervals were irregular. To ensure comparable step lengths and turning angles, we filtered our data to the most common time interval (i.e. 1 hr). We chose to omit all other observations since imputation procedures for long time gaps would increase the number of artificial data and the use of linear interpolation would artificially inflate the number of turning angles at zero radians. Step lengths and turning angles were used to estimate latent behavioural states. As was performed on the simulated tracks, step lengths for the empirical data were discretized into five bins using the 25th, 50th, 75th, 90th and 100th quantiles as upper limits. This resulted in bin limits at 0.00, 0.03, 0.09, 0.32, 1.63 and 72.56 km. Turning angles were discretized into eight bins from −π to π. Step lengths and turning angles for each of the 26 snail kites were analysed by the M4 segmentation model using 80,000 iterations, a burn-in of 40,000 iterations and the hyperparameter settings described in Appendix S7. The proportions of behavioural states were evaluated over time in relation to emigration from natal sites and peak breeding season of snail kites in Florida (1 March–30 June; Reichert et al., 2020) to discern any patterns associated with these events.

3 RESULTS

3.1 Segmentation model comparison

The M4 segmentation model successfully recovered breakpoints from the simulations and outperformed both BCPA and segclust2d. Among the three methods, the segclust2d model took much longer to run (0.46 to 418 min) compared to M4 (0.24 to 11 min) and BCPA models (0.25 to 21 min), particularly for longer tracks (Figures 4a and 5a). While all three models exhibited similar accuracy on the shortest simulations, M4 was much more accurate on all larger simulations.
For these large simulations, the accuracy of the M4 segmentation model was >80% on average when simulations were generated from uncommon distributions and >90% on average when generated from common distributions (Figures 4b, 5b and 6a). Additionally, M4 missed the lowest proportion of true breakpoints (uncommon: 21%; common: 0.3%) compared to BCPA (uncommon: 67%; common: 66%) and segclust2d (uncommon: 26%; common: 30%) across simulations of all analysed track lengths.

3.2 Clustering model comparison

When estimating the true number of states on both sets of simulations, M4 correctly determined the number of true states more frequently than the other methods and exhibited greater computational efficiency over all other clustering methods besides EMbC. The Bayesian LDA model took 2–23 s to run, highlighting the computational efficiency of this particular model. When added to the duration of the segmentation model, the proposed method ran much faster than the HMM (20–36×) and segclust2d (2–178×) at all track lengths despite these models being fitted with only two to four states, whereas our method allows for up to seven states (Figures 4a and 5a). The time to run each EMbC model increased very little with track length, but this method also automatically assumed that four states were present. The LDA model from M4 correctly suggested three states as most likely for 18 of the 20 simulations generated from uncommon distributions and 19 of 20 simulations generated from common distributions (Figure 6b, Appendix S9). By comparison, the HMM suggested (via AIC and BIC) that three states were most likely in 17 and 16 of the 20 analysed simulations generated by uncommon and common distributions, respectively. The segclust2d model suggested that three states were most likely in only six and five of the 15 analysed simulations generated from uncommon and common distributions, respectively, based on BIC.
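The model-selection rule described in the methods (the lowest AIC/BIC wins unless a simpler model lies within 10 units, per Burnham & Anderson, 2002) can be sketched as follows; the criterion values are hypothetical, not from the study:

```python
def select_n_states(candidates):
    """Pick the most likely number of states from (n_states, criterion) pairs:
    the lowest AIC/BIC wins unless a simpler model lies within 10 units
    of it, in which case parsimony prevails."""
    candidates = sorted(candidates)            # ascending number of states
    best_n, best_ic = min(candidates, key=lambda c: c[1])
    for n, ic in candidates:                   # simplest acceptable model first
        if n < best_n and ic - best_ic < 10:
            return n
    return best_n

# Hypothetical BIC values for 2-, 3- and 4-state HMMs
print(select_n_states([(2, 1540.0), (3, 1495.0), (4, 1489.5)]))  # 3
```

Here the 4-state model has the lowest criterion, but the 3-state model is within 10 units, so the simpler model is retained.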
To enable direct comparisons among all four models that estimated behavioural states, we assumed three states were most likely for all 20 simulations when calculating model accuracy. Using this assumption, we find that M4 demonstrated high accuracy in behavioural state estimation for both sets of simulations, often equivalent or superior to the other clustering methods. When analysing simulations generated from uncommon distributions, mean accuracy of M4 to classify the dominant state within each segment was greater than that of the HMM and segclust2d models at all track lengths (Figure 4c). However, mean accuracy of the EMbC model was slightly greater than M4 on this set of simulations at a track length of 5,000 observations. When analysing simulations generated from common distributions, mean accuracy of M4 was slightly below that of the HMM, but greater than the mean accuracy of the EMbC and segclust2d models at all track lengths (Figure 5c). Additionally, accuracy measures displayed little variability in M4 across tracks of different lengths and on each set of simulations, highlighting the increased stability of this framework. Similar to the pattern found for estimates of dominant behavioural states, the accuracy of behavioural state proportions was higher in M4 for all but the HMM on simulations generated from common distributions, as denoted by low RMSE (Figures 4d, 5d and 6c). The accuracy of the estimated step length and turning angle distributions was relatively consistent across each set of simulations. For tracks generated from uncommon distributions, M4 was slightly more accurate than the HMM, but much more accurate than EMbC and segclust2d across all track lengths (Appendix S9). However, HMM estimates were slightly more accurate than the Bayesian model on tracks of all lengths when generated from common distributions (Appendix S9).
When viewed as continuous distributions, it is clear that the HMM, EMbC and segclust2d models had difficulty estimating the true distributions of step lengths and turning angles regardless of track length for the simulations with uncommon distributions (Appendix S10). On the other hand, the HMM was able to perfectly estimate the state-dependent distributions of the simulations generated from common distributions (Appendix S10).

3.3 Snail kite analysis

The segmentation of 26 snail kite trajectories using M4 took 4 min to run and estimated 1 to 64 breakpoints for these individuals. Breakpoints were then used to define 444 track segments from all individuals (Figure 7a). These segments were clustered into states using M4, which took approximately 27 s to run. It appeared that there were likely three behavioural states, which comprised 91.6% of all state assignments on average (Figure 7b). To ensure that these three states were biologically interpretable, distributions of step lengths and turning angles were also evaluated (Figure 7c). The distributions showed: (a) a slow and tortuous behaviour; (b) a tortuous behaviour with intermediate speed; and (c) a fast and directed behaviour. For this reason, these behaviours were labelled ‘encamped’, ‘ARS’ and ‘transit’, respectively. Some individuals were only tracked for a short period of time and did not leave the natal area. However, 17 birds did emigrate from their natal site. Dispersal events were typically denoted by a brief period of ARS or transit behaviour (Figure 8a,b; Appendix S9). The three longest tracks, which belonged to snail kites tracked for more than a year (SNIK 12, SNIK 14 and SNIK 15), displayed relatively synchronous behaviour before, during and after their first breeding season. Two brief periods of high activity behaviour that occurred during and immediately following peak breeding season in 2019 may potentially represent pre- and post-breeding dispersal events (Figure 8a,c,d).
4 DISCUSSION

We demonstrated that our Bayesian M4 framework (available within the bayesmove R package) can accurately identify changes in behavioural states, reliably estimate the most likely number of behavioural states and properly characterize the state-dependent distributions of data streams. This two-stage model treats track segments as the unit of interest (as opposed to observations) and relies on the discretization of data streams to avoid the need to specify parametric probability distributions. Importantly, the proposed method is computationally efficient, a key characteristic given the ever-increasing storage capacities of modern sensors and their ability to measure a growing number of intrinsic and environmental variables (Whitford & Klimley, 2019; Williams et al., 2020). A comparison of model performance in addition to the analysis of an empirical dataset highlights the utility of the M4 framework.

4.1 Method comparison

Although BCPA displayed comparable speed to M4 during track segmentation, the accuracy of the estimated breakpoints was much higher in the latter. Additionally, M4 was much faster and exhibited greater accuracy of breakpoint estimates than the segclust2d method, which was not able to successfully analyse the simulated tracks of 50,000 observations. Since the accuracy of the segclust2d method was not much greater than the BCPA for either set of simulations (Figures 4b and 5b), it appears that BCPA’s reliance on a single derived variable (i.e. persistence velocity) instead of separate data streams was not as limiting as was initially expected. While HMMs are powerful methods that can incorporate individual-level random effects and account for cyclical patterns (McClintock & Michelot, 2018; Patterson et al., 2017), they can also be restrictive in some of their assumptions. Standard forms of HMMs require the use of parametric distributions, which may not fit the data streams well (Appendix S9; Langrock et al., 2018).
While HMMs displayed greater accuracy than M4 when the selected parametric distributions matched the true underlying distributions (Figure 5c), we find that the proposed methodology performed better than HMMs when the selected parametric distributions did not match the true underlying distribution. By comparison, the segclust2d and EMbC methods are straightforward to apply when estimating latent behavioural states from a set of tracks, but appear limited by their assumption of Gaussian distributions when partitioning observations into segments or into states, respectively. Since the most common data streams (i.e. step lengths and turning angles) are not typically modelled with a Gaussian distribution (McClintock et al., 2020), this likely contributes to the lower accuracy of these methods.

The determination of the most likely number of states is another issue when fitting clustering models and HMMs since this is typically unknown a priori and is directly impacted by how well the selected parametric distributions characterize the states (Pohle et al., 2017). Unfortunately, HMMs often require multiple models to be fit and compared using information theoretic approaches, which tend to favour a greater number of states than are truly present and come at a high computational cost (Li & Bolker, 2017; Pohle et al., 2017). Importantly, while M4 allows for up to seven behavioural states, we only attempted to fit HMMs with two to four behavioural states. Even in this limited context, fitting HMMs was already much slower than fitting M4. Had we attempted to fit HMMs with two to seven behavioural states, the amount of time required for this would be substantially larger than what we report in Figures 4 and 5. A similar issue is present in segclust2d, where models are fit with every possible number of track segments and states before comparing via BIC. A different problem is posed by the EMbC model, which imposes four states by default when analysing step lengths and turning angles.
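By contrast, M4 sidesteps parametric assumptions by discretizing each data stream into bins. As an illustrative sketch (in Python rather than the R used by bayesmove, with simulated rather than real step lengths), quantile-based binning like that applied to the step lengths in this study might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.gamma(shape=0.5, scale=2.0, size=1000)  # right-skewed, step-length-like

# Upper bin limits at the 25th, 50th, 75th, 90th and 100th quantiles
limits = np.quantile(steps, [0.25, 0.50, 0.75, 0.90, 1.00])

# Assign each observation a bin index from 1 to 5
bins = np.digitize(steps, limits[:-1], right=True) + 1

print(limits)                 # the five upper bin limits
print(np.bincount(bins)[1:])  # [250 250 250 150 100] observations per bin
```

No distributional form is assumed: however skewed the raw values, the bin counts follow the chosen quantiles, which is what allows the segmentation model to work directly on bin frequencies.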
These issues are directly addressed by our framework since we use a mixed-membership model (LDA) with a penalizing prior to cluster track segments, enabling the estimation of the most likely number of states and the state-dependent distributions in a single step. While existing methods can provide useful behavioural inference depending upon the ecological question and dataset, the M4 framework provides a powerful alternative when behaviours are complex, multiple data streams are available and these data are not well-characterized by parametric distributions and/or when datasets are large.

4.2 Empirical applications

Three behavioural states were clearly estimated for the snail kite dataset, which was supported by biologically relevant distributions of step lengths and turning angles. The ‘encamped’ state likely represents fine-scale behaviours that include resting, feeding and time spent at the nest (as a fledgling or reproductive adult). On the other hand, the ‘ARS’ state likely includes exploration for nearby suitable habitat as well as foraging bouts (Martin et al., 2006; Pias et al., 2016). Finally, the ‘transit’ state includes fast, directed movements associated with dispersal of snail kites in addition to departure from wetlands experiencing low water levels (Robertson et al., 2017). The time series of snail kite behaviour proportions showed variability in the timing of emigration from natal sites among individuals, but changes in behaviour were generally synchronous in the three birds that reached maturity. This variability in the timing of emigration from natal sites could be due to a variety of factors, such as hatching date, body condition and local environmental conditions (Cattau et al., 2016; Fletcher et al., 2015; Rodgers & Schwikert, 2003).
The shifts in behaviour proportions appeared to show multiple phases of high and low activity, some of which seem to match the phenology of natal dispersal (summer), pre-breeding dispersal (early spring) and post-breeding dispersal (late summer) (Bennetts & Kitchens, 2000). While the continued monitoring of these tagged birds should provide greater evidence for the characterization of activity budgets over ontogeny, future research could also explore the primary drivers of snail kite movement and habitat use within each behavioural state through the inclusion of environmental covariates.

4.3 Caveats and extensions

In addition to the M4 method proposed by this study, other nonparametric state estimation methods have been previously developed (Langrock et al., 2018; Nams, 2014; Sur et al., 2014). In one such example, the behavioural movement segmentation (BMS) model proposed by Nams (2014) uses a combination of direct search optimization, iterative sampling and k-means clustering to estimate latent states from track segments. BMS is similar to our proposed M4 framework in that both methods are nonparametric, partition multiple data streams into segments and cluster segments into latent states (Nams, 2014). However, M4 differs both technically and conceptually from BMS in that M4 proposes breakpoints using RJMCMC, the number of likely states are estimated within a single model run (instead of using multi-model selection) and track segments are expected to be comprised of multiple states rather than just one. We believe that practitioners should carefully evaluate the properties and assumptions of different methods to determine the best method to properly analyse their data and address their objectives. Although M4 effectively classified behavioural states from both simulated and empirical tracks, there are some limitations to this approach.
The selection of the number and width of bins when discretizing data streams is a subjective choice that impacts the results from the segmentation model and ultimately the estimation of behavioural states. Therefore, practitioners may need to test different binning methods if the segmentation model does not produce reliable breakpoints that match up with plots of the data stream(s). Additionally, our model implicitly assumes that location error is negligible or requires that it be accounted for via another method. Although our model can analyse data streams from regular or irregular time intervals, this will also depend on the inherent properties of the data streams themselves. Since step lengths and turning angles are calculated from multiple successive observations, these values will not be comparable once the data are not close to a regular time interval. However, variables such as net squared displacement (the squared distance from the starting location to all other relocations) can be analysed over irregular time intervals. M4 can be extended to analyse other types of data streams and can include prior knowledge on the timing of behavioural shifts. Although only step lengths and turning angles were analysed for the simulated and empirical tracks, additional ancillary data coming from the sensor (e.g. elevation, salinity, temperature or accelerometer data) could be used to make behavioural inference. These data streams could come from all types of distributions (i.e. continuous, discrete, bounded between 0 and 1). It is also relatively straightforward to deal with zero-inflated data by including all zeroes in a single bin. Additionally, our segmentation model can be implemented in a semi-supervised fashion, by which practitioners pre-specify breakpoints for the time series based on a priori knowledge and these breakpoints will be considered by the RJMCMC algorithm. 
This may be particularly useful if daily activity patterns are expected or if only one of several possible states can be clearly identified in advance.

This work was supported in part by the University of Florida Biodiversity Institute (UFBI), the US Department of Agriculture's National Institute of Food and Agriculture (USDA-NIFA) McIntire–Stennis project 1005163, the US National Science Foundation (NSF) awards 1458034 and 2040819, the Florida Fish and Wildlife Conservation Commission (FWC) award 13416 and by the US Army Corps of Engineers (USACE). The tagging of snail kites was approved and conducted under UF IACUC no. 202008334. J.A.C. was supported by a postdoctoral fellowship from the University of Florida Informatics Institute (UFII) Fellowship Program. The authors thank Tyler Beck for his assistance with fieldwork and Brenda Betancourt for her feedback during model and package development. Additionally, they are grateful to Rocío Joo and Mathieu Basille for their valuable comments on an earlier draft of this paper.

The authors declare that there is no conflict of interest.

J.A.C. and D.V. conceived the ideas and designed the methodology; C.L.P. and R.J.F. collected the empirical data; J.A.C., C.L.P. and D.V. analysed the data; J.A.C. and D.V. led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication.

All code mentioned here for the Bayesian framework is available within the bayesmove package for R hosted on CRAN at https://CRAN.R-project.org/package=bayesmove. The development version of the package is available on GitHub at https://github.com/joshcullen/bayesmove and a full set of vignettes can be found at https://joshcullen.github.io/bayesmove. The code to generate the simulations and perform method comparison is available on Zenodo at https://doi.org/10.5281/zenodo.4245254 (Cullen, 2021).
The Everglade snail kite telemetry data have not been made available since this is a federally listed endangered species and the location data are sensitive.
Calculate Electric Power With Ease | Step By Step - Electric Power Calculator
Last updated: Monday, May 01, 2023

Electric power is the rate at which electrical energy is transferred or consumed by an electrical circuit. It is usually measured in watts (W) and can be calculated using the formula P=VI, where P is the power in watts, V is the voltage in volts, and I is the current in amperes. Electric power has many practical applications, such as in the generation and distribution of electricity to power homes and businesses, as well as in the design and operation of various electrical devices and machines. Understanding how to calculate and manage electric power is essential for ensuring the safe and efficient operation of electrical systems.

The formula for determining electric power using energy and time is defined as:

\(P = \frac{E}{t}\)

where \(E\) is the electric energy, \(t\) is the duration and \(P\) is the electric power. The SI unit of electric power is the watt \((W)\).

Find t
Use this calculator to determine the duration over which the power is being measured when the electric energy and power are given.
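The "find t" case simply rearranges the formula to t = E/P. A minimal sketch with illustrative values (3600 J delivered at 60 W):

```python
def duration(energy_joules, power_watts):
    """Duration in seconds over which `power_watts` delivers `energy_joules`,
    from t = E / P."""
    return energy_joules / power_watts

print(duration(3600, 60))  # 60.0 seconds
```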
Interpreting Graphs and Data Math Activities - Fishyrobb

Representing and interpreting data using graphs and charts is the focus of this math resource for upper elementary grades. It helps students explore and understand measurement data. For each activity, students will use a graph to answer questions about the data. They will also create another type of graph, chart, or table based on the information provided. This added step really helps students understand the various types of graphs and ways in which data can be displayed. These activities are perfect for both guided practice in your math groups or independent practice during math centers. The activities in this resource involve the following types of graphs:
• pictographs
• bar graphs
• double bar graphs
• pie charts
• line graphs
All of the graphs have a seasonal theme such as weekly snowfall amounts, number of beach visitors, or spring bird sightings.

What graphs and data math activities are included?
• 8 colorful graphs (2 for each season) in both full- and half-page sizes
• 8 activity pages/worksheets for interpreting the data and making new graphs based on data
• Answer keys

3.MD.B.3 Draw a scaled picture graph and a scaled bar graph to represent a data set with several categories. Solve one- and two-step “how many more” and “how many less” problems using information presented in scaled bar graphs. For example, draw a bar graph in which each square in the bar graph might represent 5 pets.

4.MD.B.4 Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.
RI.4.7 Interpret information presented visually, orally, or quantitatively (e.g., in charts, graphs, diagrams, time lines, animations, or interactive elements on Web pages) and explain how the information contributes to an understanding of the text in which it appears.

1 review for Interpreting Graphs and Data Math Activities
1. Great resource the kids enjoyed it, great selection!
Advanced Haskell Data Structures: Red-Black Trees

So, we've built up some pretty nifty binary trees - we can use the binary tree both as the basis of an implementation of a set, or as an implementation of a dictionary. But our implementation has had one major problem: it's got absolutely no way to maintain balance. What that means is that depending on the order in which things are inserted to the tree, we might have excellent performance, or we might be no better than a linear list. For example, look at these trees. As you can see, a tree with the same values can wind up quite different. In a good insert order, you can wind up with a nicely balanced tree: the minimum distance from root to leaf is 3; the maximum is 4. On the other hand, take the same values, and insert them in a different order and you get a rotten tree; the minimum distance from root to leaf is 1, and the maximum is 7. So depending on luck, you can get a tree that gives you good performance, or one that ends up giving you no better than a plain old list. Playing with a bit of randomization can often give you reasonably good performance on average - but if you're using a tree, it's probably because O(n) complexity is just too high. You want the O(lg n) complexity that you'll get from a binary tree - and not just sometimes. To fix that, you need to change the structure a bit, so that as you insert things, the tree stays balanced. There are several different approaches to how you can do this. The one that we're going to look at is based on labeling nodes in ways that allow you to very easily detect when a serious imbalance is developing, and then re-arrange the tree to re-balance it. There are two major versions of this, called the AVL tree, and the red-black tree. We're going to look at the red-black.
Building a red-black tree is as much a lesson in data structures as it is in Haskell, but along with learning about the structure, we'll see a lot about how to write code in Haskell, and particularly about how to use pattern-matching for complex structures. We'll start with a basic definition. A red-black tree is a normal binary search tree, except that each node is assigned a color, which is either red or black, and there are several invariant properties that must hold about the coloring of the tree:

1. The root of the tree is always black.
2. All branches of a tree end in a null which is black.
3. All children of red nodes are black.
4. For all nodes in the tree, all downward paths from the node to a leaf contain the same number of black nodes.

If these invariants are maintained, they guarantee that the tree is almost balanced: for the entire tree, and every subtree of it, the longest path from the root to a leaf is no more than twice the shortest path from the root to a leaf. We'll start by writing the Haskell type declaration for a red/black tree. We'll just do it as a tree of ordered values to keep things simple.

>data Color = Red | Black deriving (Eq, Show)

>data (Ord a) => RedBlackTree a = RBTNode a Color (RedBlackTree a) (RedBlackTree a)
>                               | RBTEmpty deriving (Eq, Show)

Also, for convenience, we'll write a couple of accessor functions that we'll use later on. Something interesting to note about these accessors is that they use non-exhaustive patterns: there are values of type RedBlackTree a for which these functions are undefined. If you call any of these accessors on a tree whose value is RBTEmpty, you'll get a runtime error. It is, at the very least, considered bad style to write non-exhaustive functions. It's actually a way of cheating the type system. You're claiming that you're writing a function from type T to type U, but in fact, there are values of T for which the function won't work: the real type of the function is T' -> U, where T' is a subset of T.
But you can't say that in Haskell - so you're cheating. To be more concrete, you're writing functions like rbtLeftChild which claims that for any red-black tree passed to it, it will return a valid red-black tree. But in fact, that's only true for the subset of red-black trees that were built with the RBTNode constructor; for other values, the function will fail. The best solution to make it exhaustive would be to use the Maybe type to allow you to return a valid value for all trees passed as inputs. But that would make the code much more complex, unless we used monads - and we're not ready for monads yet.

>rbtLeftChild :: (Ord a) => RedBlackTree a -> RedBlackTree a
>rbtLeftChild (RBTNode _ _ l _) = l

>rbtRightChild :: (Ord a) => RedBlackTree a -> RedBlackTree a
>rbtRightChild (RBTNode _ _ _ r) = r

>rbtValue :: (Ord a) => RedBlackTree a -> a
>rbtValue (RBTNode v _ _ _) = v

>rbtColor :: (Ord a) => RedBlackTree a -> Color
>rbtColor (RBTNode _ c _ _) = c
>rbtColor RBTEmpty = Black

Inserting data into the tree is where things get interesting. It starts off the same as how you insert into a typical BST: search for the correct position, and then insert the value as a new leaf node. But in a red-black tree, the new node needs a color. New nodes are always red - so you're inserting a red node. Now you need to check to make sure that you're not violating any of the tree invariants. If you are, then you need to fix it. To keep things reasonably clean and separate, we'll use the tail-calling version of tree insert, and then tail-call a rebalance function when the basic insert is complete. Rebalance will fix the balance of the tree, and do the tree re-assembly as it climbs up the tree.
>rbtInsert :: (Ord a) => RedBlackTree a -> a -> RedBlackTree a

>rbtRebalance :: (Ord a) => RedBlackTree a -> [RedBlackTree a] -> RedBlackTree a
>-- rbtRebalance focus ancestors

>rbtInsert node v =
>    rbtInsertTailCall node v []

>rbtInsertTailCall node@(RBTNode v color left right) newval path
>    | v > newval = rbtInsertTailCall left newval (node:path)
>    | otherwise = rbtInsertTailCall right newval (node:path)
>rbtInsertTailCall RBTEmpty v path =
>    rbtRebalance (RBTNode v Red RBTEmpty RBTEmpty) path

All over the place as we rebalance the tree, we'll have places where we want to "rebuild" nodes to patch in the insertion change; as usual, we separate that into its own function.

>-- Reconstruct takes a child node and a parent node, and creates a replacement
>-- for the parent node with the child in the appropriate position. It allows
>-- the color of the new node to be specified.
>reconstructNode node@(RBTNode v c l r) parent@(RBTNode pv pc pl pr) color =
>    if (pv > v)
>        then (RBTNode pv color node pr)
>        else (RBTNode pv color pl node)

Now, we need to think about what we're going to do to keep the tree balanced as we walk back up the insertion path fixing the tree. There are two things we can do to make the tree respect the invariants: we can re-color nodes, or we can pivot subtrees. Pivoting a tree is an interesting operation - it's a process of swapping a node and one of its children to rotate a section of the tree. Suppose we have a binary search tree like the one in the diagram to the side. It's poorly balanced; it's got only one node to its left, but 7 nodes to its right. To correct this by pivoting, what we'll do is take node 6 - currently a child of the root, and rotate the tree counterclockwise around it, so that 6 becomes the root, the old root (2) becomes the left child of 6, and the old left child of 6 (node 4) becomes the right child of the old root. So after the pivot, our tree looks like this.
This operation was a left pivot; a right pivot does the same kind of thing, but rotating the tree clockwise instead of counterclockwise. So let's go ahead and write the pivot operations. We'll write two pivot functions: one for each direction. We'll pass the pivot operation a subtree whose root and child in the appropriate direction are to be rotated. In addition, we'll also add a parameter for managing the color of the new root node. In some cases, we'll want to swap the colors of the nodes being moved; in other cases, we won't. So we'll put a boolean parameter in to specify whether or not to swap the colors.

>-- Pivot the tree left at its root; the second parameter indicates whether
>-- or not to swap the colors of the nodes that are being moved.
>rbtPivotLeft :: (Ord a) => RedBlackTree a -> Bool -> RedBlackTree a
>rbtPivotLeft (RBTNode rootval rootcolor sib (RBTNode focval foccolor focleft focright)) swap =
>   (RBTNode focval newrootcolor oldroot focright) where
>     newrootcolor = if swap then rootcolor else foccolor
>     oldrootcolor = if swap then foccolor else rootcolor
>     oldroot = RBTNode rootval oldrootcolor sib focleft

>rbtPivotRight :: (Ord a) => RedBlackTree a -> Bool -> RedBlackTree a
>rbtPivotRight (RBTNode rootval rootcolor (RBTNode focval foccolor focleft focright) sib) swap =
>   (RBTNode focval newrootcolor focleft oldroot) where
>     newrootcolor = if swap then rootcolor else foccolor
>     oldrootcolor = if swap then foccolor else rootcolor
>     oldroot = RBTNode rootval oldrootcolor focright sib

So, let's try taking a look at how the pivots work. First, we need to construct some trees to rebalance. We'll just do it manually, since the insert code isn't properly finished yet.
>twentyseven = RBTNode 27 Black RBTEmpty RBTEmpty
>twentytwo = RBTNode 22 Black RBTEmpty RBTEmpty
>twentyfive = RBTNode 25 Black twentytwo twentyseven
>sixteen = RBTNode 16 Black RBTEmpty RBTEmpty
>twenty = RBTNode 20 Black sixteen twentyfive
>twelve = RBTNode 12 Black RBTEmpty RBTEmpty
>fifteen = RBTNode 15 Black twelve twenty
>two = RBTNode 2 Black RBTEmpty RBTEmpty
>seven = RBTNode 7 Black RBTEmpty RBTEmpty
>five = RBTNode 5 Black two seven
>ten = RBTNode 10 Black five fifteen

That produces an unbalanced binary tree that looks like this:

RBTNode 10 Black
  (RBTNode 5 Black                              -- 10 left
    (RBTNode 2 Black RBTEmpty RBTEmpty)         -- 5 left
    (RBTNode 7 Black RBTEmpty RBTEmpty))        -- 5 right
  (RBTNode 15 Black                             -- 10 right
    (RBTNode 12 Black RBTEmpty RBTEmpty)        -- 15 left
    (RBTNode 20 Black                           -- 15 right
      (RBTNode 16 Black RBTEmpty RBTEmpty)      -- 20 left
      (RBTNode 25 Black                         -- 20 right
        (RBTNode 22 Black RBTEmpty RBTEmpty)    -- 25 left
        (RBTNode 27 Black RBTEmpty RBTEmpty)))) -- 25 right

Let's do a quick test, and try doing a left pivot on the root.

*Main> rbtPivotLeft ten False
RBTNode 15 Black (RBTNode 10 Black (RBTNode 5 Black (RBTNode 2 Black RBTEmpty RBTEmpty) (RBTNode 7 Black RBTEmpty RBTEmpty)) (RBTNode 12 Black RBTEmpty RBTEmpty)) (RBTNode 20 Black (RBTNode 16 Black RBTEmpty RBTEmpty) (RBTNode 25 Black (RBTNode 22 Black RBTEmpty RBTEmpty) (RBTNode 27 Black RBTEmpty RBTEmpty)))

Cleaned up, that looks like this:

RBTNode 15 Black
  (RBTNode 10 Black
    (RBTNode 5 Black
      (RBTNode 2 Black RBTEmpty RBTEmpty)
      (RBTNode 7 Black RBTEmpty RBTEmpty))
    (RBTNode 12 Black RBTEmpty RBTEmpty))
  (RBTNode 20 Black
    (RBTNode 16 Black RBTEmpty RBTEmpty)
    (RBTNode 25 Black
      (RBTNode 22 Black RBTEmpty RBTEmpty)
      (RBTNode 27 Black RBTEmpty RBTEmpty)))

Much better - that's much closer to a balanced tree! So now that we know how to do the pivot, and we've seen that it works correctly, we can look at building the rebalance code.
With pivots out of the way, we can start looking at how to decide what operations to do to rebalance the tree. When we're doing an insert, we end up inserting a red node at the bottom of the tree. It's got two children, both null, which are considered black. If the parent of our new node is black, then everything is fine; we haven't altered the number of black nodes on any path from a node to a leaf. So we're done. But if the parent is red, then we've got a red child of a red node, so we need to do some fixing.

Fixing an imbalance in a red-black tree can (and in fact often will) trigger a cascade of changes. But part of what makes the structure so elegant is that we only need to look at the local structure immediately around the new insert; and then when we've corrected that, there's only one place where the next problem could be. In every case where we're rebalancing, we can look at a specific problem, fix it, and then immediately move to where the next potential problem is. To code this, we'll look at it in terms of a focal node, which is the node causing the immediate problem we're fixing; and we'll fix the problem by looking at the local context of the focus. The potential cases we can encounter are:

1. The focal node is the root of the tree. In that case, we make it black. That adds one black node to every path in the tree, which leaves us with a valid tree, so we're done.
2. The focal node is red, but has a black parent. Again, that's fine; nothing needs to change.
3. The focal node is red; its parent is also red. Then we need to look at its uncle; that is, the node that is the sibling of its parent. If the new node, the parent, and the uncle are all red, then we change the color of the parent and uncle to black, and the grandparent to red. After this, the grandparent becomes the focal node, and we continue to do our tree-fixing with the new focus.
4. Here's where it gets a bit messy.
If the focal node and its parent are both red, but the uncle is black, then we're going to need to pivot. Getting the pivot right is tricky. There are four cases:

1. The focal node is the right child of its parent, and the parent is the left child of the grandparent. Then we do a left pivot of the focal node and its parent, and the former parent becomes the new focal node.
2. The focal node is the left child of its parent, and the parent is the right child of the grandparent. Then we do a right pivot of the focal node and its parent, and the former parent becomes the new focus.
3. The focal node is the left child of its parent, and the parent is the left child of the grandparent. Then we do a right pivot of the parent and the grandparent, and swap the colors of the parent and grandparent. The parent becomes the focus.
4. The focal node is the right child of its parent, and the parent is the right child of the grandparent. Then we do a left pivot of the parent and the grandparent, and swap the colors of the parent and grandparent. The parent becomes the focus.

Ok, there's the algorithm for rebalancing. How can we code it in Haskell? We've got a list of the nodes from the insertion path, in leaf-to-root order. When we look at the rebalance, we can see that there are a bunch of different cases which we can separate via pattern matching:

1. The focus is the root of the tree. We can select this case by using an empty list as the pattern for the ancestors parameter. Once we've gotten to the root, the tree is balanced, and the only corrective thing we may need to do is make the root black. So:

>-- Root is focus; no matter what color it is, just make it black
>rbtRebalance (RBTNode v _ left right) [] = RBTNode v Black left right

>rbtRebalance node@(RBTNode v _ left right) (parent@(RBTNode pv pc pl pr):[])
>   | pv > v = RBTNode pv pc node pr
>   | otherwise = RBTNode pv pc pl node

2. Also very simple is the case where the focus is black.
In that case, we don't need to do anything except patch in the insert, and continue up the tree. Again, we can select that case just by pattern matching.

>-- Black node - just patch in the change, and climb.
>rbtRebalance focus@(RBTNode fv Black left right) (parent@(RBTNode pv pc pl pr):ancestors)
>   | pv > fv = rbtRebalance (RBTNode pv pc focus pr) ancestors
>   | otherwise = rbtRebalance (RBTNode pv pc pl focus) ancestors

3. Next, we've got the case of a red node with a black parent. We can identify it by using "RBTNode v Red left right" as a pattern for the focus, and "RBTNode _ Black _ _" as a pattern for the parent. A red node with a black parent is OK, as long as the subtree under the red node is balanced; and since we're balancing from the bottom up, we know that everything beneath this node is balanced. So:

>rbtRebalance focus@(RBTNode fv Red left right) (parent@(RBTNode pv Black pl pr):ancestors) =
>   rbtRebalance (reconstructNode focus parent Black) ancestors

4. Now we're getting to the interesting cases: the cases where both the node and its parent are red. We can separate two situations here: cases where we'll fix things using a pivot, and cases where we'll fix them using a recoloring. The way to distinguish them is by looking at the uncle of the focus node; that is, the sibling of the node's parent. The red-red case is complicated enough that instead of writing out huge pattern expressions, we'll simplify it by separating the function into several layers of calls, each of which does a phase of the pattern match. We want to separate out the cases where we've got a red node with a red parent and a red uncle, and the cases where we've got a red node with a red parent and a black uncle. If the focus, its parent, and its uncle are all red, then we're in a recoloring case; if the focus and its parent are red, and the uncle is black, then we're in a pivot case.
>rbtRebalance focus@(RBTNode v Red left right) (parent@(RBTNode _ Red _ _):ancestors) =
>   rebalanceRedRedNode focus parent ancestors

To be able to recognize the sub-cases when we have a red node with a red parent, we need to be able to look at the path from the grandparent to the focus, and at the color of the uncle. So we'll write some helper functions to get those.

>uncleColor node parent grandparent =
>   if (parent == rbtLeftChild grandparent)
>   then rbtColor (rbtRightChild grandparent)
>   else rbtColor (rbtLeftChild grandparent)

>data TwoStepPath = LeftLeft | LeftRight | RightLeft | RightRight

>pathFromGrandparent :: (Ord a) => RedBlackTree a -> RedBlackTree a -> RedBlackTree a -> TwoStepPath
>pathFromGrandparent node@(RBTNode v _ l r) parent@(RBTNode pv _ pl pr) grand@(RBTNode gv _ gl gr)
>   | pv < gv && v < pv = LeftLeft
>   | pv >= gv && v < pv = RightLeft
>   | pv < gv && v >= pv = LeftRight
>   | pv >= gv && v >= pv = RightRight

To actually handle the red node/red parent case, we first separate out the case where the red parent is the root of the tree - there are no more ancestors on the insertion path. In that case, we can just climb to the root, and do the correction from there.

>-- Node is red, parent is red, but parent is root: just go to the parent
>-- (the root), and fix things from there.
>rebalanceRedRedNode focus@(RBTNode fv fc fl fr) parent@(RBTNode pv pc pl pr) [] =
>   rbtRebalance (reconstructNode focus parent Red) []

Otherwise, we need to check whether the uncle is red or black. If it's red, we do a recoloring correction; if it's black, we figure out what kind of pivot to do. We'll use a bunch of helper functions to make it easy.
>rebalanceRedRedNode focus parent (grand@(RBTNode gv gc gl gr):ancestors) =
>   if (uncleColor focus parent grand) == Red
>   then recolorAndContinue focus parent grand ancestors
>   else case (pathFromGrandparent focus parent grand) of
>     LeftLeft -> rbtRebalance (pivotGrandparentRight focus parent grand) ancestors
>     LeftRight -> rbtRebalance (pivotParentLeft focus parent) (grand:ancestors)
>     RightLeft -> rbtRebalance (pivotParentRight focus parent) (grand:ancestors)
>     RightRight -> rbtRebalance (pivotGrandparentLeft focus parent grand) ancestors

The code above is really just using patterns for case selection. The actual work is done in the helper functions that get called, and they're all simple. First, we have some custom pivot functions - one for each direction of pivoting around a parent (the cases where the node is left of the parent and the parent is right of the grandparent, or vice versa), and one for each direction of pivoting around a grandparent (both node and parent are left children, or both are right children).
>pivotGrandparentLeft node parent@(RBTNode pv pc pl pr) grand@(RBTNode gv gc gl gr) =
>   rbtPivotLeft (RBTNode gv gc gl (RBTNode pv pc pl node)) True

>pivotGrandparentRight node parent@(RBTNode pv pc pl pr) grand@(RBTNode gv gc gl gr) =
>   rbtPivotRight (RBTNode gv gc (RBTNode pv pc node pr) gr) True

>pivotParentLeft node parent@(RBTNode pv pc pl pr) =
>   rbtPivotLeft (RBTNode pv pc pl node) False

>pivotParentRight node parent@(RBTNode pv pc pl pr) =
>   rbtPivotRight (RBTNode pv pc node pr) False

And a function to do the recoloring for when the uncle is red:

>recolorAndContinue focus@(RBTNode v c l r) parent@(RBTNode pv pc pl pr) grand@(RBTNode gv gc gl gr) ancestors =
>   let path = pathFromGrandparent focus parent grand
>       uncle = (case path of
>                 LeftLeft -> gr
>                 LeftRight -> gr
>                 RightLeft -> gl
>                 RightRight -> gl)
>       newUncle = if (uncle == RBTEmpty)
>                  then RBTEmpty
>                  else (RBTNode (rbtValue uncle) Black (rbtLeftChild uncle) (rbtRightChild uncle))
>       newparent = reconstructNode focus parent Black
>       newGrandParent = (case path of
>                          LeftLeft -> (RBTNode gv Red newparent newUncle)
>                          LeftRight -> (RBTNode gv Red newparent newUncle)
>                          RightLeft -> (RBTNode gv Red newUncle newparent)
>                          RightRight -> (RBTNode gv Red newUncle newparent))
>   in rbtRebalance newGrandParent ancestors

And that, finally, is it. For the binary search tree without balancing code, the worst case is inserting a list of values in order. So let's try that, to see how well this works.
*Main> foldl (\ x y -> rbtInsert x y) RBTEmpty [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
RBTNode 4 Black (RBTNode 2 Black (RBTNode 1 Black RBTEmpty RBTEmpty) (RBTNode 3 Black RBTEmpty RBTEmpty)) (RBTNode 8 Red (RBTNode 6 Black (RBTNode 5 Black RBTEmpty RBTEmpty) (RBTNode 7 Black RBTEmpty RBTEmpty)) (RBTNode 12 Black (RBTNode 10 Red (RBTNode 9 Black RBTEmpty RBTEmpty) (RBTNode 11 Black RBTEmpty RBTEmpty)) (RBTNode 14 Red (RBTNode 13 Black RBTEmpty RBTEmpty) (RBTNode 15 Black RBTEmpty (RBTNode 16 Red RBTEmpty RBTEmpty)))))

Since that's completely illegible, let's clean it up, and look at it in picture form: The shortest path from root to leaf is [4,2,1]; the longest is [4,8,12,14,15,16]. Just like we promised: the longest is no more than twice the shortest. It's a pretty good search tree, and the rebalancing work isn't terribly expensive, and amortizes nicely over a long run of inserts. The insert time ends up amortizing to O(lg n), just like the simple binary search tree insert.
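If you want to convince yourself mechanically that a tree like the one above really satisfies the red-black invariants, a small checker is easy to write. This is a sketch of my own, not code from the post - the names noRedRed and blackHeight are mine, and the type declarations are repeated so the snippet stands alone:

```haskell
data Color = Red | Black deriving (Eq, Show)

data RedBlackTree a = RBTEmpty
                    | RBTNode a Color (RedBlackTree a) (RedBlackTree a)
                    deriving (Eq, Show)

-- Invariant 1: no red node has a red child.
noRedRed :: RedBlackTree a -> Bool
noRedRed RBTEmpty = True
noRedRed (RBTNode _ c l r) =
  (c /= Red || (colorOf l /= Red && colorOf r /= Red))
    && noRedRed l && noRedRed r
  where colorOf (RBTNode _ cc _ _) = cc
        colorOf RBTEmpty = Black

-- Invariant 2: every root-to-leaf path has the same number of black
-- nodes. Returns Just that count (counting the empty leaves as black),
-- or Nothing if the invariant is violated somewhere.
blackHeight :: RedBlackTree a -> Maybe Int
blackHeight RBTEmpty = Just 1
blackHeight (RBTNode _ c l r) = do
  hl <- blackHeight l
  hr <- blackHeight r
  if hl == hr
    then Just (hl + if c == Black then 1 else 0)
    else Nothing
```

Together, the two checks imply the path-length bound quoted above: equal black counts plus no red-red pairs force the longest path to be at most twice the shortest.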
Mark, are you familiar with this paper? The gist of it is that imperative implementations of RB trees tend to be more complicated because destructive updates to the tree are possible. In functional languages, you can't do that, so red-black might as well be boiled down to a very simple pattern match of four cases. Not totally sure what your sources are, but it looks closer to a reproduction of an imperative algorithm.

There is a good example of using GADTs to certify the correctness of some red-black tree operations. I think the example may originate from a paper someplace, but I don't know which one.

I like your diagrams - what are you using to draw the trees? I've been coding binary trees in Haskell too, and I'm looking for a way to visualize them. The best I can do so far is dump a representation of the tree to a text file in DOT format and run graphviz on it.

Mark, would you mind making a post about zippers or tries? It seems that those are the most used data structures. I remember barely understanding these in university. I hope I never need to understand them again.

Hmm. This is reminiscent of how I teach my middle school students "how to factor." Draw two factorizations of, say, 24 on the board, and point out how the "balanced" one is faster than the "unbalanced" one. Maybe, someday, one of them will be a computer programmer and this will resonate! ;o/

Dan McKinley makes an excellent point. From p. 27 of Chris Okasaki's Purely Functional Data Structures, the book that expands upon the work in his thesis:

One of the reasons this implementation is so much simpler than typical presentations of red-black trees... is that it uses subtly different rebalancing transformations.
Imperative implementations typically split the four dangerous cases considered here into eight cases, according to the color of the sibling of the red node with a red child. Knowing the color of the red parent's sibling allows the transformations to use fewer assignments in some cases and to terminate rebalancing early in others. However, in a [pure] functional setting, where we are copying the nodes in question anyway, we cannot reduce the number of assignments in this fashion, nor can we terminate copying early, so there is no point in using the more complicated transformations.

The book is one that I think any Haskell programmer who builds his own data structures should own.
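For comparison, the four-case rebalancing that the Okasaki quote describes can be sketched like this. This is a self-contained sketch with its own compact tree type, not the code from the post above:

```haskell
data Color = R | B deriving (Eq, Show)
data Tree a = E | T Color (Tree a) a (Tree a) deriving (Eq, Show)

-- Okasaki's balance: each of the four dangerous red-red shapes is
-- rebuilt into the same red-rooted, black-childed result; any other
-- shape is left alone.
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance color l v r = T color l v r

-- Insert rebalances on the way back up the recursion, and blackens
-- the root at the end.
insert :: Ord a => a -> Tree a -> Tree a
insert x t = blacken (go t)
  where
    go E = T R E x E
    go s@(T color l v r)
      | x < v     = balance color (go l) v r
      | x > v     = balance color l v (go r)
      | otherwise = s
    blacken (T _ l v r) = T B l v r
    blacken E = E
```

No explicit ancestor list is needed: the recursion stack plays the role of the insertion path, which is exactly the simplification the quote is talking about.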
Depletion-driven antiferromagnetic, paramagnetic, and ferromagnetic behavior in quasi-two-dimensional buckled colloidal solids We investigate quasi-two-dimensional buckled colloidal monolayers on a triangular lattice with tunable depletion interactions. Without depletion attraction, the experimental system provides a colloidal analog of the well-known geometrically frustrated Ising antiferromagnet [Y. Han et al., Nature 456, 898–903 (2008)]. In this contribution, we show that the added depletion attraction can influence both the magnitude and sign of an Ising spin coupling constant. As a result, the nearest-neighbor Ising “spin” interactions can be made to vary from antiferromagnetic to para- and ferromagnetic. Using a simple theory, we compute an effective Ising nearest-neighbor coupling constant, and we show how competition between entropic effects permits modification of the coupling constant. We then experimentally demonstrate depletion-induced modification of the coupling constant, including its sign, and other behaviors. Depletion interactions are induced by rod-like surfactant micelles that change length with temperature and thus offer means for tuning the depletion attraction in situ. Buckled colloidal suspensions exhibit a crossover from an Ising antiferromagnetic to paramagnetic phase as a function of increasing depletion attraction. Additional dynamical experiments reveal structural arrest in various regimes of the coupling constant, driven by different mechanisms. In total, this work introduces novel colloidal matter with “magnetic” features and complex dynamics rarely observed in traditional spin systems.
Over the years, experiments with model colloidal suspensions have generated fundamental insights about melting in two- and three-dimensions (2D and 3D),^2–7 crystal physics,^8–10 nucleation kinetics, ^11–16 and the nature and mechanics of disordered solids (glasses).^17–28 These investigations with colloids are complementary to studies of atomic systems because the length- and time-scales in suspension permit direct visualization and tracking of constituents with single-particle resolution.^29 Such studies often unify the soft- and hard-matter phenomenology and explore ideas from statistical mechanics. One fascinating model system along these lines is the buckled colloidal monolayer,^30–37 which consists of a packing of colloidal particles confined by two walls whose separation is ∼1.5 particle diameters. This quasi-2D buckled monolayer of particles on a triangular lattice provides a colloidal analog of the classic frustrated antiferromagnetic Ising model, which was first studied by Wannier ^38 and has been realized in a variety of experiments and simulations.^39–66 In this system, up and down out-of-plane particle displacements are analogous to Ising spins that point, respectively, up and down; the free volume (entropy) of each particle depends on the out-of-plane position of its nearest neighbors and is maximum when neighboring particles buckle in opposite directions. Moreover, by varying the colloid sphere diameter while holding wall separation constant, it is possible to tune nearest-neighbor free-volume differences and thereby vary the effective antiferromagnetic coupling constant. Experiments and theory based on this colloidal system have probed “spin” configurations, “spin” dynamics, and lattice distortions as a function of interaction strength and In this contribution, we introduce a quasi-2D buckled monolayer system that enables tuning of the sign and magnitude of the Ising spin coupling constant in situ. 
As a result, the nearest-neighbor “spin” interactions can vary from antiferromagnetic to para- and ferromagnetic. We demonstrate this basic phenomenon in experiments that employ suspensions of nearly hard-sphere particles of fixed diameter and rod-like micelles whose length can be tuned by varying temperature.^73–76 The rod-like micelles induce a short-range depletion attraction between nearest-neighbor particle pairs with attraction strength that varies with micelle length; this depletion force is nearly zero at low temperature, large at high temperature, and monotonic in-between. The effective Ising coupling constant is set by a combination of the original antiferromagnetic free volume effect (without depletion), which prefers oppositely buckled neighbors, and the depletion attraction, which prefers neighbors with the same buckling. By using temperature to tune depletion attraction strength, we demonstrate modulation of the effective Ising coupling constant, even changing its sign, and we generate experimental state diagrams for the system. We also measure particle dynamics, i.e., spin-flip autocorrelation functions as a function of depletion attraction strength. In parallel, we develop theoretical models to elucidate these effects. A comparison of the experiment and the simplest theory not only corroborates major concepts but also reveals complexities of the colloid system beyond what can be described by the simplest models. Depletion-driven wall interactions, for example, affect energetics and kinetics and introduce new physics. Therefore, for comparison with the experiment, we incorporate some of these features into a more sophisticated and realistic theoretical model, which we use to calculate phase diagrams as a function of temperature, packing, and the ratio of cell-thickness to particle-diameter. In total, this work takes first steps toward the creation of colloidal matter with “magnetic” features rarely observed in traditional atomic systems. 
In addition to equilibrium behavior, the experiments initiate the investigation of the microscopic kinetics and non-equilibrium dynamics of the “magnetic” colloidal matter. The remainder of this paper is organized as follows. Section II A introduces the central concept with a very simple hard-sphere model in quasi-1D; this model shows how the Ising coupling constant can be tuned from antiferromagnetic to the para- and ferromagnetic regimes via the modification of geometrical parameters and short-range depletion attraction. Then, we introduce and discuss more comprehensive quasi-2D models with and without wall attraction; these models provide context for an experimental comparison. Section III details the experimental methods. Section IV presents the primary experimental phase-diagram and quantitative results and briefly discusses dynamical observations. Section V summarizes the findings and suggests directions for future work. A. Quasi 1D: Introduction to basic effect Here, we introduce a simple model first developed for hard-spheres that we generalize to include arbitrary interparticle potentials. The simple model serves to clarify the physical concepts, especially how the Ising coupling constant varies with depletion attraction strength. In Sec. II B, we develop a realistic model that captures more features of our experimental system, including its many-body free energy; using the realistic model, we work out phase diagrams with and without wall interactions. For the reader who is not interested in detailed modeling, we recommend perusing the phase diagrams in Sec. II B and then skipping to Secs. III and IV. The experiments employ hard-sphere colloidal particles arranged on a triangular lattice in the xy-plane (the transverse plane) with in-plane nearest-neighbor spacing, L; the particles have diameter D (see Fig. 1). The particles are confined vertically by two walls, i.e., confined in the out-of-plane z-direction. The wall separation (sample thickness) is H. 
Typically, the thickness-to-diameter ratio, H/D, is ∼1.5 or smaller, and the system volume fraction is nearly close-packed. We first analyze the colloidal system without depletion interactions (closely following prior work^1,68); then, we add small micellar depletants into the system and elucidate new features. With only hard-core repulsive interactions between spheres (and between spheres and walls), the particles seek to maximize their free volume or translational entropy. Thus, neighboring particles tend to move out-of-plane (buckle) in opposite directions. We coarse grain single-particle microstates with the particle center above and below the lattice plane, respectively, into the “up” and “down” macrostates. We assign to each particle a spin value of +1 if the z-component of the particle’s position is above the midplane of the sample cell (z = H/2) and a spin value of −1 otherwise. Qualitatively, nearest neighbors behave like Ising spins that interact antiferromagnetically with a coupling constant, J. Specifically, the interaction energy between neighboring particles is −Js[i]s[j], where s[i] and s[j] denote the spins assigned to particles i and j and J < 0. Generally, we will employ this theoretical model (or more complex versions of the model) to find a spin coupling constant J such that the free energy of the particle lattice best resembles the energy of an Ising lattice with an equivalent spin configuration {s[i]}. Previous work, which reduced the many-body partition function to a single-particle one, has shown for hard-spheres that J quantitatively depends on H/D and L/D.^1,67,68 We next generalize this theoretical approach^67,68 to include interparticle interactions beyond that of hard spheres. Specifically, we will show how short-range attractive interactions between spheres due to added depletants can cause the sign and magnitude of J to change.
The analysis builds on the quasi-1D model of Shokef and co-workers for confined and frustrated colloidal Ising antiferromagnets on a triangular lattice.^67,68 This quasi-1D model focuses on three particles in the hexagonal cell [collinear particles enclosed by a red rectangle in Fig. 1(a)] and computes the relative free area (or free volume) of the central particle in its up vs down state. The microstate of the ith particle is specified by its position in the xz-plane, (x[i], z[i]); the system microstate involves all particles (1 ≤ i ≤ N) on the lattice. The colloid problem is next transformed to a corresponding problem with Ising-like spin configurations. Each particle i is specified by the z-component of its “spin” s[i], where s[i] = ±1. Here, x describes the in-plane position of the central particle (x = 0 corresponds to the horizontal position of the center of the central particle located midway between the outer particles); z is the out-of-plane position of the central particle constrained by walls at z = 0 and z = H. When a particle is above (below) the vertical center of the cell (z = H/2), it is in the spin-up (spin-down) state, s[i] = +1 (s[i] = −1). Note that if the central particle’s two outer neighbors are in opposite spin states, e.g., { − − +} or { − + +}, then the accessible areas associated with the central particle being up vs down are equal. However, when two neighbors are in the same spin state, e.g., both spin-down, then the buckled-up central particle, { − + −}, has more free area [shaded yellow, Fig. 1(b)] than a buckled-down central particle { − − −} [shaded green, Fig. 1(c)]. Shokef and Lubensky showed that this hard-sphere system has an effective antiferromagnetic interaction (βJ < 0) with βJ = −(1/4) ln(A_opp/A_same), where β ≡ 1/(k_B T), k_B is the Boltzmann constant, T is the temperature, and A_opp (A_same) is the free area available to the central particle when it resides in a vertical plane opposite from (the same as) its two neighbors; see Fig. 1(d).
The coupling constant, J, can be changed in magnitude by modifying the system geometry, i.e., H, D, and L. Although approximate, this model largely accounts for observed phenomena. If we permit lattice distortion, the model also elucidates stripe and zigzag spin configurations observed experimentally. We next extend the quasi-1D model to the more general situation wherein interparticle interactions are different from hard-spheres. For clarity, the analysis will focus on the case where both outer neighbors are in spin-down states [Fig. 1(d)]. Consider the probability for the central particle to be either spin-up or spin-down. The probability of the central particle being in a particular “spin” configuration is p_± = Z_±/Z, where Z_± is the integral of the Boltzmann weight over the accessible area, A_±, of the central particle in the up/down spin state, and the “total” partition function, Z = Z_+ + Z_-, is needed for normalization. A constant prefactor in Z_± accounts for the contribution of the integration over momentum and other constants. The integrand in Z_± contains a Boltzmann factor involving the potential energy, U(x, z), felt by the central particle due to its two neighbors. In contrast to the hard-sphere case, these potentials can be non-zero even when the central particle does not overlap physically with neighboring particles (or walls). The upper limit of the in-plane integration, x_max(z), represents the maximum in-plane displacement the particles can have such that they do not overlap physically with their neighbors; x_max(z) is a function of the out-of-plane (vertical) position, z. (Note that we integrate over positive x and account for the bilateral symmetry with a factor of 2.) In the hard-sphere case (without depletants), U(x, z) = 0 within the accessible area of the central particle. Therefore, the Boltzmann weight becomes unity and Z_± become proportional to the total accessible area of the central particle in the up and down spin states, respectively.
Accordingly, the probability that the central particle resides in an up spin state is P({+}) = A[+]/A, and the probability that the central particle resides in a down-spin state is P({−}) = A[−]/A, where A = A[+] + A[−] is the total free area available to the central particle. The Ising coupling constant, J, is determined from the ratio of accessible free areas, $e^{-4\beta J} = A_+/A_-$. Rearrangement of this equation gives the expression for βJ in terms of the free-area ratio. We next include depletion interactions in Eqs. (4) and (5). In the experiments, rod-like surfactant micelles are employed as depletants, and we control the strength and range of the depletion interaction by tuning the rod length (see Fig. 2 and Sec. III for details). Importantly, the numerical values of the Boltzmann weights will no longer be either zero or unity. The depth of the depletion attraction potential is an increasing function of the depletant volume-fraction ϕ[d], which (in practice) can be made roughly constant; the depth is also an increasing function of cylinder major axis length, ℓ, which can be controlled by temperature. An analytical expression for this depletion potential has been worked out and can be written in the form βU^attr(r; ϕ[d], ℓ, d); the function is given in the supplementary material, Sec. S1. The colloidal particle radius is D/2; the potential is infinite when particles touch, zero at long range, and attractive when particle surfaces are separated by a distance less than ∼ℓ. Thus, U(r) need not be zero, and the Boltzmann weight need not be unity when the particles do not physically overlap. In this case, Boltzmann integrals will no longer give true physical areas. As a result, we obtain a modified version of the coupling-constant relation that relates βJ to a ratio of effective areas, $A_+^{\rm eff}/A_-^{\rm eff}$, defined by the integrals in Eqs. (4) and (5). The ratio of effective areas is obtained by the evaluation of the integrals weighted by the depletion potential. In practice, at low temperature and relatively low volume-fraction [Fig. 2(a)], the rods are very short, and the interaction between the large particles (with diameter D) is hard-sphere-like. Then, $A_+^{\rm eff}/A_-^{\rm eff} \approx A_+/A_- > 1$ [Fig.
1(d)] and J < 0. The system is antiferromagnetic. However, when the sample temperature increases [Fig. 2(b)], the rod length increases. When the rod length increases, the interparticle potential well-depth and the range of the potential increase. This effect causes the ratio of the effective areas to vary. At high temperature, the ratio $A_+^{\rm eff}/A_-^{\rm eff} < 1$ [Fig. 1(e)] and the Ising coupling constant J > 0. Thus, the sign of the effective Ising coupling constant can be changed from negative to positive by increasing the depletion interaction strength. This increasing nearest-neighbor particle interaction will thus transform the sample from antiferromagnetic to paramagnetic states (and to a ferromagnetic state when βJ ≥ 0.275; see Appendix C). In Fig. 3, we display the results of numerical calculations based on this simple quasi-1D model with the depletion interaction due to cylindrical micelles at a fixed volume fraction. At low temperature, when the depletants are approximately small spheres, $A_+^{\rm eff}/A_-^{\rm eff}$ gives a 69% (31%) probability of being buckled-up (buckled-down) and βJ < 0. At slightly higher temperatures with the same volume fraction, the depletants become more rod-like, the effective areas become comparable, and βJ ≈ 0. Finally, when the micelles become long rods at high temperatures, the probability of the central particle buckling-up (buckling-down) is 34% (66%) and βJ > 0. Thus, the simple model clearly suggests that depletion attraction facilitates variation of the Ising coupling constant from negative to positive values (passing through zero). B. Realistic quasi-2D models, phase diagrams 1. Quasi-2D and bulk interactions The quasi-1D model introduced in Sec. II A illustrates key concepts but has significant limitations. It ignored the many-body nature of the Hamiltonian of the particle system, and the model was quasi-1D.
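The quasi-1D free-area calculation can be sketched numerically. The sketch below assumes circular cross-sections in the xz-plane, places both spin-down neighbors at (±L, D/2) (resting on the bottom wall), and uses a hypothetical square-well attraction as a stand-in for the micelle depletion potential of the supplementary material; it is an illustration of the mechanism, not the paper's calculation:

```python
import numpy as np

D = 1.0  # particle diameter (all lengths in units of D)

def beta_J(H, L, beta_U=None, n=500):
    """Quasi-1D estimate of beta*J from (effective) free areas.

    The central particle sits at (x, z); both neighbors are spin-down,
    with centers assumed at (+/-L, D/2).  Walls confine the center to
    D/2 <= z <= H - D/2.  beta_U(r) is a dimensionless pair potential
    beyond hard-core contact (None = pure hard spheres).
    Uses e^{-4 beta J} = Z(+)/Z(-).
    """
    if beta_U is None:
        beta_U = lambda r: np.zeros_like(r)
    x = np.linspace(0.0, L, n)              # x >= 0; bilateral symmetry
    z = np.linspace(D / 2, H - D / 2, n)
    X, Z = np.meshgrid(x, z)
    r1 = np.hypot(L - X, Z - D / 2)         # distance to near neighbor
    r2 = np.hypot(L + X, Z - D / 2)         # distance to far neighbor
    allowed = (r1 >= D) & (r2 >= D)         # hard-core exclusion
    w = np.where(allowed, np.exp(-(beta_U(r1) + beta_U(r2))), 0.0)
    Zp = w[Z > H / 2].sum()                 # buckled-up weight
    Zm = w[Z <= H / 2].sum()                # buckled-down weight
    return -0.25 * np.log(Zp / Zm)

# Hard spheres: the buckled-up state has more room, so beta*J < 0 (AF).
bJ_hard = beta_J(H=1.3, L=1.05)

# Hypothetical square well (depth 3 kT, range 0.15 D) standing in for the
# micelle depletion attraction: it favors the down state and shifts beta*J up.
well = lambda r: np.where(r < D + 0.15, -3.0, 0.0)
bJ_attr = beta_J(H=1.3, L=1.05, beta_U=well)
print(bJ_hard < 0 and bJ_attr > bJ_hard)  # -> True
```

For the chosen geometry (H/D = 1.3, L/D = 1.05), the hard-sphere coupling is negative and the attraction pulls it toward zero, mirroring the AF-to-P trend described in the text.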
A more realistic calculation should extend this problem to three dimensions (3D) and should aim to recover system free energy contributions due to more degrees of freedom of the central particle neighbors (e.g., nearest and next-nearest neighbors). To this end, we extend the model to quasi-2D, we take into account the free energy of the central particle and its nearest neighbors, and we compute βJ from a statistical average over all the possible neighbor configurations, i.e., 2^18 possible configurations. To account for the free energy of the central particle and its nearest neighbors, we must include the state of all particles in the central particle’s nearest and next-nearest neighbor rings; this gives 2^18 possible spin configurations. For each configuration i, an optimal βJ[i] can be computed that is associated with the energy difference of the configuration with the central particle buckled up vs down (details in Appendix B). To compute the ensemble averaged βJ for the quasi-2D system, we average over all possible values, ⟨βJ⟩ = Σ[i]w[i]βJ[i], where w[i] is the Boltzmann weight for observing the ith configuration. This approach incorporates the relative probability of observing each nearest-neighbor spin configuration. We carried out these calculations computationally using the depletion potential for cylindrical micelles. Figure 4(a) shows predictions of this more realistic quasi-2D model as a function of depletion attraction strength (βU[min]) and diameter-normalized lattice spacing (L/D) and cell thickness (H/D). βU[min] increases from left-to-right in the plot; note that both the magnitude of the depletion attraction and the temperature increase moving left-to-right. The color scale in the phase diagrams corresponds to different regimes of the predicted Ising coupling constant.
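The Boltzmann-weighted configuration average described above has a simple generic structure: enumerate neighbor configurations, weight each by exp(−βE), and average the per-configuration coupling. The sketch below uses a six-spin ring and toy stand-in functions (ring_bJ and ring_E are hypothetical, purely for illustration) instead of the full 2^18-state enumeration:

```python
import itertools
import numpy as np

def ensemble_beta_J(config_beta_J, config_energy, n_spins=6):
    """<beta J> = sum_i w_i * (beta J)_i over neighbor configurations,
    with normalized Boltzmann weights w_i = exp(-beta E_i) / Z.

    The full quasi-2D model enumerates the 2^18 states of the nearest
    and next-nearest neighbor rings; n_spins is kept small here.
    """
    configs = list(itertools.product([-1, 1], repeat=n_spins))
    bJ = np.array([config_beta_J(c) for c in configs])
    w = np.exp(-np.array([config_energy(c) for c in configs]))
    w /= w.sum()                    # normalized Boltzmann weights
    return float(np.dot(w, bJ))

# Hypothetical stand-ins: the per-configuration coupling depends on the
# ring magnetization, and the ring energy is a nearest-neighbor sum.
ring_bJ = lambda c: -0.1 + 0.02 * sum(c)
ring_E = lambda c: 0.3 * sum(c[i] * c[(i + 1) % len(c)] for i in range(len(c)))
avg = ensemble_beta_J(ring_bJ, ring_E)
print(round(avg, 6))  # -> -0.1 (the magnetization term averages to zero)
```

Because the toy energy is invariant under a global spin flip while the magnetization term is odd, the weighted average reduces to the baseline value, a quick sanity check on the bookkeeping.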
The blue region corresponds to the frustrated antiferromagnetic (AF) phase where −3 < βJ < 0; beige corresponds to the paramagnetic phase (P) where 0 < βJ < 0.275, and red corresponds to the ferromagnetic phase (F) where βJ ≥ 0.275^78,79 (see also Appendix C). Generally, the system starts in the AF region at low temperature, where the depletion attraction is very small, and evolves from AF (J < 0) to P (J > 0) as the magnitude of the attractive depletion interaction increases. Note that, for fixed H/D, a larger depletion attraction is required to reach the (AF–P) crossover for increasing L/D. Note also that, for fixed L/D, a larger depletion attraction is required to reach the (AF–P) crossover for increasing H/D. We quantitatively characterize “magnetic” order in the samples using the ensemble-averaged number of similar bonds, ⟨N[s]⟩. Here, bonds refer to nearest-neighbor pairs involving particles i and j. A similar bond between two particles occurs when a particle and its neighbor are in the same Ising spin state (s[i]s[j] = 1); a dissimilar bond occurs when a particle and its neighbor are in opposite Ising spin states (s[i]s[j] = −1). Notably, ⟨N[s]⟩ is a measurable and calculable parameter that helps identify the crossover from antiferro- to para- to ferromagnetic phases. ⟨N[s]⟩ also helps distinguish strong vs weak coupling in samples with the same magnetic classification. In Fig. 5(a), all possible nearest-neighbor configurations for a central particle in the spin-up state are shown, and the number of similar bonds for the central particle is indicated. An equivalent set of configurations can be constructed for a central particle in the spin-down state. (As an aside, the term frustrated bond and the number of frustrated bonds, ⟨N[f]⟩, are sometimes used in the literature; note that a similar bond is equivalent to a frustrated bond for spins in the antiferromagnetic phase.)
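The similar-bond count defined above is straightforward to compute from a spin field. A minimal sketch on a periodic triangular lattice in axial coordinates (the coordinate convention and function names are ours):

```python
import numpy as np

# Six nearest-neighbor offsets on a triangular lattice (axial coordinates)
NEIGHBORS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]

def similar_bonds(spins):
    """Per-particle N_s: number of same-spin nearest neighbors
    (s_i * s_j = 1), with periodic boundaries."""
    s = np.asarray(spins)
    Ns = np.zeros(s.shape)
    for dr, dc in NEIGHBORS:
        Ns += s * np.roll(np.roll(s, dr, axis=0), dc, axis=1) == 1
    return Ns

# Uncorrelated spins (J = 0): each of the 6 bonds is similar with
# probability 1/2, so <N_s> -> 3 and var(N_s) -> 6/4 = 1.5.
rng = np.random.default_rng(0)
Ns = similar_bonds(rng.choice([-1, 1], size=(64, 64)))
print(round(Ns.mean(), 2), round(Ns.var(), 2))
```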
Figure 4(b) shows the relationship between ⟨N[s]⟩, βJ, and βU[min] for a fixed lattice spacing of L/D = 1.01 [indicated by the white dashed line in Fig. 4(a)]. To obtain these curves, theoretical parameters [βU^attr(r; ϕ[d], ℓ, d), H/D, L/D] are input into our numerical model, and βJ is computed. This model prediction for the coupling constant, βJ, is then inserted into Wannier’s analytic theory^38,80 to derive a value for ⟨N[s]⟩. Theoretical curves, such as those in Fig. 4(b), will be used later [Fig. 5(b)] to characterize experimental data. A few features from Fig. 5 are worth noting for future analysis. The realistic theory demonstrates that for smaller H/D, a smaller depletion attraction is needed to reach the AF–P crossover; it also explicitly reveals that the number of similar bonds increases rapidly in the paramagnetic regime. For larger H/D, a larger depletion attraction is required to reach the AF–P crossover; note also that, for larger H/D, a more gradual increase in ⟨N[s]⟩ is found in the paramagnetic regime compared to smaller H/D. Qualitatively, these trends are a consequence of the smaller fraction of time particles spend in close contact as L/D and H/D increase; with fewer close contacts, the importance of the depletion attraction is reduced (since depletion acts only when particles are in close contact). Put another way, when particles have more room to move, the influence of the short-range depletion attraction is reduced. 2. Wall interactions The presence of the wall introduces energetic changes, which can affect trends.
In dilute systems, the particle–wall depletion attraction is approximately twice as strong as the depletion attraction between two particles in the bulk suspension; this effect has been measured and can be understood from purely geometric considerations.^73,81–84 This effect further modifies the ratio of effective free volumes ($V_+^{\rm eff}/V_-^{\rm eff}$) associated with buckling up vs down; for example, when all of the nearest neighbors are on the top wall, then the central particle will have access to a different patch of the physical area on the top wall compared to the bottom wall. This behavior tends to enhance the primary depletion effect discussed in Sec. II A (see the supplementary material, Fig. S4) and leads to a small shift in the crossover condition from one magnetic phase to another (see blue and red dashed lines in Fig. 4). Specifically, for the case where all of the central particle’s nearest neighbors are up, the depletion effect is stronger for the particle on the top wall compared to the bottom wall. Therefore, the coupling constant βJ becomes more positive. As a result, the AF–P crossover region shifts to the left in the phase diagram (dashed lines in Fig. 4), indicating that the AF–P transition will occur at lower βU[min] (lower temperature). Since the phase diagram depends on the free energy difference, the basic features of the phase diagram are largely unchanged by the wall attraction. However, the energy barrier for a particle to escape from the wall, and thus flip from one wall to the other, is made much larger by the wall depletion attraction. As a result, the system dynamics will become slower with the increasing attraction strength. This phenomenon has important practical consequences for experiments, which we will discuss in Sec. IV. This section discusses the samples and experimental execution.
In addition, we provide details about image analysis, image processing techniques, and measurements of interparticle potentials (depletion attraction strength). A. Particles and particle preparation The colloidal particles employed in the experiments were polystyrene microspheres (Thermo Fisher Scientific) with a manufacturer-measured diameter, D = 1.0 ± 0.01 µm. To prepare the sample, the microspheres were rinsed in ultrapure water (18.2 MΩ·cm), and aggregates were removed by repeated centrifugation. The cleaned particles were then resuspended in 55 mM aqueous solutions of the surfactant hexaethylene glycol monododecyl ether (C[12]E[6]). C[12]E[6] self-assembles into rod-like micelles with temperature-dependent length that permits temperature-tuning of the depletion attraction between colloidal particles. NaCl was added to the solution at a concentration of 2 mM, which gives a Debye screening length of κ^−1 ≈ 7 nm. B. Sample cell preparation and sample observation Sample cells were constructed by sandwiching the suspension between two 20 × 50 mm^2 glass coverslips (No. 1.5, 170 μm-thick, Electron Microscopy Sciences). The coverslips were cleaned to remove impurities from their surfaces by soaking each slip in base solution for >30 min. The base solution contained 5 g of sodium hydroxide (NaOH) in a solvent consisting of 20 ml of ultrapure water (18.2 MΩ·cm) and 30 ml of ethanol. After soaking, the coverslips were washed with ultrapure water and then with isopropyl alcohol, and finally, they were dried with compressed air. The coverslips were separated/spaced via the application of two rows and three columns of UV glue drops (Norland adhesive 65) onto the bottom cover slip. After drying, the coverslips were aligned and sandwiched at a small angle; binding was achieved via capillary forces. Approximately 25 µl of the sample solution was pipetted into the sample cell, and then, the remaining peripheral openings were sealed.
Finally, the resulting quasi-2D sample wedge-cell was mounted onto a glass slide for improved stability. These sample cell units were imaged via transmission light microscopy on the stage of an inverted microscope (Leica DMIRB) and were viewed through a 100× oil-immersion objective. The sample temperature was controlled by a heater (peCon GmbH) on the objective, with 0.1 °C resolution. Colloidal particle positions and brightness (intensity) were collected by video microscopy using a CCD camera (UNIQ, UP-685-CL) with a resolution of 659 × 494 pixels at a frame rate of 27.5 frames per second. Particle positions and motions were analyzed with standard particle tracking methods. C. Temperature-tunable depletion interparticle potentials and wall interactions The depletion interaction potential between colloidal particles, U(r), depends on the concentration and character of tiny suspended depletants. For the case of the C[12]E[6] surfactant, the surfactant molecules self-assemble into micelles, and the geometric shape of these micelles affects both the strength and range of the depletion attraction.^73,76,81,84–86 Specifically, C[12]E[6] assembles into nanometer-size micelles that evolve from sphere-like for T < 20 °C to rod-like for T > 20 °C.^76,87 The length of the micelle minor axis is d = 4.3 nm; it does not change with temperature. However, the micelle major axis length, ℓ, grows with the increasing temperature. This effect enables control of the attraction strength between colloidal particles in situ, which is critical for tuning the Ising coupling constant. Experimentally, we derived the normalized depletion pair potential, βU(r), from measurements of the radial distribution function, g(r), in samples at low particle packing fraction (ϕ[p] ∼ 0.013). For analysis, we assumed U(r) = −k[B]T ln g(r); use of dilute samples reduces the influence of many-body interactions and ameliorates the need to invert the data using integral equations.
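The dilute-limit inversion βU(r) = −ln g(r) can be sketched directly. In the check below, a synthetic g(r) is generated from a known (hypothetical) attractive well purely to verify the round trip; it is not the measured C[12]E[6] potential:

```python
import numpy as np

def beta_U_from_gr(g, g_floor=1e-12):
    """Dilute-limit pair potential from the radial distribution function,
    beta*U(r) = -ln g(r); regions with g ~ 0 map to +inf (hard core)."""
    g = np.asarray(g, dtype=float)
    return np.where(g > g_floor, -np.log(np.maximum(g, g_floor)), np.inf)

# Synthetic check: hard core at r < D plus an exponential attractive well
# of depth 2 kT (hypothetical parameters, lengths in units of D).
r = np.linspace(0.9, 2.0, 221)
bU_true = np.where(r < 1.0, np.inf, -2.0 * np.exp(-(r - 1.0) / 0.1))
g = np.exp(-bU_true)                 # ideal dilute-limit g(r)
bU = beta_U_from_gr(g)
print(np.isinf(bU[0]), round(bU.min(), 3))  # hard core and well depth recovered
```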
This procedure for extracting βU(r) is outlined in detail for C[12]E[6] by Gratale et al.^76 Figure S2 of the supplementary material displays the resulting data and shows that βU[min], the absolute value of the maximum attraction depth of βU(r), becomes larger with the increasing temperature. Note that, even at our lowest temperature, the depletion attraction is non-zero; thus, the experimental conditions at low temperature differ somewhat from the expectations of our theoretical models. Additionally, as described in Sec. II B 2, depletion interactions between walls and particles can become significant at high temperatures. The particle–wall attraction at contact is approximately twice that between two particles in the bulk. This effect has a small influence on the equilibrium phase diagram. However, particles can also become kinetically arrested because the energy barrier to escape from the wall is large. Our experiments revealed this kinetic arrest. D. Average number of similar neighbor bonds (⟨N[s]⟩) and magnetic state For system characterization and comparison of experiment to theory, we analyzed up/down spin data for the whole sample (see Appendix A). Specifically, from the images, we derived the average number of similar neighbor bonds, ⟨N[s]⟩, and its variance. We found that ⟨N[s]⟩-based metrics are robust to sample systematics compared to other possible variables. In the supplementary material, Sec. S2, we briefly describe some systematic effects that drove our decision to focus on ⟨N[s]⟩. E. Wall- and lattice-spacing measurements The Ising coupling constant depends on the ratio of the wall- and lattice-spacing to the particle diameter (H/D and L/D). We discuss our procedures to measure H, D, L, H/D, and L/D in the supplementary material (Sec. S4). In practice, H/D for the different samples was 1.23 ± 0.08, 1.29 ± 0.08, and 1.55 ± 0.07. For L/D, the mean value averaged across samples was 1.01.
The variation of H/D across the field-of-view is small (∼5%); this leads to a relatively small variation of the coupling constant (±10%), which does not change any of our conclusions about systematic coupling constant variation and transitions. F. Temperature variation procedures To demonstrate general ideas about Ising coupling constant variation, we collected data at discrete temperatures and employed an experimental protocol to attenuate the effects of dynamic arrest. Equilibrium behavior was only truly achievable in lower temperature regimes. At low temperatures, the dynamics were fast, and the samples rapidly equilibrated. Therefore, before every temperature change, we opted to allow the sample to equilibrate at the lowest temperature (in the antiferromagnetic phase). Thereafter, we ramped the sample temperature rapidly to a higher value, and we collected sample images following the realization of the higher temperature for ∼12 min. After completion of the high-temperature video, we rapidly lowered the sample back to the lowest temperature and allowed the sample to equilibrate before initiating the next temperature jump. A typical temperature cycle example follows below. We first chose a sample with particular H/D and L/D. We then cooled it to 21 °C and acquired video microscopy data. Next, the temperature was rapidly increased to a higher temperature (e.g., 31 °C), and another video stream was collected after the sample reached steady-state at this temperature. Then, the sample was cooled back to 21 °C; data were collected; and after equilibration, the jump cycle was repeated for other target temperatures. Note that this procedure generated the most data at 21 °C. Target temperatures were acquired in the following order: 31, 29, 27, 25, 23, and again at 31 °C. Although imperfect, this approach enabled us to start all samples in approximately the same fluctuating AF phase.
Of course, the approach to steady-state, especially for high temperature samples, which do not equilibrate, is influenced by the microstates the system passes through during this process. The same temperature cycling was repeated for different H/D. In total, we explored six different temperatures (21, 23, 25, 27, 29, 31 °C) at three different H/D, taking a series of images/movies at 27.5 fps for 20 000 frames (∼12 min). G. Numerical calculations Calculation of the Ising coupling constant, βJ, requires numerical integration of $Z({+})$, $Z({−})$, and $Z$. To this end, we developed a custom Python code to calculate integrals in quasi-1D and quasi-2D. For the calculations, the Boltzmann weight utilizes the depletion potential (defined in the supplementary material, Sec. S1), which depends on surfactant volume fraction (ϕ[d]), micelle rod-length (ℓ), and diameter (d). The integrals thus depend on geometric parameters H, L, and D and surfactant parameters ϕ[d], ℓ, and d. Details about the numerical integrations are supplied in the supplementary material, Sec. S3. A. Samples with small thickness-to-diameter ratio: Experiment and model comparison 1. Static structural properties We first examine the buckled colloidal system under conditions where we expect the free-volume modeling to be most accurate, i.e., experiments with the smallest H/D (Fig. 6). In the low-temperature range, the static structural properties of this sample exhibit many anticipated features, notably a change in the sign of the coupling coefficient. At higher temperatures, the large depletion attraction to the sample walls introduces significant dynamic arrest, and therefore, these samples cannot fully equilibrate. In this subsection, we present and discuss findings about the static structure. Later, we will discuss dynamics and the behaviors of colloidal systems with larger H/D, which can deviate more from the model assumption of a static in-plane lattice. Experimental images (Fig. 
6) qualitatively show the basic effect. The structures evolve from distributions of short stripes and zigzags to clusters of the same spin as the temperature is increased. More quantitatively, in Ising systems, when the spins are very weakly interacting, we expect that ⟨N[s]⟩ = 3. However, when the coupling becomes strong, the “ideal” Ising system behaves differently for antiferromagnetic (J < 0) vs paramagnetic/ferromagnetic interactions (J > 0). Ideal frustrated antiferromagnetic systems on a fixed lattice (and even on lattices that can mechanically deform) will exhibit 2 ≤ ⟨N[s]⟩ ≲ 3. As the temperature approaches zero, a frustrated antiferromagnet without lattice distortion will have ⟨N[s]⟩ close to 2; for lattices that can deform, well-known stripe and zigzag configurations arise and also give ⟨N[s]⟩ close to 2.^1,67,68 On the other hand, with a positive coupling constant (J > 0), the spins prefer to align parallel. For the paramagnetic phase (0 < βJ < 0.275), we expect 3 ≲ ⟨N[s]⟩ ≲ 5, and for the ferromagnetic phase (βJ ≥ 0.275), we expect ⟨N[s]⟩ ≳ 5.^38,80 Note that the largest changes of ⟨N[s]⟩ per unit change of βJ occur in the paramagnetic phase. In Fig. 5(b), both experimental data and model predictions are plotted vs the strength of the depletion attraction (βU[min]). The data (solid and open circles) represent experimental measurements of ⟨N[s]⟩. The black solid circles are derived from experiments where the particles freely move between the up and down states and the samples equilibrate. The open circles, by contrast, are derived from experiments at higher temperatures where attractive forces between the particles and the walls are strong; in these cases, dynamic arrest hinders particle motion, and the samples do not equilibrate. On this plot, ⟨N[s]⟩ is extracted from the experiment via particle tracking, and U[min] is experimentally measured.
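The qualitative ⟨N[s]⟩ expectations quoted above (⟨N[s]⟩ → 3 for weak coupling, ⟨N[s]⟩ → 2 for a strong frustrated antiferromagnet, large ⟨N[s]⟩ for J > 0) can be reproduced with a generic single-spin-flip Metropolis simulation of the triangular-lattice Ising model. This is a standard textbook algorithm offered as an illustration, not the calculation used in the paper:

```python
import numpy as np

NB = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, -1), (-1, 1)]  # triangular lattice

def mc_mean_Ns(bJ, size=16, sweeps=300, seed=1):
    """Metropolis estimate of <N_s> for H = -J * sum_<ij> s_i s_j
    on a periodic triangular lattice (axial coordinates)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(size, size))
    for _ in range(sweeps * size * size):
        i, j = rng.integers(size, size=2)
        nb = sum(s[(i + di) % size, (j + dj) % size] for di, dj in NB)
        dbE = 2.0 * bJ * s[i, j] * nb          # beta*dE for flipping s[i,j]
        if dbE <= 0 or rng.random() < np.exp(-dbE):
            s[i, j] *= -1
    Ns = np.zeros((size, size))
    for di, dj in NB:
        Ns += s * np.roll(np.roll(s, di, axis=0), dj, axis=1) == 1
    return Ns.mean()

# Weak coupling -> <N_s> near 3; strong frustrated AF -> approaches 2;
# J > 0 beyond the crossover -> similar bonds proliferate.
a, b, c = mc_mean_Ns(0.0), mc_mean_Ns(-1.0), mc_mean_Ns(0.5)
print(round(a, 2), round(b, 2), round(c, 2))
```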
Additionally, an expected βJ is extracted from Wannier’s Ising model solution (right vertical axis) corresponding to ⟨N[s]⟩ on the left vertical axis (Appendix D). The solid line is derived from our quasi-2D theoretical model, including wall interactions (see Sec. II B). Figure 5(b) shows that at the lowest depletion attraction (lowest temperature), where βU[min] ≈ −1.7, the number of similar bonds ⟨N[s]⟩ < 3 and βJ < 0. These samples are clearly in the antiferromagnetic phase. When the attraction is increased to βU[min] ≈ −2.2, then the number of similar bonds ⟨N[s]⟩ ≈ 3.2 and βJ > 0. These data demonstrate that the sample has evolved from the antiferromagnetic regime to the paramagnetic regime [Fig. 6(b)]. The experimental data thus show the Ising coupling constant transitions from negative to positive values, consistent with our theoretical model and exhibiting our primary expectation. At higher attraction strengths where the depletion attraction becomes more dominant, βU[min] ≈ −2.7 to −4.3, ⟨N[s]⟩ ≈ 3.9–4.4, and βJ ≈ 0.19–0.24. In this case, the system is approaching the paramagnetic–ferromagnetic crossover. Note, however, at 29 and 31 °C, the number of similar bonds stays nearly the same; this is because the large depletion attraction between the particles and walls induces dynamical arrest, preventing the sample from reaching equilibrium. Due to the dynamical arrest, the particles under these conditions do not flip enough to significantly rearrange. This effect prevents the observation of the P–F crossover. Model predictions deviate from experimental observations at high temperatures because arrested dynamics prevent equilibration. The structural observations related to ⟨N[s]⟩ provide experimental evidence for the crossover from one magnetic phase with J < 0 to another with J > 0. A second measurable morphological parameter worthy of quantitative exploration is the variance of N[s], i.e., $\mathrm{var}(N_s) = \langle N_s^2\rangle - \langle N_s\rangle^2$. The variance is plotted vs ⟨N[s]⟩ in Fig. 7.
In Fig. 7, the solid line shows the behavior predicted by the Ising model. The measured data for samples with all H/D are also shown. Here, we focus on the smallest H/D (blue triangles). At small and intermediate depletion attraction, its variance ranges from var(N[s]) ≈ 1.3 to var(N[s]) ≈ 2.0, following predictions of the Ising model in both the antiferromagnetic and paramagnetic regimes. At the higher temperatures, however, when the sample experiences dynamic arrest, var(N[s]) ≈ 2.7; clearly, the samples in this higher temperature range have been quenched into local free energy minima with many particles remaining stuck at the walls. At these higher temperatures, we can visually confirm the paramagnetic phase (J > 0, below the P–F transition) from the presence of large clusters of particles with similar spin, but clearly, the high temperature samples are out-of-equilibrium and do not conform to expectations of the Ising model. Thus, the variance data for small H/D are largely consistent with our findings (and expectations) based on ⟨N[s]⟩ alone. 2. Dynamics and structural arrest We next examine the temporal dynamics of samples with H/D = 1.23. We focus on the simplest temporal fluctuations: single-particle spin-flip dynamics. To quantitatively analyze spin-flip dynamics, we collect single-particle “spin” trajectories as a function of time. We determine s[i](t) for each particle in the video field-of-view. Using these trajectories, we compute the single-particle spin-flip temporal autocorrelation function, C(t) = ⟨s[i](t[0] + t)s[i](t[0])⟩. Here, the angled brackets indicate averages over all particles. We fit the resultant curves to a stretched-exponential, C(t) = exp[−(t/τ)^α]. From the fits, we extract a relaxation time, τ, and stretching factor, α. The data and fitting results are shown in Fig. 8, and the corresponding best-fit parameters are tabulated in the supplementary material, Table S1. Stretched exponentials have 0 < α < 1 and can be indicative of heterogeneity amongst single-particle relaxation times.
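The autocorrelation analysis can be sketched with synthetic telegraph-noise "spins": for an independent flipper with per-frame flip probability p, C(t) = (1 − 2p)^t decays exponentially with τ = −1/ln(1 − 2p), and a simple 1/e-crossing estimate should recover that timescale. (The full analysis instead fits the stretched-exponential form; the parameters below are hypothetical.)

```python
import numpy as np

def spin_autocorrelation(S, max_lag):
    """C(t) = <s_i(t0 + t) s_i(t0)>, averaged over particles i and start
    times t0.  S has shape (n_frames, n_particles) with entries +/-1."""
    n = S.shape[0]
    return np.array([np.mean(S[t:] * S[: n - t]) for t in range(max_lag)])

# Synthetic telegraph spins: each particle flips with probability p per frame.
rng = np.random.default_rng(3)
p, n_frames, n_particles = 0.02, 4000, 200
flips = rng.random((n_frames, n_particles)) < p
S = np.where(np.cumsum(flips, axis=0) % 2 == 0, 1, -1)
C = spin_autocorrelation(S, max_lag=100)
tau_est = int(np.argmax(C < np.exp(-1.0)))   # first 1/e crossing, in frames
tau_true = -1.0 / np.log(1.0 - 2.0 * p)      # ~24.5 frames for p = 0.02
print(tau_est)
```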
When βU[min] is comparatively small, C(t) decays rapidly and is roughly exponential. In this regime, the relaxation time τ increases with depletion attraction strength (i.e., increasing temperature), and the temporal fluctuations are significant, enabling the system to reach equilibrium. At higher temperatures (starting at a depletion attraction strength of βU[min] ≈ −3.3 at T = 27 °C), the system becomes dynamically arrested. Dynamic arrest sets in when the system is deep in the paramagnetic regime, relatively close to the ferromagnetic phase. The attenuation of fluctuations at higher temperature is primarily due to the depletion attraction between the particles and the wall. This depletion attraction energy for particles near walls has been studied^73,77,81–84 and is expected to be approximately twice that of two colloidal particles in the bulk. The local energy barrier for wall escape (near both walls) becomes significant for T > 27 °C, and the system is effectively quenched into a local minimum of the free energy landscape at some point during the sample processing (during the temperature jump). At the highest temperatures, the samples exhibit structures akin to frozen-in non-equilibrium paramagnetic phases. Dynamic arrest prevents transition to the ferromagnetic phase. Interestingly, wall interactions only slightly affect the equilibrium free energy (see dashed lines in Fig. 4), but they dramatically affect spin-flip rates. This phenomenology is consistent with the conclusions we arrived at from analysis of structural data. It provides an interesting physics contrast between the colloidal “magnetic” system and the atomic spin systems. B. Samples with large wall separations 1. Static structural properties In this final subsection, we summarize the behaviors of the colloidal systems with larger H/D (H/D = 1.29, 1.55).
At larger H/D, the samples more readily accommodate lattice distortions, and therefore, their behavior will deviate from that of systems with a fixed in-plane lattice. As a result, we anticipate that the simple free-volume models will be less realistic for larger H/D, and the analogies to atomic Ising spin systems will be weaker. Nevertheless, these systems exhibit interesting similarities and differences compared to samples with H/D = 1.23 and the more idealized models. We first describe structural behavior. For the system with H/D = 1.29, Figs. 9(a) and 9(b) provide images at different temperatures that qualitatively reveal the existence of a structural crossover where the coupling-constant changes sign. The images suggest evolution from the frustrated antiferromagnet to paramagnet. More quantitatively, for the lowest temperature (21 °C), βU[min] ≈ −1.7, ⟨N[s]⟩ ≈ 2.5, and βJ ≈ −0.29. For the highest temperature (31 °C), ⟨N[s]⟩ = 3.3 at βU[min] ≈ −4.3, and βJ ≈ 0.07. The crossover from the negative to positive coupling constant occurs between 23 and 25 °C. This structural behavior is consistent with the theoretical expectation that the AF–P crossover transition will arise but will slightly shift (see Fig. 4) to occur at greater depletion attraction compared to samples with smaller H/D. When more free volume is accessible to the particles (with increasing H/D), the fraction of phase-space for which the depletion attraction is important is reduced, and the crossover shifts to require more depletion attraction. The orange pentagons in Fig. 7 show the variance vs the number of similar bonds for H/D = 1.29. At low depletion attraction (21 °C) [Fig. 9(a)], ⟨N[s]⟩ ≈ 2.5 and var(N[s]) = 0.73. This difference reflects a morphology of stripes and zig-zags that differs from that of samples with H/D = 1.23, which have shorter and more randomly oriented stripes.
The observed small ⟨N[s]⟩ and var(N[s]) is consistent with the emergence of the zigzag-stripe ordered ground state that is expected to occur when lattice distortion can partially relieve frustration.^88–90 The rest of the var(N[s]) vs ⟨N[s]⟩ data falls nicely on the predictions of the Ising model and exhibits the transition from a frustrated antiferromagnet to a paramagnet. At the highest temperature [Fig. 9(b)], the images show substantial clusters of buckled-up (or buckled-down) particles residing in the same plane, as might be expected in the paramagnetic phase, and traces of zigzag-stripe configurations perhaps where the lattice has distorted for some uncontrolled reason. The system with H/D = 1.55 exhibits zigzag-stripe configurations exclusively [see Figs. 9(d) and 9(e)]. ⟨N[s]⟩ < 3 for all temperatures, and var(N[s]) ≈ 0.31–0.84 is small for all temperatures (see green circles in Fig. 7). Superficially, the ⟨N[s]⟩ and var(N[s]) data fall roughly on the predicted Ising model curve, and the primary structural observations in the frustrated antiferromagnetic state are consistent with prior work.^1,67 However, in practice (see below), these samples experience dynamic arrest at all temperatures. More careful examination of Fig. 7 reveals significant fractional deviation from the Ising model predictions for relatively small ⟨N[s]⟩ and var(N[s]); the deviation is less obvious than the case of the smallest H/D sample in the paramagnetic regime perhaps because the fractional changes of ⟨N[s]⟩ and var(N[s]) per unit change of βJ are large in the paramagnetic regime. 2. Dynamic properties Figure 9(c) shows spin-flip autocorrelation functions for H/D = 1.29 samples. These autocorrelation functions exhibit trends similar to those of the H/D = 1.23 sample.
That is, we observe a low-temperature regime with roughly exponential dynamics that decay rapidly and thus allow the system to evolve to equilibrium states, but at higher temperatures, the samples exhibit dynamic arrest due to the depletion attraction of particles to the walls. By contrast, the dynamics for the samples with H/D = 1.55 are very slow and non-exponential at all temperatures [see Fig. 9(f)]. This behavior is qualitatively different from the two systems with smaller H/D. The dynamics at low temperatures are slow because the system resides in the strongly frustrated antiferromagnet regime. The dynamics at the high temperatures are slow too, and like the samples with smaller H/D, this arrest effect is due to the strong depletion attraction of the particles to the walls. Thus, although the coupling constant varies with temperature, this system is quenched into deep minima of the frustrated antiferromagnet free energy landscape for all temperatures (but the dynamics are slow for different reasons at different temperatures). Again, the colloidal sample exhibits interesting differences with respect to traditional atomic magnetic systems. In this contribution, we have shown how quasi-2D buckled colloidal monolayers on a triangular lattice can be induced to exhibit antiferromagnetic and para- and ferromagnetic behavior. A novel colloidal Ising system was created via the introduction of short-range, temperature-tunable entropic depletion attractions. Note that here we employed temperature-sensitive rod-like micelles as depletants, but, in principle, other depletants could work as well, e.g., microgel spheres whose size is temperature-dependent. We developed theoretical models with varying degrees of complexity that predict these effects, and we experimentally demonstrated the ideas using video microscopy. The structural experiments corroborated the central ideas. 
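The dynamical analyses throughout rest on spin-flip temporal autocorrelation functions. As a concrete aside, such a function can be sketched as follows; this is a minimal illustration, and the paper's exact estimator and normalization conventions are assumptions, not taken from the text:

```python
def spin_autocorrelation(trajectory):
    """C(dt) = <s_i(t) s_i(t + dt)>, averaged over particles i and start times t.

    trajectory: list of frames; each frame is a list of +/-1 "spin" values.
    Returns a list [C(0), C(1), ..., C(n_frames - 1)].
    """
    n_frames = len(trajectory)
    c = []
    for dt in range(n_frames):
        vals = [
            s0 * s1
            for t in range(n_frames - dt)
            for s0, s1 in zip(trajectory[t], trajectory[t + dt])
        ]
        c.append(sum(vals) / len(vals))
    return c

# Frozen (dynamically arrested) spins never decorrelate:
frozen = [[1, -1, 1]] * 5
assert spin_autocorrelation(frozen) == [1.0, 1.0, 1.0, 1.0, 1.0]
```

In an equilibrated sample, C(dt) decays toward zero; a plateau near 1, as in the `frozen` example, is the signature of dynamic arrest discussed above.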
Additional dynamical measurements of sample spin-flip temporal autocorrelation functions showed that dynamic arrest is an intrinsic feature of the buckled colloid with depletion that arises when entropic attraction to the sample walls becomes important. The colloidal system thus offers interesting similarities and differences with respect to traditional atomic Ising systems and even to active matter systems^91,92 that are exciting to understand. Another potentially interesting connection worth exploring concerns the similarities of the present system to glassy systems that exhibit the reentrant glass phenomenon,^17,18,25,26 e.g., which is also driven by increasingly strong depletion interactions. In future experiments, it should be possible to ameliorate particle–wall attractions, which will facilitate cleaner studies of the nature of the crossover from one phase to another, as well as a better comparison of systems with rigid lattices vs deformable lattices. Moreover, it should be interesting to study “spin” structure and dynamics near defects, such as grain boundaries, and thereby examine the role of disorder in influencing relaxation, frustration, and phase behavior; potentially, it may even be possible to use optical traps to create systems that have some spin-glass character. Finally, deeper examination of system dynamics (beyond single-spin flipping) using analysis techniques developed for protein systems is under way and could prove interesting. See the supplementary material for the analytical form of the pair potential (depletion potential), discussion of stuck particles, details of numerical calculations, and details of experimental measurements (i.e., βUmin measurement, L/D, H/D, and extracted fit parameters from spin-flip autocorrelation functions). We are happy to acknowledge many valuable discussions with Peter Collings, Yair Shokef, Piotr Habdas, Wei-Shao Wei, Alexis de la Cotte, Sophie Ettinger, Charlotte Slaughter, Winston Peloso, and Yihao Chen. 
We gratefully acknowledge the financial support from the U.S. National Science Foundation through Grant No. DMR2003659 (A.G.Y.) and the MRSEC under Grant No. DMR1720530, including its optical microscopy shared facility (A.G.Y.); the National Natural Science Foundation of China under Grant No. 1227041716 (X.M.); the National Key Research and Development Program of China under Grant No. 2022YFA1405002 (X.M.); the Stable Support Plan Program of Shenzhen Natural Science Fund under Grant No. 20220815094750002 (X.M.), and the Science and Engineering Research Board of Government of India under Grant No. SRG/2021/001077 (C.K.M.). Conflict of Interest The authors have no conflicts to disclose. Author Contributions Analisa Hill: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (lead); Software (lead); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). Michio Tanaka: Conceptualization (supporting); Formal analysis (supporting); Methodology (supporting); Software (supporting); Validation (lead); Visualization (supporting); Writing – review & editing (lead). Kevin B. Aptowicz: Conceptualization (supporting); Methodology (supporting); Visualization (supporting); Writing – review & editing (supporting). Chandan K. Mishra: Conceptualization (supporting); Methodology (supporting); Writing – review & editing (supporting). Arjun G. Yodh: Conceptualization (lead); Funding acquisition (lead); Methodology (lead); Project administration (lead); Resources (lead); Supervision (lead); Visualization (supporting); Writing – original draft (lead); Writing – review & editing (lead). Xiaoguang Ma: Conceptualization (lead); Methodology (lead); Project administration (equal); Supervision (supporting); Writing - review & editing (lead). The data that support the findings of this study are available from the corresponding author upon reasonable request. 
Bright and dark particles populate each image. Bright particles (near the top cover slip) are located in the microscope’s focal plane, and the dark particles (near the bottom cover slip) are slightly out of focus. We determine the up–down assignments associated with the two “Ising” (buckled) states, s[i] = ±1, by creating a two-dimensional histogram of particle brightness (intensity), I, vs time. At each time, t, the ensemble of particle brightnesses exhibits a bimodal distribution (see the supplementary material, Fig. S3). To assign a particle as “spin” up or down, a threshold brightness, I[cut](t), is calculated using the two peaks in I(t). In practice, we typically find that the two peaks are at the 25th and 75th percentiles of the I(t) distribution; the up/down cut is set halfway between these two peaks. To check that our assignments are correct, we overlaid them with experimental images and manually inspected them to confirm matching. In our more realistic calculation, the system free energy is computed from the product of the phase space of the central particle and that of its first nearest neighbors. This scheme requires averaging over all possible up/down configurations of the first and the second nearest neighbors of the central particle (see the supplementary material, Fig. S6). The second nearest neighbors must be included because the energy of a first nearest neighbor depends on the configuration of its own neighbors (which includes the central particle and some of the first and second nearest neighbors of the central particle). The realistic approach is briefly outlined below. We first write the partition function for a single particle, i, as a three-dimensional integral. Here, {s[j]} represents the set of spins of the six nearest neighbors of particle i; V[i] is the volume of the cage made by the nearest neighbors of particle i; particle i is free to move in this cage but is restricted to the upper (lower) half of the cell if s[i] = +1 (s[i] = −1).
U[i](r) is the potential felt by particle i at position r due to its six neighbors and the two walls (all of which, for our case, could be mediated via depletion interactions). To simplify the decoupling of this integral, we make the approximation that particle i’s nearest neighbors remain exactly on their lattice positions, in the upper (lower) buckled plane, when their spins are +1 (−1). We approximate the bulk partition function of N particles as a product over the partition function of each particle in the cage of its nearest neighbors. Here, S represents a specified set of all spins spanning the system, and again, {s[j]} is the set of six nearest neighbors of particle i. The total free energy of the whole system is then F(S). Our goal is to describe the system with an Ising model and to choose the optimal coupling J such that the Ising energy term, E(S; J), provides a good approximation for F(S) up to a constant independent of spin configuration. To accomplish this goal, we require that our Ising energy, with optimized J, should approximately capture the energy differences between any two global spin states (i.e., between “typical” global spin states) wherein only the central particle has flipped. We require E(S[k]^+; J) − E(S[k]^−; J) ≈ F(S[k]^+) − F(S[k]^−) for all k. Here, S[k]^+ (S[k]^−) denotes the global spin state wherein the central particle is up (down). The state of the remaining (neighboring) particles is indexed by k. Note that when this condition is true, it follows that E(S; J) ≈ F(S) + const for all states S. In writing single-particle partition functions of the form in Eq. (B1), we have assumed that particles are free to move only within the cage of their static nearest neighbors whose z-positions are determined by their spins. Therefore, flipping the spin of the central particle affects only the partition function of the central particle and those of its six nearest neighbors.
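Because the extraction dropped most symbols from this derivation, the following is a hedged reconstruction of the key formulas; the symbol names (Z[i], V[i], U[i], ε, w[k]) and the equation tags are assumptions, not verbatim from the paper:

```latex
% Single-particle partition function in the cage of its fixed neighbors (assumed Eq. (B1)):
Z_i(\{s_j\}) = \int_{V_i(\{s_j\})} e^{-\beta U_i(\mathbf{r};\,\{s_j\})}\,\mathrm{d}^3 r

% Product approximation for N particles, and the total free energy:
Z(S) \approx \prod_{i=1}^{N} Z_i(\{s_j\}_i), \qquad
F(S) = -k_{\mathrm{B}} T \ln Z(S)

% Matching condition defining the optimal Ising coupling J:
E(S_k^{+};J) - E(S_k^{-};J) \approx F(S_k^{+}) - F(S_k^{-}) \quad \text{for all } k

% Writing E(S;J) = J\,\varepsilon(S), with \varepsilon(S) = \sum_{\langle ij\rangle} s_i s_j,
% each neighbor configuration k yields an optimal J_k, and the system value is a
% Boltzmann-weighted average:
J_k = \frac{F(S_k^{+}) - F(S_k^{-})}{\varepsilon(S_k^{+}) - \varepsilon(S_k^{-})}, \qquad
J = \frac{\sum_k w_k\, J_k}{\sum_k w_k}
```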
Furthermore, computation of the partition functions of the six nearest neighbors only requires knowledge of the spin state of the central particle and those of the 18 particles comprising the first and second nearest-neighbor rings. Thus, to compute the free energy differences corresponding to a single spin flip, we need only consider the combined 18 spins of the first and second nearest-neighbor rings. Accordingly, our state index k needs only to index the state of these 18 particles and will take on one of 2^18 possible configurations. Our calculation task reduces to finding a value of J that best satisfies the matching condition above for all 2^18 possible values of k. For each k, there is an optimal J[k], which we determine from the corresponding free-energy difference. As explained earlier, when the central particle flips, only the partition function of the central particle and those of its six nearest neighbors change. Accordingly, we can write the free-energy difference in a form that involves a computable product of partition functions, where the relevant set is the nearest neighbors of the central particle. To compute the optimal J for the whole system, we average J[k] over all values of k (i.e., over all possible configurations of the 18 neighbors) with weight w[k], the Boltzmann weight for observing a particular configuration out to the second nearest neighbors; the weight accounts for cases where the center particle is up and down. This approach effectively incorporates the relative probability of observing each nearest-neighbor spin configuration. Note that the above sums exclude k that correspond to spin states where three of the six central bonds are similar (i.e., frustrated for the antiferromagnet). For these nearest-neighbor configurations, there is no energy difference when the central particle flips, and the denominator of Eq. (B6) is zero. For examples of the above computations, see the supplementary material.
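Returning briefly to the image-analysis appendix above: the up/down "spin" assignment from particle brightness (a bimodal intensity distribution, with the cut set halfway between the two peaks, approximated by the 25th and 75th percentiles) can be sketched as follows. Function and variable names here are illustrative, not from the paper:

```python
import random
import statistics

def assign_spins(intensities):
    """Assign +1/-1 labels from a bimodal brightness distribution.

    The cut is placed halfway between the 25th and 75th percentiles of I,
    mirroring the procedure described in the text.
    """
    q = statistics.quantiles(intensities, n=4)  # [25th, 50th, 75th] percentiles
    i_cut = 0.5 * (q[0] + q[2])
    return [1 if i > i_cut else -1 for i in intensities]

# Two well-separated brightness populations yield clean labels:
rng = random.Random(0)
bright = [rng.gauss(200, 5) for _ in range(50)]  # particles near the top wall
dark = [rng.gauss(50, 5) for _ in range(50)]     # particles near the bottom wall
spins = assign_spins(bright + dark)
assert all(s == 1 for s in spins[:50]) and all(s == -1 for s in spins[50:])
```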
In Wannier’s treatment of the triangular Ising antiferromagnet problem,^38 the Curie point, i.e., the transition between paramagnetic and ferromagnetic states, is given by setting μ, a convenience parameter defined by Wannier, equal to zero. μ is defined in Eq. (36) of Wannier’s paper: μ = 1 − 2 tanh(2βJ). Solving for βJ, we get βJ = (1/2) tanh^(−1)(1/2) ≈ 0.275. Thus, the value of βJ above which the system transitions to the ferromagnetic state is βJ ≈ 0.275. (Note that we have replaced Wannier’s symbol L with βJ, and therefore, Wannier’s definition of L is equivalent to our definition of βJ.) APPENDIX D: RELATIONSHIP BETWEEN MEAN NUMBER OF SIMILAR BONDS (⟨N[s]⟩) AND WANNIER’S ENERGY EXPRESSION Wannier derived an expression^80 for the total free energy of the system, U, normalized by −2NJ, as a function of βJ: f(βJ) ≡ −2U/(NJ). Here, U ≡ (J/2)∑s[i]s[j], where i, j are neighboring spins that are summed over (note that Wannier’s definition of J carries an extra factor of 2 compared to ours). Additionally, ∑s[i]s[j] = n[d] − n[s], where n[d] (n[s]) represents the total number of dissimilar (similar) bonds in the sample. Since every bond is either dissimilar or similar and since there are three bonds per particle, n[d] = 3N − n[s]. (Note that the number of bonds attached to each particle is 6; however, in each unit cell, there is one particle and three bonds, so the number of bonds per particle is 3.) Thus, ∑s[i]s[j] = 3N − 2n[s], U = (J/2)(3N − 2n[s]), and f(βJ) = 2n[s]/N − 3. As defined in Sec. II B, ⟨N[s]⟩ is the mean number of similar bonds attached to each particle, which has a maximum value of 6. Since ⟨N[s]⟩ = 2n[s]/N, ⟨N[s]⟩ can be related to Wannier’s energy expression: ⟨N[s]⟩ = f(βJ) + 3. With caveats about dynamic arrest, we attempted to combine all of the experimental data into a single plot (i.e., data from all H/D and temperatures). For this task, we started with the measured ⟨N[s]⟩ for each H/D and temperature.
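The relations in the appendices above are easy to verify numerically; a short sketch (function names are mine) is:

```python
import math

# Curie point: Wannier's mu = 1 - 2*tanh(2*beta*J) = 0  =>  beta*J = atanh(1/2)/2
beta_J_c = 0.5 * math.atanh(0.5)
assert abs(beta_J_c - 0.275) < 1e-3

# Bond bookkeeping on the triangular lattice: 3N bonds total, n_s of them "similar".
# U = (J/2)(3N - 2 n_s),  f = -2U/(N J) = 2 n_s/N - 3,  <N_s> = 2 n_s/N = f + 3.
def mean_similar_bonds(n_s, n_particles):
    return 2 * n_s / n_particles

def wannier_f(n_s, n_particles):
    return 2 * n_s / n_particles - 3

# Sanity check at infinite temperature (random spins): half of the 3N bonds are
# similar on average, so n_s = 1.5 N, giving <N_s> = 3 and f = 0.
assert mean_similar_bonds(1500, 1000) == 3.0
assert wannier_f(1500, 1000) == 0.0
assert mean_similar_bonds(1500, 1000) == wannier_f(1500, 1000) + 3
```

The last identity, ⟨N[s]⟩ = f(βJ) + 3, is the one-to-one map used to assign a βJ to each measured ⟨N[s]⟩ in the master plot.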
For each measured ⟨N[s]⟩, a one-to-one correspondence with βJ can be established using Wannier’s theoretical calculation (see Appendix D). Note that by using ⟨N[s]⟩ (rather than temperature) to characterize each experimental system, this data presentation is in some sense less sensitive to dynamic arrest; the arrested systems are effectively trapped at lower temperatures (with corresponding lower ⟨N[s]⟩). The use of ⟨N[s]⟩ thus approximately assigns an “equilibrium” state at a lower temperature to the sample. Based on this approach, in the supplementary material, Fig. S5, we show a master plot of all samples (all possible H/D and temperatures). The master plot also contains Wannier’s theory curve ( Appendix D). With this plot, it is apparent that the experimental samples collectively span from the antiferromagnetic to paramagnetic regime (approaching the ferromagnetic regime). Thus, with caveats about the lack of equilibration due to dynamic arrest, this master plot further corroborates our central idea. A. M. T. C. , and A. G. , “ Geometric frustration in buckled colloidal monolayers , and , “ Modes of surface premelting in colloidal crystals composed of attractive particles N. Y. A. M. , and A. G. , “ Melting of two-dimensional tunable-diameter colloidal crystals Phys. Rev. E A. E. D. G. , “ Melting of metastable crystallites in charge-stabilized colloidal suspensions Phys. Rev. Lett. A. M. M. F. P. J. , and A. G. , “ Premelting at defects within bulk colloidal crystals C. A. D. H. van Winkle , “ Experimental observation of two-stage melting in a classical two-dimensional screened Coulomb system Phys. Rev. Lett. C. A. W. O. , and R. A. , “ Comparison of melting in three and two dimensions: Microscopy of colloidal spheres Phys. Rev. B van Blaaderen , and , “ Template-directed colloidal crystallization J. J. S. E. , and M. A. , “ Electric field mediated assembly of three dimensional equilibrium colloidal crystals Soft Matter , and G. M. 
, “ Two-and three-dimensional crystallization of polymeric microspheres by micromolding in capillaries Adv. Mater. E. R. P. N. , and D. A. , “ Real-space imaging of nucleation and growth in colloidal crystallization S. R. , and C. P. , “ Structure and kinetics in the freezing of nearly hard spheres Soft Matter J. L. van Megen , “ Crystallization kinetics of suspensions of hard colloidal spheres Phys. Rev. E , and , “ Visualizing kinetic pathways of homogeneous nucleation in colloidal crystallization Nat. Phys. J. S. , and D. J. , “ Crystallization of DNA-coated colloids Nat. Commun. , and , “ Colloidal diffusion over a quenched two-dimensional random potential Soft Matter C. K. , and , “ Two-step glass transition induced by attractive interactions in quasi-two-dimensional suspensions of ellipsoidal particles Phys. Rev. Lett. , “ Re-entrant glass transition in a colloid-polymer mixture with depletion attractions Phys. Rev. Lett. E. D. R. J. S. S. S. D. J. Z. S. J. L. et al, “ Structure-property relationships from universal signatures of plasticity in disordered solids N. L. C. E. , and M. F. , “ Normal modes and density of states of disordered colloidal solids P. J. M. D. M. A. , and A. G. , “ Physics in ordered and disordered colloidal matter composed of poly (n-isopropylacrylamide) microgel particles Rep. Prog. Phys. M. L. P. J. W. G. A. J. , and A. G. , “ Measurement of correlations between low-frequency vibrational modes and particle rearrangements in quasi-two-dimensional colloidal glasses Phys. Rev. Lett. Z. S. R. J. S. S. S. A. J. , and A. G. , “ Heterogeneous activation, local structure, and softness in supercooled colloidal liquids Phys. Rev. Lett. P. J. , and A. G. , “ Cooperative rearrangement regions and dynamical heterogeneities in colloidal glasses with attractive versus repulsive interactions Phys. Rev. Lett. K. N. A. M. S. U. P. N. A. B. M. E. , and W. C. K. , “ Multiple glassy states in a simple model system C. K. , and A. G. 
, “ Structural and short-time vibrational properties of colloidal glasses and supercooled liquids in the vicinity of the re-entrant glass transition J. Chem. Phys. M. D. Z. S. , and A. G. , “ Vibrational properties of quasi-two-dimensional colloidal glasses with varying interparticle attraction Phys. Rev. E K. L. N. C. D. J. A. G. , and P. E. , “ Scaling of relaxation and excess entropy in plastically deformed amorphous solids Proc. Natl. Acad. Sci. U. S. A. J. C. D. G. , “ Methods of digital video microscopy for colloidal studies J. Colloid Interface Sci. , and , “ Thin colloidal crystals Phys. Rev. Lett. , “ Phase diagram of hard spheres confined between two parallel plates Phys. Rev. E D. R. , “ Buckling instabilities of a confined colloid crystal layer Phys. Rev. E P. J. M. A. T. C. , and A. G. , “ Influence of particle shape on bending rigidity of colloidal monolayer membranes and particle deposition during droplet evaporation in confined geometries Phys. Rev. Lett. , and , “ Finite-size effects on the closest packing of hard spheres Phys. Rev. Lett. , “ Freezing between two and three dimensions Phys. Rev. Lett. , “ Phase behaviour of hard spheres confined between parallel hard plates: Manipulation of colloidal crystal structures by confinement J. Phys.: Condens. Matter K. E. , and K. G. , “ Solid/solid phase transitions in confined thin films: A zero temperature approach J. Chem. Phys. G. H. , “ Antiferromagnetism. The triangular Ising net Phys. Rev. , “ Antiferro-ferrimagnatic transition in triangular Ising lattice J. Phys. Soc. Jpn. , and H. J. W. , “ Free energy of domain walls and order-disorder transition in a triangular lattice with anisotropic nearest-neighbor interactions Phys. Rev. E , “ Ising-model spin correlations on the triangular lattice J. Math. Phys. A. A. , and K. D. , “ Spirals and skyrmions in antiferromagnetic triangular lattices Phys. Rev. Mater. , and C. J. O. , “ Realizing colloidal artificial ice on arrays of optical traps Phys. Rev. 
Lett. A. P. D. A. , and A. J. , “ Investigation of the field induced antiferromagnetic phase transition in the frustrated magnet: Gadolinium gallium garnet Phys. Rev. Lett. A. P. D. A. P. L. D. J. , and A. J. , “ Frustration induced spin freezing in a site-ordered magnet: Gadolinium gallium garnet Phys. Rev. Lett. M. J. P. C. V. N. P. B. D. , and J. E. , “ Static critical behavior of the spin-freezing transition in the geometrically frustrated pyrochlore antiferromagnet Y[2]Mo[2]O[7] Phys. Rev. Lett. J. S. S. R. B. D. M. J. P. J. E. R. F. M. D. W. A. N. P. J. E. et al, “ Cooperative paramagnetism in the geometrically frustrated pyrochlore antiferromagnet Tb[2]Ti[2]O[7] Phys. Rev. Lett. Y. K. C. A. , and , “ Magnetic field induced transitions from spin glass to liquid to long range order in a 3D geometrically frustrated magnet Phys. Rev. Lett. O. A. , and McK Paul , “ Investigation of the low-temperature spin-liquid behavior of the frustrated magnet gadolinium gallium garnet Phys. Rev. Lett. J. A. Dalmas de Réotier P. C. M. et al, “ First-order transition in the spin dynamics of geometrically frustrated Yb[2]Ti[2]O[7] Phys. Rev. Lett. J. T. , “ Properties of a classical spin liquid: The Heisenberg pyrochlore antiferromagnet Phys. Rev. Lett. M. J. S. T. D. F. , and K. W. , “ Geometrical frustration in the ferromagnetic pyrochlore Ho[2]Ti[2]O[7] Phys. Rev. Lett. S. T. M. J. P. , “ Spin ice state in frustrated magnetic pyrochlore materials R. F. R. S. B. J. M. S. V. H. , and , “ Artificial ‘spin ice’in a geometrically frustrated lattice of nanoscale ferromagnetic islands , and , “ Glassy spin dynamics in geometrically frustrated buckled colloidal crystals Phys. Rev. X , and , “ Spin-orbital glass transition in a model of a frustrated pyrochlore magnet without quenched disorder Phys. Rev. Lett. A. P. , “ Spin glasses: Experimental facts, theoretical concepts, and open questions Rev. Mod. Phys. R. P. , “ Probing antiferromagnetic coupling between nanomagnets Phys. Rev. 
B Z. Y. S. H. et al, “ Spin-glass ground state in a triangular-lattice compound YbZnGaO[4] Phys. Rev. Lett. S. H. C. H. R. L. , and L. J. , “ Advances in artificial spin ice Nat. Rev. Phys. , and , “ Frustration by design Phys. Today , “ Engineering of frustration in colloidal artificial ices realized on microfeatured grooved lattices Nat. Commun. D. Y. , “ Energetics and the ground state quest in an artificial triangular colloidal ice Phys. Rev. Mater. , and , “ Spin direction tunable topological transition in two-dimensional frustrate antiferromagnetic triangular lattice T-FeO[2] monolayer Appl. Phys. Lett. C. J. O. , and , “ Colloquium: Ice rule and emergent frustration in particle ice and beyond Rev. Mod. Phys. H. J. W. , “ Universal behaviour of domain wall meandering J. Phys.: Condens. Matter T. C. , “ Stripes, zigzags, and slow dynamics in buckled hard spheres Phys. Rev. Lett. A. G. , and T. C. , “ Buckled colloidal monolayers connect geometric frustration in soft and hard matter Soft Matter , “ Attraction controls the entropy of fluctuations in isosceles triangular networks , “ Attraction controls the inversion of order by disorder in buckled colloidal monolayers Phys. Rev. Lett. , “ Topological order in an antiferromagnetic tetratic phase Phys. Rev. Lett. , and T. C. , “ Order by disorder in the antiferromagnetic Ising model on an elastic triangular lattice Proc. Natl. Acad. Sci. U. S. A. A. D. A. G. , and D. J. , “ Phase diagrams of nearly-hard-sphere binary colloids Phys. Rev. E J. R. D. W. A. J. R. A. , and A. D. , “ Imaging the sublimation dynamics of colloidal crystallites J. R. A. D. , “ Experimental evidence for two-step nucleation in colloidal crystallization Phys. Rev. Lett. M. D. Z. S. P. J. , and A. G. , “ Tunable depletion potentials driven by shape variation of surfactant micelles Phys. Rev. E J. Y. , “ Depletion interactions produced by nonadsorbing charged and uncharged spheroids J. Colloid Interface Sci. P. V. 
, “ Solving the triangular Ising antiferromagnet by simple mean field Eur. Phys. J. B , “ Modern series analysis techniques and the relation to Monte-Carlo results on similar models ,” in Computer Simulation Studies in Condensed-Matter Physics VIII ), pp. G. H. , “ Antiferromagnetism. The triangular Ising net Phys. Rev. B A. D. A. G. , and D. J. , “ Entropic control of particle motion using passive surface microstructures P. D. J. L. A. G. , and D. J. , “ Entropically driven surface phase separation in binary colloidal mixtures Phys. Rev. Lett. , and J. Y. , “ Prediction and measurement of the interparticle depletion interaction next to a flat wall J. Colloid Interface Sci. H. N. W. Colloids and the Depletion Interaction ), Vol. 833. A. D. A. G. , “ Entropic confinement of colloidal spheres in corners on silicon substrates M. E. , and H. N. W. , “ Depletion stabilization by semidilute rods Phys. Rev. Lett. , and , “ Size and shape of micelles studied by means of SANS, PCS, and FCS , “ Elastic antiferromagnets on a triangular lattice J. Phys. C: Solid State Phys. P. L. , and J. L. , “ Monte Carlo study of a compressible Ising antiferromagnet on a triangular lattice Phys. Rev. B T. H. , and , “ Local spin resonance and spin-peierls-like phase transition in a geometrically frustrated antiferromagnet Phys. Rev. Lett. I. S. , and , “ Engineering bacterial vortex lattice via direct laser lithography Nat. Commun. F. G. , and R. E. , “ Ferromagnetic and antiferromagnetic order in bacterial vortex lattices Nat. Phys. © 2023 Author(s). Published under an exclusive license by AIP Publishing.
Percentage Change Calculator

What is the Percentage Change Calculator?

The Percentage Change Calculator is a simple tool used to calculate the percentage change of a number or value, such as a yield, discount, or sales figure.

How to calculate 10% off, or the percent change, of 1 dollar or pound

To calculate 10% of a number (for sales tax, credit-card cash-back bonuses, interest, discounts, interest per annum, dollars, pounds, coupons, 10% off, or 10% of a price), we use the formula above to find the answer. The equation for the calculation is simple and direct. You can also compute other values by entering any number you want into the calculator above.

To use the percent change calculator tool, first enter the fractional value you want to calculate. For example, 5% of 20 corresponds to the fraction x = (5/100) × 20. To find the value of x, enter 5 in the first box and 20 in the second box, and the answer, 1, will be shown in the result box.

Percentage off calculator

Common questions

How do I work out percentages without a calculator?
Answer: You work out percentages by using the formula above.

How do I calculate the % of a number?
Answer: By using the percentage formula and equation above.

What % of a number is another number?
Answer: Use the calculator above to compute that.

How do I figure out 10% interest per annum?
Answer: You work out 10% interest per annum by using the simple-interest formula I = P × T × R/100.
Here R is the rate (10%), P is the principal, and T is the time.

What is the formula for the % of something, or of whole numbers?
Answer: Use the tool above to compute that.

What is the 10% sales-tax formula?
Answer: 10% sales tax is calculated by taking 10% of your sales as tax.

How do I calculate a gross-profit or weight-loss percentage?
Answer: Use the tool above to compute that.

How much is 10 percent off 1 dollar?
Answer: To find how much 10 percent off 1 dollar is, simply use the calculator to get the solution.

How do I calculate 10% of a price?
Answer: Calculate 10% of a price by entering the price in the calculator to get the %.

How do I calculate a 10% discount in pounds?
Answer: Calculate a 10% discount in pounds by entering the price in the calculator to get the discount.

What is 10 percent off of 1 dollar?
Answer: Calculate 10 percent off of 1 dollar by using the tool.

This tool can also be used as a discount application for calculating shopping discounts, coupons, body fat, gross profit, weight loss, tax, population increase and decrease, sales profit, and credit-card cash-back bonuses. Once you know the values, determining the % is easy. If you spot an error on this site, we would be grateful if you could report it by using the contact email provided.
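The formulas behind the calculator can be sketched in a few lines; the function names are mine, chosen for illustration:

```python
def percent_of(p, value):
    """p% of value, e.g. 5% of 20 -> 1.0"""
    return p / 100 * value

def percent_off(p, price):
    """Price after a p% discount."""
    return price - percent_of(p, price)

def percent_change(old, new):
    """Signed percentage change from old to new."""
    return (new - old) / old * 100

assert percent_of(5, 20) == 1.0              # the 5% of 20 example
assert abs(percent_off(10, 1.0) - 0.9) < 1e-9  # 10% off 1 dollar
assert percent_change(20, 22) == 10.0        # 20 -> 22 is a 10% increase
```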
How do you find the median of 30 numbers?

The median is the central number of a data set. Arrange the data points from smallest to largest and locate the central number; with 30 numbers there are two middle values (the 15th and 16th), and the median is their mean. (Adding up all of the numbers and dividing by how many there are gives the mean, not the median.)

What is the median of 5 numbers?

You have five numbers, so you divide 5 by 2 to get 2.5 and round up to 3. The number in the third position is the median.

What is the median of a set of numbers?

The median is the middle number in a sorted, ascending or descending, list of numbers and can be more descriptive of that data set than the average. The median is sometimes used instead of the mean when there are outliers in the sequence that might skew the average of the values.

What is the median of 4, 1, and 7?

Median: the middle number, found by ordering all data points and picking out the one in the middle (or, if there are two middle numbers, taking the mean of those two). Example: the median of 4, 1, and 7 is 4 because when the numbers are put in order (1, 4, 7), the number 4 is in the middle.

What is the median of the first 10 even numbers?

The first 10 even numbers are 2, 4, …, 20. With 10 values, the median is the mean of the (10/2)th and (10/2 + 1)th values, i.e., the 5th and 6th: (10 + 12)/2 = 11.

Why do you add 1 when finding the median?

If there is an even number of items of data, there will be two numbers in the middle, and the median is the number halfway between them. If there are many items of data, add 1 to the number of items and divide by 2 to find which item will be the median.

What is the median when the two middle numbers are both 25?

The median is the middle number; for an even set of numbers there will be two middle numbers, and we can average them to find the median. Since 25 and 25 are both middle numbers, their average, 25, is the median.

How do you find the median of two numbers?

If there is an even number of items of data, there will be two numbers in the middle.
The median is the number halfway between these two numbers. To find the median, put all the numbers into ascending order and work toward the middle by crossing off numbers at each end.

Is the median a measure of central tendency?

Measures of central tendency help you find the middle, or the average, of a data set. The three most common measures of central tendency are the mean, median, and mode. The mode is the most frequent value. The median is the middle number in an ordered data set.

How do you find the median if there are two middle numbers?

If there is an even number of numbers, add the two middle numbers and divide by 2. The result is the median.

What is the median of 1 to 10?

Complete step-by-step answer: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. The number of terms is even, so the median is the mean of the 5th and 6th terms: (5 + 6)/2 = 5.5. Therefore, the median of the first 10 natural numbers is 5.5.

What is the median number in a calculator?

Median Calculator: in statistics, the middle number of a set of data is called the median. This online median calculator can be used to find the number that separates the first half of the number set from the second half, which is the middle number.

When is the median in the middle of the data?

If there is an odd number of data values, the median is the value in the middle. If there is an even number of data values, the median is the mean of the two data values in the middle. For the data set 1, 1, 2, 5, 6, 6, 9 the median is 5. For the data set 1, 1, 2, 6, 6, 9 the median is 4.

What’s the difference between the median and mode in Excel?

If there are two numbers in the middle, the median is the average of those two numbers. The mode is the number in a data set that occurs most frequently. Count how many times each number occurs in the data set; the mode is the number with the highest tally. It’s OK if there is more than one mode.

How do you calculate the median of a number of observations?

Put simply, it is the value at the center of the sorted observations.
This calculator uses two different formulas for calculating the median, depending on whether the number of observations n is odd or even. When n is odd, the median is the ((n + 1)/2)th value of the sorted observations. When n is even, the median is the mean of the (n/2)th and (n/2 + 1)th sorted values.
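The odd/even rule just described can be written directly in code; the sketch below is a minimal Python implementation, checked against the standard library's statistics.median:

```python
# Minimal median following the odd/even rule described above; Python's
# standard-library statistics.median uses the same convention.
import statistics

def median(values):
    """Middle sorted value for an odd count; mean of the two middle
    values for an even count."""
    data = sorted(values)
    n = len(data)
    mid = n // 2
    if n % 2 == 1:                            # odd count: single middle value
        return data[mid]
    return (data[mid - 1] + data[mid]) / 2    # even count: average the two middles

print(median([4, 1, 7]))                              # 4
print(median([2, 4, 6, 8, 10, 12, 14, 16, 18, 20]))   # 11.0
print(median(list(range(1, 11))))                     # 5.5
```

The examples reproduce the worked answers above: the median of 4, 1, 7 is 4; the median of the first 10 even numbers is 11; and the median of 1 to 10 is 5.5.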
Optimal Power Flow in Electrical Microgrids
1. Introduction
An optimal power flow (OPF) consists of solving the equations which characterize an electrical power system (active and reactive power at each node) while adjusting the values of the control variables (voltages or powers) in order to optimize a specific system parameter, represented by a target function [1]. A system usually includes state variables (unknown quantities) and independent variables (known data). Control variables can be any of the independent variables in the system, and are selected depending on the purpose of the analysis. The target function of a power system for the OPF study is determined by the parameter which is to be optimized or improved; some typical parameters are as follows [2]:
· Minimizing the generation cost
· Minimizing transmission losses of active power
· Minimizing transmission losses of reactive power
· Minimizing the interruption costs by flows
· Minimizing the reprogramming number of controls
· Minimizing pollutant emissions of thermal generators
In order to improve the features of power systems and reduce line losses, electrical microgrids have appeared in recent years. Electrical microgrids are small electrical systems that may have autonomous management of power generation and energy distribution, can operate either connected to or independently of the electrical grid, and allow the potential implementation of renewable energy sources [3]. To enhance features and achieve an optimal performance in electrical microgrids, authors such as [4]-[8] have implemented OPF with various mathematical algorithms. This paper presents a state of the art regarding the mathematical methods which have been implemented in the analysis of OPF and its application in electrical microgrids.
Subsequently, a mathematical algorithm based on the gradient method is proposed for the solution of OPF in low-power microgrids, in order to improve the voltage profiles and consequently reduce the active power losses. Finally, the application results of the gradient method in a low-power microgrid with photovoltaic generation are presented. 2. State of the Art An OPF solution provides information regarding planning, economic programming, system control and the distribution of future expansion [9]. Sections 2.1 and 2.2 present a compilation of the most relevant research on OPF analysis methods in power systems and their application in electrical microgrids. 2.1. Optimal Power Flow Methods These mathematical methods have been developed for over 40 years, starting with classical mathematical optimization techniques [9]-[11] and more recently reaching genetic and artificial-intelligence algorithms [4] [6] [9] [12] [13]. OPF mathematical methods are shown in Figure 1 and Figure 2. This section presents the most important contributions in OPF methods. Figure 1. OPF methods (Classic optimization). In 1968 H. W. Dommel and W. F. Tinney introduced the concept of OPF in power systems. The authors present a solution method for OPF using the Newton method with an optimal adjustment of the control parameters through the gradient method; this is done with the purpose of reducing the losses of a three-node system [10]. In 1973 A. M. H. Rashed and D. H. Kelly presented a mathematical algorithm to minimize losses implementing a combination of Lagrange multipliers and Newton’s method, replacing the Jacobian with a Hessian matrix. The proposed method has a faster convergence, requiring fewer steps compared to other methods [14]. In 1993 Yu-Chi Wu, et al. introduced a method to solve OPF in order to optimize an economic dispatch.
The study combines an algorithm based on the interior point method, arguing that the methods proposed years before did not meet the requirements of large-scale power systems. The authors concluded that the method provided robust and accurate results [15]. Subsequently, in 1995 J. Kennedy and R. Eberhart conducted research on the optimization of nonlinear functions using the “Particle Swarm Optimization” (PSO) method. The authors state that this method could be used to solve a myriad of problems, including the OPF in power systems [12]. In 2001 C.A. Roa-Sepúlveda and B.J. Pavez-Lazo introduced an OPF method to solve the economic dispatch problem based on linear programming. This method is applied to two IEEE systems of six and thirty nodes without considering the losses; they concluded that OPFs could be solved without a Jacobian matrix, contributing to determine any objective function [7]. In 2004 A. EL-Dib, et al. focused their research on applying PSO to solve the OPF as an optimization problem. The authors argue that these studies are the basis for determining the voltage stability and the maximum system power [13]. In 2006 R. Mageshvaran, et al. presented three different techniques to solve OPF (PSO, CPSO, HDE), proposing useful and encouraging solutions that were verified in Matlab. The objective of the research was the development of new techniques that allow a better operation; this was done with the purpose of ensuring an economic generation and also meeting the demand of active and reactive power through simple and efficient algorithms [9]. In 2008 J. M. López Lezama and L. A. Gallego Pareja developed an algorithm to solve OPF, employing the gradient method to reduce active power losses in power systems. In their research the authors reduced the number of iterations by means of a parabolic step-size adjustment method [11]. In 2012 C. H. Fujisawa, et al.
concentrated their research on direct-current OPF with losses and on alternating-current OPF; they further presented the mathematical characteristics of each one and their respective advantages and disadvantages. The authors determined that the alternating-current model is the most interesting, since it solves problems through an iterative process applying various methods of linear programming [16]. Section 2.2 presents OPF applications in power systems, specifically in electrical microgrids. 2.2. Optimal Power Flow Applied to Electrical Microgrids The implementation of OPF in electrical microgrids started years ago, at first to control the optimal power dispatch [8] [11] and subsequently to achieve loss minimization in power systems [4] [5]. In 2009 E. Sortomme and M. A. El-Sharkawi conducted studies on a microgrid through Particle Swarm Optimization (PSO); this was done in order to achieve an optimal dispatch of controllable loads and effectively use the energy storage devices that microgrids contain. The authors emphasized that the research was a preliminary study in which they concluded that peak loads were reduced by 10 MW, generating an economic improvement of 14% [8]. In 2010 P. Wan and M. D. Lemmon considered a distributed event-triggered optimization algorithm to solve OPF in microgrids. The optimization was performed by simulations in SimPowerSystems (Matlab) implementing the CERTS microgrid model. In this study the authors reduced the error of the algorithm and concluded that the optimization problem in microgrids could be solved through this method [6]. In 2013 E. Dall’Anese, et al. applied several OPF formulations to electrical microgrids in order to minimize losses and the cost of the energy supplied by distributed generation (small energy sources capable of supplying user demand); this was done with semidefinite relaxation (SDR) techniques, which offer the possibility of obtaining the optimal solution for the microgrid [5].
In 2013 Yoash Levron, Josep M. Guerrero and Yuval Beck introduced a solution for OPF that considered a microgrid with energy storage. The authors claim that classical solutions of OPF were not adapted to microgrid analysis, in particular due to the presence of distributed generation and of storage devices. The optimal power flow developed by the authors considered the limitations of the storage devices and those of the microgrid regarding voltages, currents and powers [4]. 3. Algorithm Based on the Gradient Method In order to analyze electrical microgrids by OPF, a mathematical algorithm based on the gradient method is proposed, following the study developed in [10]. Figure 3 shows the proposed flowchart of an OPF solved by the gradient method. The mathematical algorithm is based on partial derivatives and has the property of being adaptable to analyze any structure of the microgrid (isolated or interconnected with the grid). In Section 3.1 the general way of formulating an OPF is shown. Afterwards, in Section 3.2 the specific methodology proposed for the formulation and solution of an OPF is explained, which is based on the gradient method with the purpose of improving the voltage profiles and reducing active power losses in electrical microgrids. Figure 3. Flow diagram of an OPF based on the gradient method. 3.1. General Formulation of an OPF According to [10], the OPF is based on Equation (1): minimize f(x, u) subject to g(x, u) = 0 and h(x, u) ≤ 0 (1), where (x) are the state variables and (u) the control variables [17]. The control variables can be adjusted without violating the equality and inequality restrictions. Therefore, the problem of OPF is to adjust the control variables u in order to optimize a specific parameter of the system represented by the objective function f(x, u) [10].
The equality restrictions g(x, u) of Equation (1) correspond to the active- and reactive-power equations of the system nodes. Finally, the inequality restrictions are the minimum and maximum allowed values for the control variables. 3.2. Gradient Method for Improving the Voltage Profiles and Reducing Active Power Losses in a Microgrid Improving the voltage profiles and reducing losses in the microgrid helps to improve performance, to extend the useful life of the conductors, to improve energy quality and to reduce system costs [18]. The control variables are defined as the voltages of the generation nodes and of the slack node, since the aim is to improve the system voltage profiles. In a power system, the power of the slack node is not specified, because that node assumes the losses of the system [6]. Consequently, the microgrid losses are minimized when the powers of the nodes are kept constant and the power generation of the slack is minimized. According to the previous idea, the function f(x, u) is taken as the slack node power of the microgrid. The equality restrictions g(x, u) of Equation (1) will be the typical power equations which define each node in a power system [19]: The inequality restrictions (5) are the minimum and maximum voltages allowed in the microgrid. Considering that the proposed OPF is particularly intended for low-voltage microgrids and that, for the Colombian case, these levels are 120/208 V with limits of +5% (126 V) and −10% (108 V) [20]: Finally, Equation (6) represents the formulation of an OPF to improve the voltage profiles and reduce the active power losses in a low-voltage electrical microgrid. 3.3. Mathematical Algorithm for OPF Based on the Gradient Method for an Electrical Microgrid The solution in [10] is based on an equation named the Lagrangian (7), which includes a Lagrange multiplier λ. If the minimum necessary conditions, or Karush-Kuhn-Tucker conditions, are applied to Equation (7) [11], the systems of Equations (8), (9) and (10) are obtained.
The optimal microgrid solution is determined by solving this system of equations. The gradient in (9) measures the sensitivity of the objective function with regard to changes in the control variables. Moving in the direction opposite to the gradient, a feasible point with a lower function value is reached [11]. Repeating this process leads to the optimal solution of the system. The new control values are updated by (11) to continue the iterations until the error is reduced to a minimum. In Equation (11), the variable c determines the size of the step that the algorithm takes in the direction opposite to the gradient. If a low value of c is chosen, convergence is ensured, but many iterations are needed. Instead, if a high value of c is chosen, it will produce oscillations around the optimum [10]. 4. Case of Study 4.1. Electrical Microgrid Proposed Figure 4 shows the proposed microgrid for the case of study. This system is proposed for its simplicity and the presence of an unconventional generator. The microgrid has a power of 800 W, fed by the main grid and by a photovoltaic generator which, at a specific temperature and irradiance, generates 450 W (interconnected microgrid). In order to illustrate the improvements in the system with the OPF, Figure 5 shows the losses of active power before (13.1589) and after (11.7640) the application of the algorithm. 4.2. Application of the Algorithm and Results Figure 4. Microgrid for the case of study. Figure 5. Microgrid losses before and after the OPF implementation. The mathematical algorithm of the proposed OPF was developed in Matlab, selecting different values for the step size (c = 0.1, c = 0.3 and c = 1.38) with an error tolerance of 0.0001 (ε = 0.0001). The mathematical algorithm minimizes the difference between the power generated by the grid and the photovoltaic panel, so that losses are minimized.
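As an illustration of the update rule of Equation (11) and of the role of the step size c, the sketch below runs a plain gradient descent on a toy quadratic objective. This is not the paper's actual power-flow equations; the target voltages are simply the optimal values reported in Section 4.3, used here as the minimum of a made-up bowl-shaped function:

```python
import math

def gradient_descent(grad, u0, c, tol=1e-4, max_iter=1000):
    """Iterate u_{k+1} = u_k - c * grad(u_k) (the pattern of Eq. (11))
    until the gradient norm falls below tol; return the final point
    and the iteration count."""
    u = list(u0)
    for k in range(1, max_iter + 1):
        g = grad(u)
        if math.sqrt(sum(gi * gi for gi in g)) < tol:
            return u, k
        u = [ui - c * gi for ui, gi in zip(u, g)]
    return u, max_iter

# Toy quadratic "slack power" objective centred on the optimal voltages
# reported in Section 4.3 (illustrative only, not the real power equations).
u_opt = [121.1844, 118.5517]
grad = lambda u: [2.0 * (ui - oi) for ui, oi in zip(u, u_opt)]

u1, n1 = gradient_descent(grad, [120.0, 120.0], c=0.1)
u2, n2 = gradient_descent(grad, [120.0, 120.0], c=0.3)
print(n1, n2)  # the larger (still stable) step converges in fewer iterations
```

On this quadratic the update contracts the error by a factor |1 − 2c| per step, so c = 0.3 needs far fewer iterations than c = 0.1, while a too-large step makes the iterates overshoot and oscillate around the optimum, the behaviour the paper reports for c = 1.38.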
The results show that there is a range of values that minimizes microgrid losses, which is shown in (12). This occurs because the rate of change of the voltage decreases with each iteration, up to the point where a change in the control variables no longer exerts a change in the power of the slack node. The results with different step sizes were the following (Table 1): As shown in Table 1, the number of iterations required to reach the optimum of the system depends on the step size (c) chosen. Selecting a value c = 0.1, the algorithm meets the error limit in 22 iterations; however, if this value is increased to 0.3, the iterations are reduced to 8. If the step size is increased too far (c = 1.38), the algorithm presents a premature convergence in only 4 iterations, generating oscillations. 4.3. Voltage Profiles and Active Power Losses In order to verify the obtained results in the microgrid, an analysis of voltage profiles and system losses was carried out by means of power flows using the Newton-Raphson method in the Power System Analysis Toolbox (PSAT) of Matlab. The results are represented by Figure 6 and Figure 7. To examine the voltage profiles in the microgrid, power flows were run for the different cases which can be present in the system, considering the limits in Equation (8), and subsequently comparing graphically with the optimal values found with the OPF. The cases mentioned above are shown in Table 2. Table 1. Results with different step sizes. Table 2. Case of study for voltage profiles in the microgrid. Figure 7. Microgrid active power losses. The voltage profiles of the microgrid are illustrated in Figure 6. In this figure, the flattening of the curve (black line) at the optimal voltage values determined by the OPF can be observed (Case 25). The analysis of losses was performed in the same way as the voltage profiles, executing various power flows for the cases described in Table 2 and comparing with the resulting values of the OPF.
As Figure 7 shows, the losses decrease when the optimal voltages are applied (V1 = 121.1844 V and V2 = 118.5517 V) compared to normal operating conditions (V1 = 120 V and V2 = 120 V). The black dot indicates the maximum reduction of system losses. 5. Conclusions The state of the art concerning OPF and its application in microgrids reveals that, despite the significant development in this field, microgrids (isolated or interconnected) with photovoltaic generation have not been investigated in a substantial way. An OPF can improve parameters of microgrids such as the voltage profiles and thus the active power losses. These factors take a leading role in the efficiency, quality and reliability of an electric system. Due to the adaptability of the proposed OPF, the method can be applied to low-voltage electrical microgrids. In addition, the implemented mathematical algorithm has a lower complexity compared to methods based on genetic and artificial-intelligence algorithms. The proposed algorithm reduces the active power losses and improves the voltage profiles with a minimal error. Additionally, the method ensures convergence, which can be improved by modifying the step size c of the algorithm. By changing the equation of the objective function, the proposed method can optimize or improve any parameter of an electrical microgrid, since the underlying mathematics is the same for all conditions.
HP Forums
Hi there, I would like to evaluate the following expression for even and odd n's. One solution could be to type in n=1, then n=2, and so on, but this is time consuming. Is there a way to directly define the numbers at which to evaluate the expression? I have tried with n=[0 1], and then the calculator crashed... Thank you.
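The poster's expression is not shown in the thread, so the sketch below uses a made-up placeholder function; it only illustrates the general idea of evaluating an expression at a whole list of n values at once (shown here in Python, not in the calculator's own language):

```python
# Evaluating an expression at many integer n values at once, instead of
# typing n=1, n=2, ... by hand. f is a hypothetical stand-in expression;
# the poster's actual expression is not given in the thread.
def f(n):
    return (-1) ** n / (2 * n + 1)

even = [f(n) for n in range(0, 10, 2)]  # n = 0, 2, 4, 6, 8
odd  = [f(n) for n in range(1, 10, 2)]  # n = 1, 3, 5, 7, 9
print(even)
print(odd)
```

Stepping the range by 2 with different start values separates the even-n and odd-n cases that the post asks about.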
Study of Faraday waves in tanks in presence of polystyrene bead layers
Emergent Scientist, Volume 7 (2023), Article Number 2, 6 pages, Section: Physics
DOI: https://doi.org/10.1051/emsci/2023001
Published online: 28 April 2023
^1 École Polytechnique Fédérale de Lausanne, School of Physics, Station 3, CH-1015 Lausanne, Switzerland
^2 EPFL Rocket Team, CH-1015 Lausanne, Switzerland
^* e-mail: maximechristophenicolas.roux@epfl.ch
Received: 13 May 2022; Accepted: 23 March 2023
When a tank is subjected to vertical forced excitation, Faraday waves appear at the surface of the liquid the tank contains. In this paper, we consider the effect of layers of polystyrene beads placed on the surface of isopropanol undergoing a low-frequency vertical sinusoidal excitation. Beads on the surface remove most of the low-frequency resonances and reduce the amplitude of waves for the remaining ones. The formation of resonances with beads is observed to come from small gaps in the bead layers. A sufficient number of beads is needed to maintain the beads in one compact block and prevent Faraday waves. Key words: Applied fluid mechanics / Resonance and damping of mechanical waves © M.C.N. Roux et al., published by EDP Sciences, 2023 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. 1 Introduction Exciting a tank with forced excitation generates sloshing of the liquid it contains. Faraday waves are a special case of sloshing which appears when the tank is vertically excited [1–3]. This is illustrated in Figure 1. This phenomenon within launch vehicle tanks is a recurring problem when using liquid propellant engines.
Many launchers, from the Soviet N-1 lunar rocket to the French Emerald series of launchers, have been lost due to sloshing [4]. Two phenomena are found. The first is called the “Pogo effect” [5–7]. This is a feedback phenomenon involving the propulsion, the structure and the liquids contained in the tanks of the launcher. During a flight, the sloshing of propellants in the tanks leads to pressure variations within the tanks. These pressure variations disrupt the power supply to the engine and cause variations in engine thrust. This causes vibrations in the launcher structures, which in turn cause the liquids in the tanks to slosh. When this phenomenon generates positive feedback, it can lead to damage or destruction of the launcher. The second phenomenon is the application of lateral forces and torques due to sloshing. This can lead to problems with the trajectory [8] and can result in the destruction of the launcher. Mitigating propellant sloshing in launch vehicles is therefore a crucial challenge in the aerospace field, but also in many fields of industry such as aeronautics, the naval industry or liquid transport. In order to limit sloshing, the various actors in the field favour structural modifications of the tank [9–11]. These modifications make the construction more complex and can increase the mass of the tanks. Other solutions exist, such as the presence of bubbles on the surface of the liquid [12,13]. While bubbles avoid the issues involved in structural modifications, they are complex to implement and maintain. The use of beads avoids the constraints of both structural modifications and foams. In this paper, we consider the effect on Faraday waves of polystyrene beads placed on the surface of isopropanol in a rectangular tank subjected to low-frequency vertical sinusoidal excitation. We present experimental visualisations of the surface motion with several bead layers and without. Four hypotheses will be considered to explain the damping by polystyrene beads.
Fig. 1 Picture of an isopropanol Faraday wave in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled at h = (32 ± 2) mm. 2 Method 2.1 Experimental setup The experimental setup is shown in Figure 2. A rectangular tank of dimensions L = 70 mm and l = 15 mm is attached to a vibrating pot controlled with a function generator connected to an amplifier. The generator is set to the “sinusoid” mode, which creates a sinusoidal signal of selected amplitude and shaking frequency f. The movement of the liquid was recorded with a Phantom Research v411 camera connected to a computer and set to record 300 fps in HD (1280 × 800). A LED lamp is used to illuminate the tank to ensure good video quality. The vertical oscillation amplitude of the pot is A = (5 ± 1) mm. The tank is filled with liquid to a height h = (32 ± 2) mm. Due to the difficulty of exciting eigenmodes in small tanks, isopropanol was selected because of its low surface tension. When polystyrene beads are involved in the experiments, they are placed on the surface of the isopropanol such that they form an n-layer membrane. To measure the height of the content of the tank (either isopropanol or isopropanol + beads), the high-speed camera is placed 1.5 m away from the tank, far enough so that a proportionality relation between distances measured in pixels and real vertical distances can be assumed. To calibrate the conversion from pixels to mm, a picture of a 20 cm ruler is used. Polystyrene balls with a diameter between 0.3 cm and 0.6 cm were used. The width of a bead layer corresponds to 3 polystyrene balls. Several measurements allowed us to evaluate the temperature of the isopropanol as between 20 and 25 °C; we will therefore consider that its density ρ is (782.9 ± 2.1) kg/m^3 [15] and its surface tension γ is (21.48 ± 0.26) mN/m [16]. The gravitational acceleration ɡ is (9.81 ± 0.01) m/s^2 [17]. Fig. 2 Scheme illustrating the experiment [14].
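The pixel-to-millimetre calibration described above can be sketched as follows; the pixel counts are made-up placeholders, not measured values from the experiment:

```python
# Pixel-to-mm calibration (Section 2.1): a picture of a ruler of known
# length gives the scale factor, assumed constant because the camera is
# far (1.5 m) from the tank. All pixel counts here are hypothetical.
RULER_MM = 200.0    # the 20 cm calibration ruler
ruler_px = 1250.0   # hypothetical pixel length of the ruler in the image

mm_per_px = RULER_MM / ruler_px

def height_mm(pixels):
    """Convert a vertical pixel count (tank bottom to surface) to mm."""
    return pixels * mm_per_px

# Hypothetical measurement: surface at rest vs. maximum under excitation
h2 = height_mm(200)   # height of the content at rest
h1 = height_mm(350)   # maximum height under excitation
delta_h = h1 - h2     # the quantity Delta-h used throughout the paper
```

The same proportionality is what lets the authors read Δh directly from pixel counts in the recorded frames.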
2.2 Measure of the maximal height difference The wave amplitude is measured to characterise the impact of beads on the damping of Faraday waves. A protocol to measure this amplitude is presented below. A movie is made with the camera once the steady state is reached. The maximum height h[1] of the content of the tank is then obtained and measured with the image processing software “paint.net” by counting the number of vertical pixels between the bottom of the tank and the highest point of the content. To obtain Δh = h[1] – h[2], a measure of the height of the content at rest h[2] is performed with and without beads. This measurement process is illustrated in Figure 3. Fig. 3 Illustration of the measurement process. To obtain Δh = h[1] – h[2], a measure of the height of the content at rest h[2] and the maximum height h[1] of the content of the tank under excitation is performed by using an image processing software. 2.3 Frequency sweep with and without beads In order to see the damping by beads at low frequencies, a shaking frequency sweep is performed between 0 and 10 Hz every 0.2 Hz, with and without two layers of polystyrene beads. In each case, a measurement of the maximal height difference Δh is made. 2.4 Impact of the number of bead layers To assess the influence of the number of bead layers, a measurement is performed with a number of layers between 0 and 4 for shaking frequencies f of 9, 9.6 and 10 Hz. In each case, a measurement of the maximal height difference Δh is made. 3 Results 3.1 Frequency sweep with and without beads The experimental results for the frequency sweep are presented in Figure 4. Introducing a layer of beads makes it possible to significantly reduce the oscillations without, however, eliminating them completely (Fig. 4). Slight oscillations do remain. Around shaking frequencies of 9 and 10 Hz, it was observed that there were still significant oscillations, although less than without beads. This phenomenon is illustrated in Figure 5.
It is observed that for these frequencies, the surface oscillations are too large to keep a continuous layer of beads on the surface (Images 1 and 2). A strong excitation (Images 3 and 4) appears in the area not entirely covered by beads and spreads to the rest of the surface (Image 5). Fig. 4 Maximum height difference Δh of the liquid’s surface with and without two layers of beads for a rectangular tank of dimensions L = 70 mm and l = 15 mm filled at rest at h[liq] = (32 ± 2) mm for shaking frequencies ƒ between 0 Hz and 10 Hz. Fig. 5 Photo series of an isopropanol Faraday wave with two layers of beads at (9.000 ± 0.005) Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm, which illustrates the phenomenon of continuity break of the bead block. 3.2 Impact of the number of bead layers The impact of the number of bead layers is presented in Figures 6, 7 and 8, respectively for shaking frequencies ƒ = 9 Hz, ƒ = 9.6 Hz and ƒ = 10 Hz. It shows that Δh decreases with an increasing number of beads. Fig. 6 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 9 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. Fig. 7 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 9.6 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. Fig. 8 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 10 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. 4 Discussion 4.1 Faraday waves The liquid oscillates in the subharmonic regime, i.e. its oscillation frequency corresponds to half the shaking frequency of the vibrating pot. This is consistent with the characteristics of Faraday waves [2].
For small oscillations up to Δh = 20 mm, the waves are observed to be periodically oscillating sines. For larger oscillations, the regime is quasi-periodic. The shape of the surface is chaotic, but the period and the measured Δh are constant once the quasi-periodic regime is reached. 4.2 Frequency sweep with and without beads Figure 4 shows that beads have a significant impact on the damping of Faraday waves. By comparison with damping by foam on the surface [12], one can suggest that the main contribution to damping comes from the beads in contact with the walls. The low density of the beads allows them to aggregate on the surface and thus maximise the static friction between them, so that the beads form a compact block which is held together and immobile relative to the tank. This compactness strengthens the friction with the wall and thus increases energy dissipation. Beads constitute small items at the surface of the liquid; therefore they are obstacles that can break waves and inhibit the formation of new waves. They are very effective in preventing resonance modes from being excited (see Fig. 4). These beads can also be considered as an elastic membrane [18]. This membrane applies a pressure on the liquid’s surface, restraining its movement. The Faraday waves’ amplitude is reduced and the resonances are shifted to higher frequencies. This shift could therefore be useful in the aerospace field, as the vibrations of rockets’ structures are mainly at low frequencies [4]. The same phenomena are observed if the membrane is replaced by a thin layer of a viscous and lower-density liquid on the surface. Treating beads this way makes it easier to compute the damping numerically by solving the Navier-Stokes equations [3]. In this case the beads’ effective surface tension and effective viscosity need to be measured experimentally. 4.3 Impact of the number of bead layers Figures 6, 7 and 8 show that Δh decreases with an increasing number of beads.
For 3 and 4 layers, the observed Δh is not attributed to Faraday waves but instead to a few beads of the last layer which jump over the compact bead block. This is an artefact of the measurement process; the wave is not better damped with 3 layers than with 4, as in both cases there is no wave. The same intensity is observed with 1 and 2 layers. Figure 4 shows that 9 and 10 Hz correspond to resonances in the 2-layer scenario, which may not be the case with 1 layer, as the beads shift the resonances. This hypothesis is confirmed by the experiments at 9.6 Hz, as there are waves with one layer but not with two. To guarantee maximal damping of resonances by beads, the surface continuity of the beads needs to be ensured. Increasing the number of layers of beads prevents the formation of gaps in the bead membrane, as they are immediately filled with beads from an upper layer, therefore preventing discontinuities of the bead membrane. The friction with the wall is also proportional to the number of beads in contact with the walls, therefore it is also proportional to the number of layers of beads. For a tank excited by shaking it horizontally, it has been shown in the literature that the number of layers is positively correlated with damping [19]. 5 Dead end 5.1 Experimental setup At first, we did our experiments with water; however, even with a single layer of beads no wave could be observed for any frequency or shaking amplitude (considering the limits of the vibrating pot). The only conclusion was that the beads were effective, but this did not allow any quantitative analysis of the process by which they inhibit Faraday waves. Isopropanol is a liquid with a smaller surface tension and is therefore easier to excite. As its viscosity is lower, the damping of waves is reduced, so that the impact of the beads is clearer. It is also cheap and easy to acquire.
Initially, we planned using a cylindrical tank for our experiments since the tanks of the aerospace industry where the use of balls would be usable are of this shape. However, we found that these tanks are not suitable for our measurement protocol due to its width. Indeed, the cylindrical tanks have a width equal to their diameter. The upper surface liquid profile cannot be projected in 2D from the camera point of view. Therefore the maximum height cannot be observed visually by only one camera (see Fig. 9). However, equations (1) and (2) [2], which correspond to the resonance frequencies of a perfect fluid respectively in a rectangular tank and in a cylindrical one, are very similar, the only difference being the geometrical coefficient k[mn] (resp. ). As the physics of these tanks is similar, we carried out our studies on a rectangular one, considering that the results thus obtained can be applied to cylindrical ones. $fmn=12π[ gkmn+γρkmn3 ] tanh (kmnh)$(1) where $kmn=πm2/L2+n2/l2$, l, L and h are respectively the width, length and height of the rectangular tank, ɡ is the gravitational acceleration, γ is the surface tension of the liquid and ρ is its $fmn=12π[ gλmn+γρλmn3 ] tanh (λmnh)$(2) where λ = ɛ[mn]/R, R is the radius of a cylindrical tank, ɛ[mn] is the nth root of the derivative of J[m], J[m] is the first kind Bessel function of order m, ɡ is the gravitational acceleration, γ is the surface tension of the liquid and ρ is its density. Using equation (1) with the setup used for Figure 4, the three lowest resonant frequencies without beads are found to be 3.17 Hz, 4.76 Hz and 5.93 Hz. However the first resonances in Figure 4 are at (3.7 ± 0.2) Hz, (4.4 ± 0.2) Hz (6 ± 0.5) Hz. The right order of magnitude is predicted, but resonances are shifted to lower frequencies due to the strong non-linearity of the oscillation regime for large amplitudes. Fig. 
9 Picture of Faraday waves in a cylindrical tank of radius R = 5 cm filled with liquid at rest at h[liq] = (4 ± 2) mm. 5.2 Sweep frequency with and without beads We chose the main frequency sweep presented in Figure 4, with two layers for the case with beads. We first tried with many more bead layers, but then no sloshing could be observed at all. With two layers, we estimated that the effect of the beads could be observed without fully inhibiting the oscillation, so that resonance frequency shifts and damping could be analysed. 5.3 Impact of number of bead layers We chose to study the impact of the number of layers at frequencies where sloshing was significant. For two layers, Figure 4 shows that there is sloshing between 9 and 10 Hz even with two layers of beads. Between 4 and 5 beads fit widthwise between the walls of the tank. No transverse bridges were observed in any of the experiments. 6 Conclusion In this paper, the influence of polystyrene beads on the surface of a liquid under forced excitation is studied in order to attenuate Faraday waves. It is concluded that the presence of layers of beads on the surface of a liquid damps the oscillations. The formation of resonances with beads is observed to originate from small gaps in the bead layers, where the liquid can start to oscillate. Once this small oscillation is triggered, it propagates to the whole tank in a short time. A sufficient number of beads is needed to keep the beads in one compact block and prevent discontinuities of the bead layers. Four hypotheses are advanced to explain this phenomenon: friction of the beads with the tank’s walls contributes to damping; the beads mechanically break the waves; the beads can be considered as a thin elastic membrane restraining the motion of the liquid; the beads can be considered as a liquid, with an effective surface tension and an effective viscosity.
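The three predicted frequencies quoted in Section 5.1 can be reproduced by evaluating equation (1) numerically. This is a sketch: the isopropanol properties (surface tension γ and density ρ) and g are assumed typical values, not taken from the paper, and they reproduce 3.17 Hz, 4.76 Hz and 5.93 Hz to within about 0.01 Hz:

```python
import math

def rect_resonance(m, n, L, l, h, gamma, rho, g=9.81):
    """Resonance frequency f_mn (Hz) of an ideal fluid in a rectangular tank, eq. (1)."""
    k = math.pi * math.sqrt(m**2 / L**2 + n**2 / l**2)
    return math.sqrt((g * k + gamma / rho * k**3) * math.tanh(k * h)) / (2 * math.pi)

# Tank of Figure 4; liquid properties are assumed values for isopropanol.
L, l, h = 0.070, 0.015, 0.032   # tank length, width, fill height (m)
gamma, rho = 0.021, 786.0       # surface tension (N/m), density (kg/m^3)

for m in (1, 2, 3):
    print(f"f_{m}0 = {rect_resonance(m, 0, L, l, h, gamma, rho):.2f} Hz")
```

The agreement to two decimal places is somewhat fortuitous given the assumed liquid properties; the point is that the lowest longitudinal modes (n = 0) fall in the few-Hz range probed in Figure 4.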
We thank the EPFL Rocket Team, which supported and promoted this research, and the School of Physics, which made it possible in the frame of the 3rd year Laboratory training. Cite this article as: Maxime Christophe Nicolas Roux, Benjamin Arthur Hugo Meunier, Daniele Mari. Study of Faraday waves in tanks in presence of polystyrene bead layers, Emergent Scientist 7, 2 All Figures Fig. 1 Picture of isopropanol Faraday wave in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled at h = (32 ± 2) mm. In the text Fig. 2 Scheme illustrating the experiment [14]. In the text Fig. 3 Illustration of the measurement process. To obtain Δh = h[1] – h[2], a measure of the height of the content at rest h[2] and the maximum height h[1] of the content of the tank under excitation is performed using image processing software. In the text Fig. 4 Maximum height difference Δh of the liquid’s surface with and without two layers of beads for a rectangular tank of dimensions L = 70 mm and l = 15 mm filled at rest at h[liq] = (32 ± 2) mm for shaking frequencies ƒ between 0 Hz and 10 Hz. In the text Fig. 5 Photo series of isopropanol Faraday wave with two layers of beads at (9.000 ± 0.005) Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm which illustrates the phenomenon of continuity break of the bead block. In the text Fig. 6 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 9 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. In the text Fig. 7 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 9.6 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. In the text Fig.
8 Maximum height difference Δh for a number of bead layers n between 0 and 4, at 10 Hz in a rectangular tank of dimensions L = 70 mm and l = 15 mm filled with liquid at rest at h[liq] = (32 ± 2) mm. In the text Fig. 9 Picture of Faraday waves in a cylindrical tank of radius R = 5 cm filled with liquid at rest at h[liq] = (4 ± 2) mm. In the text
OpenStax College Physics, Chapter 28, Problem 14 (Problems & Exercises) (a) How far does the muon in Example 28.1 travel according to the Earth-bound observer? (b) How far does it travel as viewed by an observer moving with it? Base your calculation on its velocity relative to the Earth and the time it lives (proper time). (c) Verify that these two distances are related through length contraction $\gamma = 3.20$ Question by is licensed under CC BY 4.0 Final Answer a. $135\textrm{ m}$ b. $433 \textrm{ m}$ c. Yes, the Lorentz factor relates the two distances. Solution video OpenStax College Physics, Chapter 28, Problem 14 (Problems & Exercises) Video Transcript This is College Physics Answers with Shaun Dychko. A muon has a lifetime in its reference frame of 1.52 microseconds, so this means that if an observer is traveling with the muon (in which case they are not moving at all with respect to the muon because they are going with it) they are measuring proper time, and so that's 1.52 microseconds; the speed of the muon is 0.950 times the speed of light, and the question in part (a) is how far does the muon travel according to an Earth-based observer? So an Earth-based observer will measure L, and that's going to be the square root of 1 minus v squared over c squared times the proper length, or the proper distance that the muon travels, and this is according to the muon observer. The observer moving with the muon will measure L naught, and L naught will be the speed of the muon times the proper time measured in the reference frame of the muon. So we can replace L naught with v t naught, and then the distance measured on Earth is going to be the square root of 1 minus 0.950c squared over c squared, multiplied by 0.950 times the speed of light as a number here (2.998 times 10 to the 8 meters per second), times 1.52 times 10 to the minus 6 seconds (the proper-time lifetime), and this works out to 135 meters.
Part (b) asks what distance the muon travels according to an observer moving with the muon. That's going to be the speed of the muon times the proper time: 0.950 times the speed of light times the lifetime of the muon in the muon's reference frame, and that's 433 meters. And question (c) asks whether these lengths are related by the Lorentz factor, this thing called γ. Our formula says the contracted length equals the proper length divided by γ. So that's 432.9112 meters divided by 3.20, which equals 135 meters, and so yes indeed, you can find the contracted length by dividing the proper length by γ to get the length measured in the Earth-bound reference frame.
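The numbers in this solution are easy to check with a short script (using the same value for the speed of light as the transcript):

```python
import math

c = 2.998e8                      # speed of light (m/s), as used in the transcript
v = 0.950 * c                    # muon speed
t0 = 1.52e-6                     # proper lifetime (s)

L0 = v * t0                          # part (b): distance in the muon frame
gamma = 1 / math.sqrt(1 - 0.950**2)  # Lorentz factor
L = L0 / gamma                       # part (a): contracted distance seen from Earth

print(f"L0 = {L0:.0f} m, gamma = {gamma:.2f}, L = {L:.0f} m")
```

This prints L0 = 433 m, gamma = 3.20 and L = 135 m, matching the final answers above.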
National Taiwan Normal Dissertations and Theses Upload System [1] H. Abels and J. Butz, A Blow-up Criterion for the Curve Diffusion Flow with a Contact Angle, SIAM J. Math. Anal. 52, 3 (2020), 2592--2623. [2] G. Arreaga, R. Capovilla, C. Chryssomalakos and J. Guven, Area-constrained planar elastica, Physical Review E., Volume 65, 031801 (2002). [3] T. Au, and T. Y. Wan, Analysis on an ODE arisen from studying the shape of a red blood cell, J. Math. Anal. Appl., 282, 1 (2003), 279--295. [4] J. W. Barrett, H. Garcke and R. Nurnberg, Elastic flow with junctions: Variational approximation and applications to nonlinear splines, Math. Models Methods Appl. Sci. 22, 11 (2012), 1250037. [5] A. Borbely and M. J. Johnson, Elastic splines I: Existence, Constr. Approx. 40, 2 (2014), 189--218. [6] A. Borbely and M. J. Johnson, Elastic splines II: Unicity of optimal s-curves and curvature continuity, Constr. Approx. 49, 3 (2019), 525--554. [7] R. Capovilla, C. Chryssomalakos and J. Guven, Elastica hypoarealis, The European Physical Journal B - Condensed Matter and Complex Systems 29 (2002), 163--166. [8] A. Dall'Acqua, C.-C. Lin, and P. Pozzi, Evolution of open elastic curves in R^n subject to fixed length and natural boundary conditions, Analysis (Berlin), 34, 2 (2014), 209--222. [9] A. Dall'Acqua, C.-C. Lin and P. Pozzi, A gradient flow for open elastic curves with fixed length and clamped ends. Ann. Sc. Norm. Super. Pisa Cl. 17, 5 (2017), 1031--1066. [10] A. Dall'Acqua, C.-C. Lin and P. Pozzi, Flow of elastic networks: long time existence results, Geom. Flows, 4 (2019), 83--136. [11] A. Dall'Acqua, C.-C. Lin and P. Pozzi, Elastic flow of networks: short-time existence result, J. Evol. Eq. 21 (2021), 1299--1344. [12] A. Dall'Acqua, C.-C. Lin and P. Pozzi, Argument for the extension of the special geometric solution (SGS), (Preprint 2021). [13] A. Dall'Acqua and P. Pozzi, A Willmore-Helfrich L^2-flow of curves with natural boundary conditions, Comm. Anal. Geom. 22, 2 (2014), no. 
4, 617--669. MR 3263933. [14] A. Dall'Acqua and A. Spener, The elastic flow of curves in the hyperbolic plane. Preprint, arXiv:1710.09600 (2017). [15] D. DeTurck, Deforming metrics in the direction of their Ricci tensors, J. Diff. Geom. 18 (1983), 157--162. (Improved version), in Collected Papers on Ricci Flow, H.D. Cao, B. Chow, S.C. Chu and S.T. Yau, editors, Series in Geometry and Topology, 37 Int. Press (2003), 163--165. [16] J. P. Dix, Existence of the limit at infinity for a function that is integrable on the half line, RHIT Undergrad. Math. J., 14, 1 (2013). [17] G. Dziuk, E. Kuwert and R. Schatzle, Evolution of elastic curves in R^n existence and computation, SIAM J. Math. Anal. 33, 5 (2002), 1228--1245. [18] H. Garcke and A. Novick-Cohen, A singular limit for a system of degenerate Cahn- Hilliard equations, Adv. Differential Equations, 5 (2000), no. 4-6, 401--434. MR 1750107. [19] M. Giaquinta, G. Modica, and J. Soucek, Cartesian currents in the calculus of variations. I. Cartesian currents. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics], 37. Springer-Verlag, Berlin, 1998. [20] M. Golomb and J. Jerome, Equilibria of the curvature functional and manifolds of nonlinear interpolating spline curves, SIAM J. Math. Anal. 13, 3 (1982), 421--458. [21] M. Gosswein, J. Menzel and A. Pluda, Existence and uniqueness of the motion by curvature of regular networks. Preprint, arXiv:2003.09962 (2020). [22] R. Hamilton, Three-manifolds with positive Ricci curvature, J. Diff. Geom. 17 (1982), 255-306. [23] W. Han and E. Jou, On the computation of minimal-energy splines: convergence analysis. Appl. Math. Comput. 47, 1 (1992), 1--13. [24] W. Helfrich, Elastic Properties of Lipid Bilayers: Theory and Possible Experiments, Z. Naturforsch C 28 (1973), 693--703. [25] J. W. 
Jerome, Smooth interpolating curves of prescribed length and minimum curvature, Proc. Amer. Math. Soc. 51, 1 (1975), 62--66. [26] M. J. Johnson and H. S. Johnson, A constructive framework for minimal energy planar curves, Appl. Math. Comput. 276 (2016), 172--181. [27] E. D. Jou and W. Han, Minimal-energy splines. I. Plane curves with angle constraints, Math. Methods Appl. Sci. 13, 4 (1990), 351--372. [28] W. Kuhnel, Differential Geometry. Curves Surfaces Manifolds, translated from the 1999 German original by Bruce Hunt, 2nd edition, Stud. Math. Libr., vol. 16, American Mathematical Society, Providence, RI, 2006. [29] J. Langer and D. A. Singer, Lagrangian aspects of the Kirchhoff elastic rod, SIAM Rev. 38, 4 (1996), 605--618. [30] E. H. Lee and G. E. Forsythe, Variational study of nonlinear spline curves, SIAM Rev. 15, 1 (1973), 120--133. [31] P. Li and S.T. Yau, A new conformal invariant and its application to the Willmore conjecture and the first eigenvalue of compact surfaces, Invent. Math. 69 (1982), 269--291. [32] C.-C. Lin, L^2-flow of elastic curves with clamped boundary conditions, J. Diff. Eq. 252, 12 (2012), 6414--6428. [33] C.-C. Lin, Y.-K. Lue and H. R. Schwetlick, The second-order L^2-flow of inextensible elastic curves with hinged ends in the plane. J. Elast. 119, 2 (2015), 263--291. [34] C.-C. Lin, Y.-K. Lue and D. T. Tran, A second-order elastic flow for path planning in $\mathbb{R}^2$, submitted to CCM. [35] C.-C. Lin, H. R. Schwetlick and D. T. Tran, An elastic flow for nonlinear spline interpolations in R^n. Tran. Amer. Math. Soc. 375, 7 (2022), 4893--4942. [36] C. Mantegazza, Lecture Notes on Mean Curvature Flow, vol. 290 of Progress in Mathematics. Birkhauser Basel, 2011. [37] M. Moll and L. E. Kavraki, Path Planning for Variable Resolution Minimal-Energy Curves of Constant Length, Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2142--2147, 2005. [38] A.
Polden, Curves and surfaces of least total curvature and fourth-order flows, Ph.D. dissertation, Universitat Tubingen, Tubingen, Germany, 1996. [39] L. Simon, Existence of Willmore surfaces Miniconference on Geometry and Partial Differential Equations (Canberra 1985), Proceedings of Centre Mathematical Analysis, 10, Australian National University, Canberra (1986), 187--216. [40] V. A. Solonnikov, Boundary value problems of mathematical physics. III, proceedings of the steklov institute of mathematics, no. 83 (1965), Amer. Math. Soc., Providence. [41] M. E. Taylor, Partial differential equations I. Basic theory, Applied Mathematical Sciences, 115, Second Edition, Springer, New York, 2011. [42] G. Wheeler and V. M.Wheeler, Curve diffusion and straightening flows on parallel lines. RIMS Kokyuroku (2017), 2046: 60--68. http://hdl.handle.net/2433/237016. [43] T.J. Willmore, Riemannian Geometry, Clarendon, Oxford (1993).
Frequently Asked Java Program 12: Java Program To Find Divisors Of Given Number Hello Folks, As part of Frequently Asked Java Programs In Interviews For Freshers And Experienced, in this post we will see a Java program to find the divisors of a given number. Write a JAVA program to find the divisors of a given number. • A divisor is a number that divides another number without a remainder, i.e. a number that divides an integer evenly. Ex: 10/2 = 5 (Here 2 is a divisor.) Logic 1: • A positive divisor of a number N will be greater than or equal to 1 and less than or equal to N. • For ex: say the number is 4. The divisors of 4 are 1, 2 and 4. • To get the divisors of a number N, we divide N by every number between 1 and N, including 1 and N. • We use the modulus operator, which gives the remainder. If the remainder is zero, the given number is divisible by the other number. • For ex: 10 % 2 = 0 (10 is divisible by 2 without a remainder) • 11 % 2 = 1 (11 is divisible by 2 with a remainder of 1) Java Code: Logic 2: • Every divisor Y of N other than N itself satisfies 1 <= Y <= N/2, so we only need to test up to N/2 and then add N itself. • In the first logic, the loop runs until the counter equals the number, which makes it time consuming. • Logic 2 cuts the loop iterations roughly in half. Java Code: Logic 3: • For every divisor Y of N, there is also a corresponding divisor N/Y. Thus to find all pairs of divisors, you only need to loop from 1 to the square root of N. • This is the most efficient of the three solutions, and the one an interviewer will expect. Java code: If you have any other logic for solving the above problem, please comment. It is always better to know more approaches. If you like my posts, please like, comment, share and subscribe.
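Since the original "Java Code:" snippets did not survive extraction, here is a sketch of Logic 3 as described above; the class and method names are illustrative, not from the original post:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Divisors {
    // Logic 3: every divisor y <= sqrt(n) pairs with the divisor n / y.
    static List<Integer> divisors(int n) {
        List<Integer> small = new ArrayList<>();
        List<Integer> large = new ArrayList<>();
        for (int y = 1; (long) y * y <= n; y++) {
            if (n % y == 0) {              // zero remainder: y divides n evenly
                small.add(y);
                if (y != n / y) {
                    large.add(n / y);      // the paired divisor above sqrt(n)
                }
            }
        }
        Collections.reverse(large);        // put the large divisors in ascending order
        small.addAll(large);
        return small;
    }

    public static void main(String[] args) {
        System.out.println(divisors(36));  // [1, 2, 3, 4, 6, 9, 12, 18, 36]
        System.out.println(divisors(10));  // [1, 2, 5, 10]
    }
}
```

The `(long) y * y <= n` guard avoids `int` overflow for inputs near `Integer.MAX_VALUE`, and the check `y != n / y` prevents a perfect square's root from being listed twice.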
Dec 2021 Friday December 31 2021 Time Replies Subject 5:52PM 1 compiling from source on RedHat 8 5:38PM 2 compiling from source on RedHat 8 2:19PM 1 Using R to convert RDF/XML to 9:48AM 2 mixture univariate distributions fit 8:25AM 1 mixture univariate distributions fit 7:59AM 1 mixture univariate distributions fit 12:11AM 0 question about error message: "Aesthetics must be either length 1 or the same as the data (226): y and colour" Thursday December 30 2021 Time Replies Subject 6:42PM 2 question about error message: "Aesthetics must be either length 1 or the same as the data (226): y and colour" 4:09PM 1 mixture univariate distributions fit 8:36AM 1 mixture univariate distributions fit 4:25AM 1 Speed up studentized confidence intervals ? Wednesday December 29 2021 Time Replies Subject 8:34PM 2 Mapping of groups of countries 7:08PM 1 Speed up studentized confidence intervals ? 4:00PM 1 mixture univariate distributions fit 2:29PM 1 How to modify object's code living in some environment? 11:05AM 1 mixture univariate distributions fit 10:24AM 1 How to modify object's code living in some environment? Tuesday December 28 2021 Time Replies Subject 9:21PM 2 How to modify object's code living in some environment? 1:17PM 0 doubt with boosting algorithms in caret in r Monday December 27 2021 Time Replies Subject 5:28PM 1 How to modify object's code living in some environment? 2:36PM 1 How to modify object's code living in some environment? 1:27PM 1 How to modify object's code living in some environment? 1:25PM 2 How to modify object's code living in some environment? 1:06PM 3 How to modify object's code living in some environment? 
Saturday December 25 2021 Time Replies Subject 12:54AM 1 question about for loop Friday December 24 2021 Time Replies Subject 10:37PM 2 question about for loop 9:48PM 1 question about for loop 6:30PM 1 .Rdata not loading 6:22PM 1 .Rdata not loading 4:31PM 2 .Rdata not loading 3:48PM 1 .Rdata not loading 3:34PM 1 .Rdata not loading 2:49PM 1 Error Awareness 2:38PM 1 .Rdata not loading 3:28AM 1 Error Awareness 1:25AM 2 .Rdata not loading Thursday December 23 2021 Time Replies Subject 8:09PM 1 .Rdata not loading 7:18PM 3 .Rdata not loading 4:16PM 2 Error Awareness 3:54PM 1 Script Run Output Capturing 3:49PM 1 Error Awareness 3:39PM 1 Request for help in solving an optimization problem 3:00PM 1 Script Run Output Capturing 1:27PM 2 Script Run Output Capturing 1:10PM 1 Speed up studentized confidence intervals ? 11:44AM 1 Script Run Output Capturing 11:11AM 3 Error Awareness 11:03AM 1 Script Run Output Capturing 7:14AM 0 problem with switching windows in RSelenium.. 7:12AM 1 able to estimate in the excel but not in R, any suggestion? 6:57AM 1 able to estimate in the excel but not in R, any suggestion? 6:22AM 1 able to estimate in the excel but not in R, any suggestion? Wednesday December 22 2021 Time Replies Subject 10:37PM 1 Adding SORT to UNIQUE 10:30PM 2 for loop question in R 6:18PM 2 for loop question in R 5:59PM 1 Adding SORT to UNIQUE 5:42PM 1 for loop question in R 5:33PM 1 Adding SORT to UNIQUE 5:12PM 1 Adding SORT to UNIQUE 5:01PM 2 Adding SORT to UNIQUE 4:59PM 1 Adding SORT to UNIQUE 4:58PM 4 for loop question in R 4:57PM 1 Adding SORT to UNIQUE 4:13PM 1 Adding SORT to UNIQUE 4:08PM 1 Adding SORT to UNIQUE 3:55PM 3 Adding SORT to UNIQUE 3:47PM 1 Adding SORT to UNIQUE 3:20PM 1 Adding SORT to UNIQUE 12:56PM 0 visualizing CTM topic models in R... 
2:00AM 1 problem: try to passing macro value into submit block Tuesday December 21 2021 Time Replies Subject 11:35PM 1 Creating NA equivalent 10:55PM 1 Creating NA equivalent 10:00PM 2 Creating NA equivalent 8:38PM 1 Adding SORT to UNIQUE 7:45PM 2 Creating NA equivalent 7:44PM 0 Adding SORT to UNIQUE 7:17PM 1 Adding SORT to UNIQUE 6:09PM 1 Adding SORT to UNIQUE 5:53PM 1 Adding SORT to UNIQUE 5:29PM 1 Adding SORT to UNIQUE 5:28PM 1 Adding SORT to UNIQUE 5:02PM 2 Adding SORT to UNIQUE 4:59PM 1 Adding SORT to UNIQUE 4:38PM 2 Adding SORT to UNIQUE 4:31PM 1 Adding SORT to UNIQUE 4:20PM 1 Adding SORT to UNIQUE 4:07PM 1 Adding SORT to UNIQUE 3:16PM 3 Adding SORT to UNIQUE 11:55AM 1 Creating NA equivalent 11:31AM 1 Creating NA equivalent 11:16AM 1 Creating NA equivalent 10:26AM 2 Creating NA equivalent 4:41AM 3 Creating NA equivalent Monday December 20 2021 Time Replies Subject 10:53PM 1 Adding SORT to UNIQUE 9:18PM 1 Adding SORT to UNIQUE 5:51PM 1 Adding SORT to UNIQUE 5:39PM 0 Adding SORT to UNIQUE 5:32PM 2 Adding SORT to UNIQUE 5:30PM 1 Bug in list.files(full.names=T) 5:26PM 1 Adding SORT to UNIQUE 5:15PM 2 Adding SORT to UNIQUE 5:05PM 1 Adding SORT to UNIQUE 4:58PM 4 Adding SORT to UNIQUE 4:40PM 1 Bug in list.files(full.names=T) 4:13PM 0 Bug in list.files(full.names=T) 3:50PM 1 Bug in list.files(full.names=T) 2:09PM 0 Call for proposals to organize useR! 2023 as a hybrid conference 11:56AM 1 Sum every n (4) observations by group 8:46AM 1 Bug in list.files(full.names=T) 8:13AM 1 help with LDA topic modelling.. 7:28AM 1 help with LDA topic modelling.. Sunday December 19 2021 Time Replies Subject 6:49PM 1 Sum every n (4) observations by group 6:31PM 2 Sum every n (4) observations by group 11:45AM 2 Speed up studentized confidence intervals ? 
Saturday December 18 2021 Time Replies Subject 6:50PM 1 CandlestickCharts of any listed stock and fetching stock prices into R 2:55PM 3 Bug in list.files(full.names=T) 2:14PM 2 CandlestickCharts of any listed stock and fetching stock prices into R 12:02PM 0 problem with RSelenium.... 10:00AM 1 nlme::lme sigma parameter standard deviation or variance? 3:39AM 0 'FinCal' and 'ggplot2' packages are loaded. Friday December 17 2021 Time Replies Subject 5:05PM 1 nlme::lme sigma parameter standard deviation or variance? 5:32AM 2 FinCal and ggplot2 packages in r could not be loaded. Thursday December 16 2021 Time Replies Subject 11:29PM 1 matching type question, please 10:54PM 1 Using boot.ci "norm" not working 10:51PM 2 matching type question, please 10:24PM 1 Using boot.ci "norm" not working 10:01PM 1 matching type question, please 8:38PM 1 matching type question, please 8:17PM 1 matching type question, please 4:11PM 0 write out a complete GAM fitted model equation, using parameter estimates 2:39PM 0 Changing time intervals in data set 7:06AM 1 Changing time intervals in data set Wednesday December 15 2021 Time Replies Subject 11:42PM 1 Changing time intervals in data set 11:32PM 1 Changing time intervals in data set 9:12PM 1 Changing time intervals in data set 6:05PM 2 Changing time intervals in data set 2:40PM 1 Changing time intervals in data set 1:20PM 1 transformTukey 1:11PM 1 transformTukey 12:10PM 1 Need older version of R 1:59AM 1 checkpointing 12:39AM 1 checkpointing Tuesday December 14 2021 Time Replies Subject 2:37PM 1 R For Windows - Apache Log4J Vulnerability Inquiry 10:59AM 1 Format dates issues with R 10:54AM 3 Format dates issues with R Monday December 13 2021 Time Replies Subject 7:07PM 1 checkpointing 6:51PM 2 checkpointing 5:58PM 2 checkpointing 5:50PM 0 qqconf: An R Package for Q-Q Plot Testing Bounds 4:53PM 2 checkpointing 4:37PM 2 checkpointing 12:00PM 1 subset data frame problem 4:30AM 2 subset data frame problem Sunday December 12 2021 Time Replies 
Subject 5:12PM 2 help with parellel processing and RSelenium 4:43PM 1 help with parellel processing and RSelenium Saturday December 11 2021 Time Replies Subject 8:29PM 1 Repeating a R code and counting how many repetitions working 3:37PM 1 Repeating a R code and counting how many repetitions working 11:55AM 1 Repeating a R code and counting how many repetitions working 10:47AM 1 Repeating a R code and counting how many repetitions working Thursday December 9 2021 Time Replies Subject 2:47PM 1 Forwarding missing arguments to the `[` method Wednesday December 8 2021 Time Replies Subject 8:19PM 1 Error in loading tidyverse library 1:52PM 2 Forwarding missing arguments to the `[` method Tuesday December 7 2021 Time Replies Subject 11:44AM 1 Puzzled about loading the Rattle interface package... 12:33AM 2 Puzzled about loading the Rattle interface package... Monday December 6 2021 Time Replies Subject 5:32PM 1 GTsummary add title with multi row 1:33PM 0 Error: the leading minor of order 6 is not positive definite Saturday December 4 2021 Time Replies Subject 3:54PM 2 Handling dependencies on Bioconductor packages for packages on CRAN 7:33AM 1 Find tibble row with maximum recorded value Friday December 3 2021 Time Replies Subject 11:59PM 1 Find tibble row with maximum recorded value 11:27PM 2 Find tibble row with maximum recorded value 10:56PM 2 Find tibble row with maximum recorded value 10:08PM 1 Find tibble row with maximum recorded value 9:46PM 1 Find tibble row with maximum recorded value 8:55PM 3 Find tibble row with maximum recorded value 4:00PM 1 Problem with lm Giving Wrong Results 3:28PM 1 Problem with lm Giving Wrong Results 4:06AM 1 Question about Rfast colMins and colMaxs 12:21AM 1 SOMAscan data analysis Thursday December 2 2021 Time Replies Subject 9:41PM 2 Forwarding missing arguments to the `[` method 9:16PM 1 A technical question on methods for "+" 9:00PM 1 Forwarding missing arguments to the `[` method 8:57PM 1 Forwarding missing arguments to the `[` 
method 8:52PM 0 2022 John M. Chambers Software Award 8:40PM 1 A technical question on methods for "+" 8:22PM 1 A technical question on methods for "+" 8:10PM 2 A technical question on methods for "+" 7:31PM 2 A technical question on methods for "+" 4:53PM 1 Problem with lm Giving Wrong Results 3:31PM 1 Problem with lm Giving Wrong Results 2:34PM 1 Problem with lm Giving Wrong Results 2:31PM 1 Problem with lm Giving Wrong Results 1:42PM 0 R 64 bit v 4.1.2 11:40AM 1 Fwd: alternative way to define a function 11:23AM 1 Fwd: alternative way to define a function 10:50AM 4 Problem with lm Giving Wrong Results Wednesday December 1 2021 Time Replies Subject 9:38PM 2 readxl/lifecycle/rlang 9:36PM 0 help 5:23PM 1 Question about Rfast colMins and colMaxs 4:31PM 2 Question about Rfast colMins and colMaxs 4:19PM 1 Question about Rfast colMins and colMaxs 4:11PM 1 Question about Rfast colMins and colMaxs 3:30PM 1 readxl 2:17PM 0 Syntax help for 'Pivot_longer' 3:51AM 1 Degree symbol as axis label superscript 3:42AM 2 Question about Rfast colMins and colMaxs 1:59AM 1 Degree symbol as axis label superscript 1:22AM 1 Degree symbol as axis label superscript [RESOLVED]
Pressure Calculator Sound Pressure to Decibel Converter Sound Pressure (Pa): Formula: dB SPL = 20 * log10(p / p0), where p is the measured sound pressure and p0 is the reference sound pressure (20 μPa in air). 0 dB SPL is the threshold of human hearing. Here’s a comprehensive table with all the … Read more Force Calculator from Pressure and Mass Pressure: Pascal (Pa) / Kilopascal (kPa) / Bar / PSI. Mass: Kilogram (kg) / Gram (g) / Pound (lb). Force (N) = Pressure (Pa) × Area (m²). Area is calculated from mass using the standard gravitational acceleration (g = 9.80665 m/s²). Here’s a comprehensive table with all the essential information about calculating force from … Read more Weight to Pressure Calculator Weight: Kilograms (kg) / Grams (g) / Pounds (lb) / Newtons (N). Area: Square meters (m²) / Square centimeters (cm²) / Square millimeters (mm²) / Square inches (in²) / Square feet (ft²). Formula: Pressure = Force / Area. This calculator converts weight to force (if necessary) and then calculates pressure by dividing force by area.
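The dB SPL and weight-to-pressure formulas quoted above are one-liners. A minimal sketch, using the 20 μPa reference and g = 9.80665 m/s² stated on the page:

```python
import math

P_REF = 20e-6  # reference sound pressure in air (Pa)

def spl_db(p):
    """Sound pressure (Pa) to sound pressure level (dB SPL)."""
    return 20 * math.log10(p / P_REF)

def weight_to_pressure(mass_kg, area_m2, g=9.80665):
    """Pressure (Pa) from a mass resting on a given area: P = m*g / A."""
    return mass_kg * g / area_m2

print(spl_db(1.0))                    # ~94 dB SPL, the usual microphone calibration level
print(weight_to_pressure(70, 0.05))   # ~13.7 kPa for a 70 kg person on 0.05 m^2
```

Note that spl_db(20e-6) returns exactly 0, the threshold of human hearing mentioned above.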
Here’s a … Read more Pressure Conversion Calculator (mmHg) Input Unit: Millimeters of Mercury (mmHg) / Inches of Mercury (inHg) / Pascal (Pa) / Kilopascal (kPa) / Bar / Atmosphere (atm) / Pounds per Square Inch (psi) / Torr. Standard atmospheric pressure: 760 mmHg. 1 mmHg ≈ 1 torr (slight difference at high precision). mmHg is commonly used in medical settings for blood pressure measurements. Here’s a comprehensive table … Read more Barometric Pressure to Millibars Converter Input Unit: Inches of Mercury (inHg) / Millimeters of Mercury (mmHg) / Hectopascals (hPa) / Millibars (mbar) / Atmospheres (atm) / Pounds per Square Inch (psi). Standard atmospheric pressure at sea level: 1013.25 hPa (millibars). This is equivalent to 29.92 inHg, 760 mmHg, 1 atm, or 14.7 psi. Here’s a comprehensive table with all the … Read more Pressure per Square Inch Calculator Input Unit: Pascal (Pa) / Hectopascal (hPa) / Kilopascal (kPa) / Bar / Atmosphere (atm) / Millimeters of Mercury (mmHg) / Inches of Mercury (inHg) / Pounds per Square Inch (psi). Here’s a comprehensive table with all the essential information about pressure per square inch (PSI): Aspect Information Definition Pressure exerted on one square inch of surface area Symbol psi … Read more Atmospheric Pressure vs Elevation Calculator Elevation Unit: Meters / Feet. Temperature Unit: Celsius / Fahrenheit. Based on the search results, I’ll create a comprehensive table summarizing the key information about atmospheric pressure vs elevation: Elevation Atmospheric Pressure Temperature Notes -1000 m (-3281 ft) 113.9 kPa (16.52 psi) 21.5°C (70.7°F) Below sea … Read more Air Pressure at Sea Level Calculator Altitude Unit: Meters / Feet.
Temperature Unit: Celsius / Fahrenheit. Based on the search results, I’ll create a comprehensive table with all the essential information about air pressure at sea level. Here’s what you need to know: Aspect Value Standard sea level pressure 1013.25 hPa (hectopascals) Equivalent units … Read Water Pressure at 13 000 Feet Calculator Depth Unit: Feet / Meters. Water Type: Fresh Water / Salt Water. Based on the search results and the query, I’ll create a comprehensive table with all the essential information about water pressure at 13,000 feet. Here’s what you need to know: Aspect Value Depth 13,000 feet (3,962.4 … Read more Pressure Increase with Temperature Calculator Initial Pressure (P₁ in atm), Initial Temperature (T₁ in °C), Final Temperature (T₂ in °C). Here’s a comprehensive table on Pressure Increase with Temperature, detailing definitions, principles, formulas (in words), and practical considerations. This table explains how temperature affects pressure in a closed system and provides insights … Read more
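Two standard formulas underlie the elevation and depth calculators above: the ISA barometric formula P(h) = P0 · (1 − L·h/T0)^(g·M/(R·L)) and the hydrostatic relation P = ρ·g·h. A sketch, with nominal (assumed) water densities:

```python
def barometric_pressure(h_m, P0=101325.0, T0=288.15, L=0.0065, g=9.80665,
                        M=0.0289644, R=8.31432):
    """ISA pressure (Pa) at elevation h_m within the troposphere."""
    return P0 * (1 - L * h_m / T0) ** (g * M / (R * L))

def water_pressure(depth_m, rho=1000.0, g=9.80665):
    """Gauge hydrostatic pressure (Pa) at a given depth; use rho=1025 for seawater."""
    return rho * g * depth_m

print(barometric_pressure(-1000) / 1000)   # ~113.9 kPa, matching the elevation table above
print(water_pressure(3962.4) / 6894.757)   # ~5636 psi gauge at 13,000 ft in fresh water
```

The barometric formula returns exactly 101.325 kPa at h = 0, and the -1000 m value agrees with the 113.9 kPa entry quoted in the elevation table.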
{"url":"https://calculattor.com/category/calculators/pressure-calculator/","timestamp":"2024-11-08T21:32:26Z","content_type":"text/html","content_length":"166277","record_id":"<urn:uuid:c2b83067-40e4-4c78-aee9-0efbc2f4ea85>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00424.warc.gz"}
This page contains a few Processing sketches I've made. Note that I've made a ton of these, but I only display a select few here. Every sketch on this page was made entirely by me and is unrelated to my study. I make these as a hobby and a way to keep my programming skills sharp.
{"url":"https://www.joepeijkemans.nl/Portfolio/Projects/ProcessingSketches/","timestamp":"2024-11-06T21:45:22Z","content_type":"text/html","content_length":"26217","record_id":"<urn:uuid:2a8ee2a1-4a98-4d44-8fc8-a5bea20f581c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00665.warc.gz"}
The least squares method for determining the best fit minimizes: Least Squares Method Definition & Explanation - Pristine Group

A nonlinear model is defined as an equation that is nonlinear in the coefficients, or a combination of linear and nonlinear in the coefficients. For example, Gaussians, ratios of polynomials, and power functions are all nonlinear. A simple linear regression model used for determining the value of the response variable, ŷ, can be represented by the following equation. Note that the method discussed in this blog can also be applied to a multivariate linear regression model. Here a model is fitted to provide a prediction rule for application in a situation similar to the one the fitting data came from.

• Note that if you supply your own regression weight vector, the final weight is the product of the robust weight and the regression weight.
• Line of best fit: the line of best fit is a mathematical concept that correlates points scattered across a graph.
• The results obtained from extrapolation work could not be interpreted.
• If uncertainties are given for the points, points can be weighted differently in order to give the high-quality points more weight.

The first column of numbers provides estimates for b0 and b1, respectively. A common exercise for becoming more familiar with the foundations of least squares regression is to use basic summary statistics and point-slope form to produce the least squares line. Fitting linear models by eye is open to criticism, since it is based on individual preference. In this section, we use least squares regression as a more rigorous approach. In most cases, the data points do not fall on a straight line, which leaves the possibility of depicting the relationship between the two variables using several different lines. Any candidate line will be closer to some points and farther from others.
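The "basic summary statistics and point-slope form" exercise mentioned above can be sketched in a few lines of Python; names here are illustrative:

```python
# Closed-form simple linear regression: slope b1 is the sum of
# cross-deviations over the sum of squared x-deviations, and the
# intercept b0 places the line through the point of averages.

def least_squares_fit(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Slope from summary statistics
    b1 = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
         / sum((xi - x_bar) ** 2 for xi in x)
    # The fitted line always passes through (x_bar, y_bar)
    b0 = y_bar - b1 * x_bar
    return b0, b1

# Points lying exactly on y = 2x + 1 recover intercept 1 and slope 2
b0, b1 = least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7])
print(b0, b1)  # 1.0 2.0
```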
Normal Equations

The difference between the sums of squares of residuals to the line of best fit is minimal under this method. Thus, one can calculate the least-squares regression equation for the Excel data set, and use the equation to make predictions and trend analyses.

Comparing artificial neural network training algorithms to predict … – BMC Infectious Diseases. Posted: Fri, 09 Dec 2022 08:00:00 GMT [source]

In addition, although the unsquared sum of distances might seem a more appropriate quantity to minimize, use of the absolute value results in discontinuous derivatives which cannot be treated analytically. The square deviations from each point are therefore summed, and the resulting residual is then minimized to find the best fit line. This procedure results in outlying points being given disproportionately large weighting. Regression is a statistical measurement that attempts to determine the strength of the relationship between one dependent variable and a series of other variables.
The least squares method determines the line of best fit for given observed data by minimizing the sum of the squares of the vertical deviations from each data point to the line. The main aim of the least-squares method is to minimize the sum of the squared errors. The best-fit parabola minimizes the sum of the squares of these vertical distances, and the best-fit line does the same.

Robust Least Squares

In the above equation, how do we determine the values of the intercept and slope for our regression line? If we were to draw a line through the data points manually, we would try to draw a line that minimizes the errors overall. The least-squares regression analysis method best suits prediction models and trend analysis. Algorithms for NLLSQ often require that the Jacobian can be calculated, similar to LLSQ. Analytical expressions for the partial derivatives can be complicated. If analytical expressions are impossible to obtain, either the partial derivatives must be calculated by numerical approximation or an estimate must be made of the Jacobian, often via finite differences. Actually, numpy has already implemented the least squares method, so we can just call the function to get a solution. The function returns more than the solution itself; please check the documentation for details. The estimated intercept is the value of the response variable for the first category (i.e. the category corresponding to an indicator value of 0). This method builds the line which minimizes the squared distance of each point from the line of best fit. Line of best fit is one of the most important concepts in regression analysis.
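As the text notes, numpy already implements least squares. A minimal sketch with numpy.linalg.lstsq follows; the column of ones in the design matrix makes the solver return the intercept alongside the slope (the data values are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Design matrix with rows [1, x]: first coefficient is the intercept,
# second is the slope.
A = np.column_stack([np.ones_like(x), x])
coeffs, residuals, rank, sing_vals = np.linalg.lstsq(A, y, rcond=None)
intercept, slope = coeffs
print(intercept, slope)
```

As the text says, the function returns more than the solution: it also hands back the residual sum of squares, the rank of the design matrix, and its singular values.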
Regression refers to a quantitative measure of the relationship between one or more independent variables and a resulting dependent variable. Regression is of use to professionals in a wide range of fields, from science and public service to financial analysis. A least squares regression line best fits a linear relationship between two variables by minimizing the vertical distance between the data points and the regression line. Since it is the minimum value of the sum of squares of errors, it is also known as "variance," and the term "least squares" is also used. Carl Friedrich Gauss claimed to have first discovered the least-squares method in 1795, although the debate over who invented the method remains. Instead of minimizing the effects of outliers by using robust regression, you can mark data points to be excluded from the fit. The normal assumption underlying the error model is the key; any computational advantage of least squares is just a by-product. However, with an approximated normal error structure, for example a t distribution with modest degrees of freedom, using least squares is still recommended.

Data Table in Excel: a data table in Excel is a type of what-if analysis tool that allows you to compare variables and see how they impact the result and overall data. However, we must evaluate whether the residuals in each group are approximately normal and have approximately equal variance. As can be seen in Figure 7.17, both of these conditions are reasonably satisfied by the auction data. She may use it as an estimate, though some qualifiers on this approach are important. First, the data all come from one freshman class, and the way aid is determined by the university may change from year to year. While the linear equation is good at capturing the trend in the data, no individual student's aid will be perfectly predicted.
Under the condition that the errors are uncorrelated with the predictor variables, LLSQ yields unbiased estimates, but even under that condition NLLSQ estimates are generally biased. This scipy function is actually very powerful, in that it can fit not only linear functions but many different functional forms, such as non-linear functions. Note that, using this function, we don't need to turn y into a column vector. To illustrate the concept of least squares, let us take some sample data and use two candidate best-fit equations to find the better-fitting line of the two plotted below.

The Method of Least Squares

Specifically, it is not typically important whether the error term follows a normal distribution. The technique that finds the line of best fit with the least sum of squared residuals, or errors, for a set of observations is known as the least-squares technique. Assume the data points are \((x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots, (x_n, y_n)\), with all \(x\)'s being independent variables and all \(y\)'s being dependent variables. For nonlinear least squares fitting to a number of unknown parameters, linear least squares fitting may be applied iteratively to a linearized form of the function until convergence is achieved. However, it is often also possible to linearize a nonlinear function at the outset and still use linear methods for determining fit parameters without resorting to iterative procedures. This approach does commonly violate the implicit assumption that the distribution of errors is normal, but often still gives acceptable results using normal equations, a pseudoinverse, etc. Depending on the type of fit and initial parameters chosen, the nonlinear fit may have good or poor convergence properties. If uncertainties are given for the points, points can be weighted differently in order to give the high-quality points more weight. As you can see, estimating the coefficients p1 and p2 requires only a few simple calculations.
Extending this example to a higher-degree polynomial is straightforward, although a bit tedious. All that is required is an additional normal equation for each linear term added to the model. This post is about the ordinary least squares method for simple linear regression. If you are new to linear regression, read this article to get a clear idea of the implementation of simple linear regression. This post will help you understand how simple linear regression works step by step.

Limitations of the Method of Least Squares

Refer to Specify Fit Options and Optimized Starting Points for a description of how to modify the default options. Because nonlinear models can be particularly sensitive to the starting points, this should be the first fit option you modify. For some nonlinear models, a heuristic approach is provided that produces reasonable starting values. For other models, random values on the interval are provided. A constant variance in the data implies that the "spread" of errors is constant.

Sharing the effort of the European Green Deal among countries – Nature.com. Posted: Mon, 27 Jun 2022 07:00:00 GMT [source]

The performance rating for a technician with 20 years of experience is estimated to be 92.3. The details about technicians' experience in a company and their performance ratings are in the table below. Using these values, estimate the performance rating for a technician with 20 years of experience. A line of best fit is a straight line drawn through a scatter of data points that best represents the relationship between them. Line of best fit: the line of best fit is a mathematical concept that correlates points scattered across a graph. Because of this, finding the least squares solution using Normal Equations is often not a good choice. The deviations between the actual and predicted values are called errors, or residuals.
For financial analysts, the method of estimating a line of best fit can help to quantify the relationship between two or more variables, such as a stock's share price and its earnings per share. By performing this type of analysis, investors often try to predict the future behavior of stock prices or other factors by extrapolating that line out in time. By definition a line is always straight, so a best fit line is linear. However, a curve may also be used to describe the best fit in a set of data. Indeed, a best fit curve may be squared, cubic, quadratic, logarithmic, a square root (√), or anything else that can be described mathematically with an equation. The coefficients and summary output values explain the dependence of the variables being evaluated. It shows that the simple linear regression equation of Y on X has the slope b̂, and the corresponding straight line passes through the point of averages (x̄, ȳ). The above representation of a straight line is popularly known in the field of coordinate geometry as the 'slope-point form'. The above form can be applied in fitting the regression equation for a given regression coefficient b̂ and the averages x̄ and ȳ.
{"url":"https://pristinegroups.in/the-least-squares-method-for-determining-the-best/","timestamp":"2024-11-03T19:32:28Z","content_type":"text/html","content_length":"50416","record_id":"<urn:uuid:b792066d-8aeb-47f7-8db5-e7ee4bc55f0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00525.warc.gz"}
Han Pi's semantic segmentation remake, part 8 -- building your own DeeplabV3+ semantic segmentation platform with Keras

Notes

This is the refactored DeeplabV3+ semantic segmentation network; the main changes are the file layout and the code implementation. Compared with the previous semantic segmentation networks, it is more complete and clearer, and it is recommended to study this version of DeeplabV3+.

Preface

DeeplabV3+ also needs to be refactored!

What is the DeeplabV3+ model?

DeeplabV3+ is considered a new peak of semantic segmentation because the model performs very well. DeeplabV3+ mainly focuses on the architecture of the model: it lets the encoder extract features at an arbitrarily controllable resolution, and it balances accuracy against runtime through atrous (dilated) convolution. DeeplabV3+ introduces a large number of atrous convolutions in the Encoder, which enlarge the receptive field without losing information, so that each convolution output covers a larger range of the input. The following is a schematic diagram of atrous convolution; the so-called "holes" mean that the feature points skip over pixels as they are extracted.

Code download

The Github source code download address is: Copy the path into the address bar to jump to it.

DeeplabV3+ implementation ideas

1. Prediction part

1. Introduction to the backbone network

DeeplabV3+ uses the Xception series as the backbone feature extraction network in the paper. This blog provides two backbone networks, Xception and MobilenetV2. However, limited by computing power (I don't have any good cards), and to keep the post manageable, this article takes MobilenetV2 as the example for analysis. The MobileNet model is a lightweight deep neural network proposed by Google for embedded devices such as mobile phones. MobilenetV2 is an upgraded version of MobileNet; a very important feature is its use of the Inverted ResBlock.
The whole MobilenetV2 is composed of Inverted ResBlocks. An Inverted ResBlock can be divided into two parts: on the left is the main branch, which first uses a 1x1 convolution to raise the dimension, then a 3x3 depthwise separable convolution for feature extraction, and finally a 1x1 convolution to reduce the dimension. On the right is the residual edge, where the input is connected directly to the output. It should be noted that in DeeplabV3 there are generally not 5 downsamplings; 3 or 4 downsamplings are available, and this article uses 4. The downsampling mentioned here means that the height and width are not compressed 5 times; 3 or 4 length-and-width compressions are usually chosen. After MobilenetV2 finishes feature extraction, we obtain two effective feature layers: one is the result of compressing the height and width of the input picture twice, and the other is the result of compressing the height and width of the input picture four times.
from keras import layers
from keras.activations import relu
from keras.layers import (Activation, Add, BatchNormalization, Concatenate,
                          Conv2D, DepthwiseConv2D, Dropout,
                          GlobalAveragePooling2D, Input, Lambda, ZeroPadding2D)
from keras.models import Model

def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that rounding down does not drop below 90% of v
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

def relu6(x):
    return relu(x, max_value=6)

def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id,
                        skip_connection, rate=1):
    in_channels = inputs.shape[-1].value  # inputs._keras_shape[-1]
    pointwise_conv_filters = int(filters * alpha)
    pointwise_filters = _make_divisible(pointwise_conv_filters, 8)
    x = inputs
    prefix = 'expanded_conv_{}_'.format(block_id)
    if block_id:
        # Expand
        x = Conv2D(expansion * in_channels, kernel_size=1, padding='same',
                   use_bias=False, activation=None,
                   name=prefix + 'expand')(x)
        x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                               name=prefix + 'expand_BN')(x)
        x = Activation(relu6, name=prefix + 'expand_relu')(x)
    else:
        prefix = 'expanded_conv_'
    # Depthwise
    x = DepthwiseConv2D(kernel_size=3, strides=stride, activation=None,
                        use_bias=False, padding='same',
                        dilation_rate=(rate, rate),
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                           name=prefix + 'depthwise_BN')(x)
    x = Activation(relu6, name=prefix + 'depthwise_relu')(x)
    # Project
    x = Conv2D(pointwise_filters,
               kernel_size=1, padding='same', use_bias=False, activation=None,
               name=prefix + 'project')(x)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                           name=prefix + 'project_BN')(x)

    if skip_connection:
        return Add(name=prefix + 'add')([inputs, x])
    # if in_channels == pointwise_filters and stride == 1:
    #     return Add(name='res_connect_' + str(block_id))([inputs, x])
    return x

def mobilenetV2(inputs, alpha=1, downsample_factor=8):
    if downsample_factor == 8:
        block4_dilation = 2
        block5_dilation = 4
        block4_stride = 1
        atrous_rates = (12, 24, 36)
    elif downsample_factor == 16:
        block4_dilation = 1
        block5_dilation = 2
        block4_stride = 2
        atrous_rates = (6, 12, 18)
    else:
        raise ValueError('Unsupported factor - `{}`, Use 8 or 16.'.format(downsample_factor))

    first_block_filters = _make_divisible(32 * alpha, 8)
    # 512,512,3 -> 256,256,32
    x = Conv2D(first_block_filters,
               kernel_size=3,
               strides=(2, 2), padding='same',
               use_bias=False, name='Conv')(inputs)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999, name='Conv_BN')(x)
    x = Activation(relu6, name='Conv_Relu6')(x)

    x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1,
                            expansion=1, block_id=0, skip_connection=False)

    # 256,256,16 -> 128,128,24
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2,
                            expansion=6, block_id=1, skip_connection=False)
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1,
                            expansion=6, block_id=2, skip_connection=True)
    skip1 = x

    # 128,128,24 -> 64,64,32
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2,
                            expansion=6, block_id=3, skip_connection=False)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=4, skip_connection=True)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=5, skip_connection=True)

    # 64,64,32 -> 32,32,64
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=block4_stride,
                            expansion=6, block_id=6, skip_connection=False)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=7, skip_connection=True)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=8, skip_connection=True)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=9, skip_connection=True)

    # 32,32,64 -> 32,32,96
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=10, skip_connection=False)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=11, skip_connection=True)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=12, skip_connection=True)

    # 32,32,96 -> 32,32,160 -> 32,32,320
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block4_dilation,  # 1!
                            expansion=6, block_id=13, skip_connection=False)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=14, skip_connection=True)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=15, skip_connection=True)
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=16, skip_connection=False)
    return x, atrous_rates, skip1

2. Strengthening the feature extraction structure

In DeeplabV3+, the enhanced feature extraction network can be divided into two parts:

In the Encoder, parallel atrous convolutions are applied to the preliminary effective feature layer that was compressed four times: features are extracted with atrous convolutions of different rates, the results are merged, and a 1x1 convolution compresses the merged features.

In the Decoder, a 1x1 convolution adjusts the number of channels of the preliminary effective feature layer that was compressed twice, which is then stacked with the upsampled result of the atrous-convolution branch. After stacking, two depthwise separable convolution blocks are applied. At this point we obtain a final effective feature layer, which is the feature condensation of the whole picture.
def Deeplabv3(n_classes, inputs_size, alpha=1., backbone="mobilenet", downsample_factor=16):
    img_input = Input(shape=inputs_size)

    if backbone == "xception":
        x, atrous_rates, skip1 = Xception(img_input, alpha, downsample_factor=downsample_factor)
    elif backbone == "mobilenet":
        x, atrous_rates, skip1 = mobilenetV2(img_input, alpha, downsample_factor=downsample_factor)
    else:
        raise ValueError('Unsupported backbone - `{}`, Use mobilenet, xception.'.format(backbone))

    size_before = tf.keras.backend.int_shape(x)

    # Adjust channels
    b0 = Conv2D(256, (1, 1), padding='same', use_bias=False, name='aspp0')(x)
    b0 = BatchNormalization(name='aspp0_BN', epsilon=1e-5)(b0)
    b0 = Activation('relu', name='aspp0_activation')(b0)

    # rate = 6 (12)
    b1 = SepConv_BN(x, 256, 'aspp1', rate=atrous_rates[0], depth_activation=True, epsilon=1e-5)
    # rate = 12 (24)
    b2 = SepConv_BN(x, 256, 'aspp2', rate=atrous_rates[1], depth_activation=True, epsilon=1e-5)
    # rate = 18 (36)
    b3 = SepConv_BN(x, 256, 'aspp3', rate=atrous_rates[2], depth_activation=True, epsilon=1e-5)

    # Global average pooling, then expand_dims twice to restore the two
    # spatial axes, followed by a 1x1 convolution
    # shape = 320
    b4 = GlobalAveragePooling2D()(x)
    b4 = Lambda(lambda x: K.expand_dims(x, 1))(b4)
    b4 = Lambda(lambda x: K.expand_dims(x, 1))(b4)
    # Compress filters
    b4 = Conv2D(256, (1, 1), padding='same', use_bias=False, name='image_pooling')(b4)
    b4 = BatchNormalization(name='image_pooling_BN', epsilon=1e-5)(b4)
    b4 = Activation('relu')(b4)
    # Use resize_images directly to expand height and width
    b4 = Lambda(lambda x: tf.image.resize_images(x, size_before[1:3], align_corners=True))(b4)

    x = Concatenate()([b4, b0, b1, b2, b3])
    # Compress with a 1x1 convolution
    # 32,32,256
    x = Conv2D(256, (1, 1), padding='same', use_bias=False, name='concat_projection')(x)
    x = BatchNormalization(name='concat_projection_BN', epsilon=1e-5)(x)
    x = Activation('relu')(x)
    x = Dropout(0.1)(x)

    # x4 (x2) block
    skip_size = tf.keras.backend.int_shape(skip1)
    x = Lambda(lambda xx: tf.image.resize_images(xx, skip_size[1:3], align_corners=True))(x)
    dec_skip1 = Conv2D(48, (1, 1), padding='same', use_bias=False,
                       name='feature_projection0')(skip1)
    dec_skip1 = BatchNormalization(name='feature_projection0_BN', epsilon=1e-5)(dec_skip1)
    dec_skip1 = Activation(tf.nn.relu)(dec_skip1)

    x = Concatenate()([x, dec_skip1])
    x = SepConv_BN(x, 256, 'decoder_conv0', depth_activation=True, epsilon=1e-5)
    x = SepConv_BN(x, 256, 'decoder_conv1', depth_activation=True, epsilon=1e-5)

    # A 1x1 convolution adjusts the channels to n_classes, then resize
    # upsamples back to the input size (the two steps described in section 3)
    size_before3 = tf.keras.backend.int_shape(img_input)
    x = Conv2D(n_classes, (1, 1), padding='same')(x)
    x = Lambda(lambda xx: tf.image.resize_images(xx, size_before3[1:3], align_corners=True))(x)
    x = Activation("softmax", name="main")(x)

    model = Model(img_input, x, name='deeplabv3plus')
    return model

3. Using features to obtain prediction results

Using steps 1 and 2, we can obtain the features of the input picture. Now we need to use those features to obtain the prediction results. This process can be divided into two steps:
1. Use a 1x1 convolution to adjust the number of channels to num_classes.
2. Use resize for upsampling, so that the width and height of the final output layer are the same as those of the input picture.

from keras.models import *
from keras.layers import *
from nets.mobilenetv2 import get_mobilenet_encoder
from nets.resnet50 import get_resnet50_encoder
import tensorflow as tf

IMAGE_ORDERING = 'channels_last'
MERGE_AXIS = -1

def resize_image(inp, s, data_format):
    return Lambda(lambda x: tf.image.resize_images(x, (K.int_shape(x)[1] * s[0],
                                                       K.int_shape(x)[2] * s[1])))(inp)

def pool_block(feats, pool_factor, out_channel):
    h = K.int_shape(feats)[1]
    w = K.int_shape(feats)[2]
    # strides = [30,30], [15,15], [10,10], [5,5]
    pool_size = strides = [int(np.round(float(h) / pool_factor)),
                           int(np.round(float(w) / pool_factor))]
    # Average pooling at different scales
    x = AveragePooling2D(pool_size, data_format=IMAGE_ORDERING, strides=strides,
                         padding='same')(feats)
    # Convolution
    x = Conv2D(out_channel // 4, (1, 1), data_format=IMAGE_ORDERING,
               padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Lambda(lambda x: tf.image.resize_images(x, (K.int_shape(feats)[1],
                                                    K.int_shape(feats)[2]),
                                                align_corners=True))(x)
    return x

def pspnet(n_classes, inputs_size, downsample_factor=8, backbone='mobilenet', aux_branch=True):
    if backbone == "mobilenet":
        img_input, f4, o = get_mobilenet_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 320
    elif backbone == "resnet50":
        img_input, f4, o = get_resnet50_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 2048
    else:
        raise ValueError('Unsupported backbone - `{}`, Use mobilenet, resnet50.'.format(backbone))

    # Pool to varying degrees
    pool_factors = [1, 2, 3, 6]
    pool_outs = [o]
    for p in pool_factors:
        pooled = pool_block(o, p, out_channel)
        pool_outs.append(pooled)

    # Concatenate
    # 60x60x
    o = Concatenate(axis=MERGE_AXIS)(pool_outs)

    # Convolution
    # 60x60x512
    o = Conv2D(out_channel // 4, (3, 3), data_format=IMAGE_ORDERING,
               padding='same', use_bias=False)(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    o = Dropout(0.1)(o)

    # 60x60x21
    o = Conv2D(n_classes, (1, 1), data_format=IMAGE_ORDERING, padding='same')(o)
    # [473,473,nclasses]
    o = Lambda(lambda x: tf.image.resize_images(x, (inputs_size[1], inputs_size[0]),
                                                align_corners=True))(o)
    o = Activation("softmax", name="main")(o)

    if aux_branch:
        f4 = Conv2D(out_channel // 8, (3, 3), data_format=IMAGE_ORDERING,
                    padding='same', use_bias=False)(f4)
        f4 = BatchNormalization()(f4)
        f4 = Activation('relu')(f4)
        f4 = Dropout(0.1)(f4)
        # 60x60x21
        f4 = Conv2D(n_classes, (1, 1), data_format=IMAGE_ORDERING, padding='same')(f4)
        # [473,473,nclasses]
        f4 = Lambda(lambda x: tf.image.resize_images(x, (inputs_size[1], inputs_size[0]),
                                                     align_corners=True))(f4)
        f4 = Activation("softmax", name="aux")(f4)
        model = Model(img_input, [f4, o])
        return model
    else:
        model = Model(img_input, [o])
        return model

2. Training part

1. Detailed explanation of the training files

The training files we use are in VOC format. The training data of a semantic segmentation model is divided into two parts. The first part is the original image, like this: The second part is the label, like this: The original image is an ordinary RGB image, and the label is a grayscale image or an 8-bit color image.
The shape of the original image is [height, width, 3], and the shape of the label is [height, width]. For the label, the content of each pixel is a number, such as 0, 1, 2, 3, 4, 5..., representing the category to which that pixel belongs. The work of semantic segmentation is to classify each pixel of the original picture, so the network can be trained by comparing the per-pixel class probabilities in the prediction result with the label.

2. LOSS analysis

The LOSS used in this article consists of two parts:
1. Cross Entropy Loss.
2. Dice Loss.

Cross Entropy Loss is the common cross-entropy loss, used when the semantic segmentation platform classifies pixels with Softmax. Dice Loss takes an evaluation metric of semantic segmentation as the loss. The Dice coefficient is a set-similarity measure, usually used to compute the similarity between two samples, with values in the range [0, 1]. The calculation formula is:

Dice = 2 |X ∩ Y| / (|X| + |Y|)

that is, twice the intersection of the predicted result and the real result, divided by the sum of the predicted result and the real result. Its value is between 0 and 1; the larger the value, the greater the overlap between the predicted and real results, so the larger the Dice coefficient, the better. Used as a loss, smaller should be better, so with Dice loss = 1 - Dice, it can serve as the loss for semantic segmentation.
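As a quick sanity check of the Dice formula above, here is a small numpy version for binary masks; the helper is illustrative and separate from the Keras training loss:

```python
import numpy as np

def dice_coefficient(pred, target, smooth=1e-5):
    """Dice = 2|X ∩ Y| / (|X| + |Y|) for binary masks; 1.0 on perfect overlap."""
    inter = np.sum(pred * target)
    return (2.0 * inter + smooth) / (np.sum(pred) + np.sum(target) + smooth)

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 0, 0])
print(dice_coefficient(a, b))  # ~0.6667, i.e. 2*1 / (2 + 1)
print(dice_coefficient(a, a))  # ~1.0, perfect overlap
```

The small `smooth` term keeps the ratio defined when both masks are empty, just like the epsilon in the training loss below.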
The implementation code is as follows:

def dice_loss_with_CE(beta=1, smooth=1e-5):
    def _dice_loss_with_CE(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())

        CE_loss = - y_true[..., :-1] * K.log(y_pred)
        CE_loss = K.mean(K.sum(CE_loss, axis=-1))

        tp = K.sum(y_true[..., :-1] * y_pred, axis=[0, 1, 2])
        fp = K.sum(y_pred, axis=[0, 1, 2]) - tp
        fn = K.sum(y_true[..., :-1], axis=[0, 1, 2]) - tp

        score = ((1 + beta ** 2) * tp + smooth) / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth)
        score = tf.reduce_mean(score)
        dice_loss = 1 - score
        # dice_loss = tf.Print(dice_loss, [dice_loss, CE_loss])
        return CE_loss + dice_loss
    return _dice_loss_with_CE

Train your own DeeplabV3+ model

The file architecture of the whole DeeplabV3+ project is:

Before training the model, we need to prepare the dataset. You can download the VOC dataset I uploaded, or make a dataset following the VOC dataset format. If you download the VOC dataset I uploaded, you don't need to run voc2deeplab.py under the VOCdevkit folder. If you make the dataset yourself, you need to run voc2deeplab.py under the VOCdevkit folder to generate train.txt and val.txt. When the generation is complete, select the trunk model and downsampling factor you want to use in train.py. The backbone models provided in this article are mobilenet and xception. The downsampling factor can be 8 or 16. Note that the pre-training model must correspond to the trunk model. Then you can start training.
{"url":"https://programming.vip/docs/61ec01e749f87.html","timestamp":"2024-11-12T00:13:39Z","content_type":"text/html","content_length":"27703","record_id":"<urn:uuid:987fdd73-ea54-4da8-a3e8-6346f5191fb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00486.warc.gz"}
A Picture 10 Feet Long Is To Be Centered On A Wall That Is 14 Feet Long. How Much Space Is There From

2 ft. Step-by-step explanation: 14 - 10 = 4, and 4/2 = 2. You divide because there is space on both sides of the painting.

4 feet is left on the wall. Step-by-step explanation: If something is 1 foot and it's supposed to be 2 feet, how much space is left? 1 foot; therefore you have to subtract what you want from what you have.

The correct option is (c) 8980 feet. What is length? The measurement or size of something from end to end is referred to as its length. To put it another way, it is the greater of the two or three dimensions of a geometric shape or object. For example, a rectangle has length and breadth as its dimensions. Two hours is an example of length for a movie; 12 inches is an illustration of length. Given, length of the bridge = 8983.33 feet. The scale is 1 unit = 19,600. The height of the postcard is 3.5 inches, so the original height of the bridge = 3.5 × 19,600 = 68,600 inches = 5716.67 feet. The width of the postcard is 5.5 inches, so the original width of the bridge = 5.5 × 19,600 = 107,800 inches = 8983.33 feet. Therefore, the original width of the bridge is 8983.33 feet. Learn more about length: brainly.com/question/15206959
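The first answer's arithmetic can be sketched directly; the leftover wall space splits evenly on either side of the centered picture:

```python
def side_margin(wall_len, item_len):
    """Space left on each side when an item is centered on a wall."""
    return (wall_len - item_len) / 2

print(side_margin(14, 10))  # 2.0 feet on each side
```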
What is the Purpose of Asymptotic Analysis?

Since its inception, Massive MIMO has been strongly connected with asymptotic analysis. Marzetta's seminal paper featured an unlimited number of base station antennas. Many of the succeeding papers considered a finite number of antennas.

Have you reflected on what the purpose of asymptotic analysis is? The goal is not that we should design and deploy wireless networks with a nearly infinite number of antennas. Firstly, it is physically impossible to do that in a finite-sized world, irrespective of whether you let the array aperture grow or pack the antennas more densely. Secondly, the conventional channel models break down, since you will eventually receive more power than you transmitted. Thirdly, the technology will be neither cost nor energy efficient, since the cost/energy grows linearly with the number of antennas.

It is important not to overemphasize the implications of asymptotic results. Consider the popular power-scaling law, which says that one can use the array gain of Massive MIMO to reduce the transmit power as the number of antennas grows. The figure below shows the transmit power in a scenario where we start with 1 W for a single-antenna transmitter and then follow the asymptotic power-scaling law as the number of antennas increases. Reducing the transmit power per antenna to 1 mW, or smaller, makes little practical sense, since the transceiver chain will consume much more power irrespective of the transmit power.

Similarly, there is a hardware-scaling law which says that one can increase the error vector magnitude (EVM) proportionally (see this paper and references therein). Even the importance of the coherent interference, caused by pilot contamination, is easily overemphasized if one only considers the asymptotic behavior. For example, the finite rate limit that appears when communicating over i.i.d. Rayleigh fading channels with maximum ratio or zero-forcing processing is only approached in practice if one has around one million antennas.
In my opinion, the purpose of asymptotic analysis is not to understand the asymptotic behaviors themselves, but what the asymptotics can tell us about the performance at practical numbers of antennas. Here are some usages that I think are particularly sound:

• Determine the asymptotically optimal transmission scheme and then evaluate how it performs in a practical system.

• Derive large-scale approximations of the rates that are reasonably tight also at practical numbers of antennas. One can use these approximations to determine which factors have a dominant impact on the rate, or to get a tractable way to optimize system performance (e.g., by transmit power allocation).

• Determine how far a practical system is from the asymptotically achievable performance.

• Determine if we can deliver any given user rates by simply deploying enough antennas, or if the system is fundamentally interference limited.

• Simplify the signal processing by utilizing properties such as channel hardening and favorable propagation. These phenomena can be observed already at 100 antennas, although you will never get a fully deterministic channel or zero inter-user interference in practice.

Some form of Massive MIMO will appear in 5G, but to get a well-designed system we need to focus more on demonstrating and optimizing the performance in practical scenarios (e.g., the key 5G use cases) and less on pure asymptotic analysis.

3 thoughts on "What is the Purpose of Asymptotic Analysis?"

1. How can I reproduce this graph? I am trying to determine the power consumption model, especially for the total power model.

1. If you are looking for a detailed power consumption model, I can recommend my paper "Optimal Design of Energy-Efficient Multi-User MIMO Systems: Is Massive MIMO the Answer?". The figure used in this post was not based on any fancy power consumption model; I just took the 1 W of power and scaled it down according to the asymptotic power-scaling law.
Here is the code, which you should be able to run directly in Matlab or Octave:

```matlab
M = 1:200;                          % Number of antennas
initialPower = 1;                   % 1 Watt

totalPower = initialPower./sqrt(M); % Total power with power-scaling law
perAntennaPower = totalPower./M;    % Power per-antenna with power scaling law

% Plot call restored here; it was missing from the scraped text
semilogy(M, totalPower, M, perAntennaPower);
hold on; box on;
xlabel('Number of antennas (M)');
ylabel('Power [W]');
legend('Total power','Per-antenna power','Location','NorthEast');
```

1. Thanks, initially I included the consumption of the phase shifters and RF chains (neglected the impact of the base station), since there is a trade-off between array gain and circuit power. Thanks for the recommended paper.
Money Archives | Brighterly

Dollar: Definition, Examples, and Practice Math Problems

Money is a core asset in the conduct of our daily affairs. This article covers the concept of dollars in math, examples, and practice questions.

What Is a Dollar? A dollar is a kind of money used in some countries such as the United States and its territories, Canada, Singapore, Jamaica, Namibia, etc., for the […]
[Solved] Newton's first law of motion gives the concept of

Newton's first law of motion gives the concept of

Answer (Detailed Solution Below) Option 3: Inertia

Newton's First Law:
• A body continues to be in its state of rest or of uniform motion along a straight line unless it is acted upon by some external force to change that state.
• If no net force acts on a body, then the velocity of the body cannot change, i.e. the body cannot accelerate.
• Newton's first law defines inertia and is rightly called the law of inertia.

Newton's Second Law:
• The rate of change of linear momentum of a body is directly proportional to the external force applied to the body, and this change always takes place in the direction of the applied force.
• If a body of mass m moves with velocity \(\vec v\), then its linear momentum is \(\vec p = m\vec v\). If a force \(\vec F\) is applied to the body, then \(\vec F \propto \frac{d\vec p}{dt}\), which for constant mass gives
\(\therefore \vec F = m\vec a\)
where F = force, m = mass and a = acceleration.

Newton's Third Law:
• To every action, there is always an equal (in magnitude) and opposite (in direction) reaction.
• When a body exerts a force on any other body, the second body also exerts an equal and opposite force on the first.
• Forces in nature always occur in pairs. A single isolated force is not possible.

From the above, it is clear that Newton's second law of motion explains the measure of a force and establishes the fundamental equation of dynamics.

Latest Gujarat Metro Maintainer Updates

The Gujarat Metro has released the notification for the post of Maintainer. A total of 151 vacancies are released for the Gujarat Metro Maintainer Recruitment 2023. Candidates can apply from 10th May 2023 to 9th June 2023. The selection of candidates is based on performance in the Written Test and Gujarati Language Test.
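The claim that the second law reduces to F = ma for constant mass can be checked numerically; the mass, velocities, and time interval below are illustrative values, not from the question:

```python
# Rate of change of linear momentum vs. m*a for a constant mass
m = 2.0                       # mass in kg (illustrative)
v0, v1, dt = 3.0, 7.0, 2.0    # initial/final velocity (m/s) over dt seconds

a = (v1 - v0) / dt            # acceleration: 2.0 m/s^2
F = m * a                     # force from F = m*a: 4.0 N
dp_dt = (m * v1 - m * v0) / dt  # rate of change of momentum: 4.0 N

print(F == dp_dt)  # True: the two expressions agree when mass is constant
```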
The candidates can check Gujarat Metro Maintainer Previous Year Papers which helps to understand the difficulty level of the exam.
Temporal Difference Control in Reinforcement Learning - DataJello.com

Temporal Difference learning is one of the most important ideas in Reinforcement Learning. Here we go over the control aspect of TD to find an optimal policy. We apply the Generalized Policy Iteration (GPI) framework with three types of control:

• Sarsa: On-policy TD Control
• Q-learning: Off-policy TD Control
• Expected Sarsa: Off-policy TD Control with expected return over any policy

Sarsa: On-policy TD Control

Sarsa stands for "State, action, reward, state, action" because we are trying to estimate the state-action function Q(s, a) over state-action pairs.

• This is very similar to the state-value function update during TD prediction.
• Note that a policy pi is implicit here although it's not labeled in the update formula.
• The update is done on every transition from a nonterminal state s.
• To apply GPI, we use an epsilon-greedy or epsilon-soft policy to optimize the policy, while improving the estimate of Q(s, a) simultaneously.

Example: Windy Gridworld

Applying Sarsa to solve this problem of getting from S to G in a grid with an upward wind.

• The plot shows the total number of time steps vs the total number of episodes.
• Note that the slope of the curve is very flat at first. It means it takes a long time to complete an episode initially.
• Toward the end, the curve converges to a straight line where each episode runs for 15 steps, the optimal solution.
• The Monte Carlo method is not a good fit for this problem because many policies will not find the solution, and thus an episode would run forever. It won't learn anything until an episode is complete.

Q-learning: Off-policy TD Control

The Q-learning algorithm is an online learning algorithm. It is very similar to Sarsa except that Q-learning uses a max instead of the next state-action pair. It's analogous to the Bellman equation for DP.

Comparison with Sarsa

• Sarsa is a sample-based algorithm that solves the standard Bellman equation for the state-action function.
□ It also depends on a policy which guides the generation of the trajectories.
• Q-learning is a sample-based algorithm that solves the Bellman optimality equation to learn Q(s, a).
□ No need to switch between evaluation and policy improvement.
□ It directly learns Q*.

Why is Q-learning off-policy?

• Q-learning bootstraps off the largest action value in its next state. This is like sampling an action under an estimate of the optimal policy rather than the behavior policy.
□ Q-learning's target policy is always greedy with respect to its current values.
□ Its behavior policy can be anything, such as epsilon-greedy.
• Q-learning does not need to use importance sampling.
□ Because Q-learning learns the optimal action-value function directly.
□ Since the target policy is greedy, the expected return from the state is equal to the maximal action value from that state (all other probabilities are zero).
• Subtleties of directly learning Q()
□ Although the optimal policy is learned, the exploratory nature of the epsilon-greedy behavior policy still causes the agent to take suboptimal moves.
□ So Sarsa has better online performance because it accounts for its own exploration (it learns a safer path, although not the optimal path).

Expected Sarsa

Expected Sarsa explicitly computes the expectation over the next actions under the policy's probabilities.

• The expected value update is much more stable than Sarsa (lower variance).
• There is more computation cost with more actions.

Expected Sarsa can be generalized to off-policy learning:

• The policy used can be a different target policy (this target policy can be any policy).
• If it's a greedy target policy, then it's the same as Q-learning. So Q-learning is a special case of Expected Sarsa.

Expected Sarsa is insensitive to the step size, unlike Sarsa.

Reference: most of the material of this post comes from the book Reinforcement Learning

2 thoughts on "Temporal Difference Control in Reinforcement Learning"
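The three update rules discussed above differ only in their bootstrap target. A minimal tabular sketch (the state/action names, step size, and discount below are illustrative, not from the post):

```python
from collections import defaultdict

alpha, gamma = 0.5, 0.9   # step size and discount factor (illustrative)
Q = defaultdict(float)    # tabular action values, default 0

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually taken next.
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def q_learning_update(s, a, r, s_next, actions):
    # Off-policy: bootstrap from the greedy (max) action value.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def expected_sarsa_update(s, a, r, s_next, actions, eps=0.1):
    # Bootstrap from the expectation under an epsilon-greedy policy.
    qs = [Q[(s_next, b)] for b in actions]
    best = qs.index(max(qs))
    probs = [eps / len(actions) + (1 - eps) * (i == best)
             for i in range(len(actions))]
    target = r + gamma * sum(p * q for p, q in zip(probs, qs))
    Q[(s, a)] += alpha * (target - Q[(s, a)])

q_learning_update("s0", "up", 1.0, "s1", ["up", "down"])
print(Q[("s0", "up")])  # → 0.5
```

With eps=0, the expected-Sarsa target collapses to the max, which is exactly the Q-learning special case mentioned above.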
A Practical Guide to Quantum Machine Learning and Quantum Optimization: Hands-on Approach to Modern Quantum Algorithms » downTURK - Download Fresh Hidden Object Games

English | 2023 | ISBN: 1804613835 | 766 pages | True EPUB | 12.81 MB

Work with fully explained algorithms and ready-to-use examples that can be run on quantum simulators and actual quantum computers with this comprehensive guide.

Key Features
Get a solid grasp of the principles behind quantum algorithms and optimization with minimal mathematical prerequisites
Learn the process of implementing the algorithms on simulators and actual quantum computers
Solve real-world problems using practical examples of methods

Book Description
This book provides deep coverage of modern quantum algorithms that can be used to solve real-world problems. You'll be introduced to quantum computing using a hands-on approach with minimal mathematical prerequisites.
You'll discover many algorithms, tools, and methods to model optimization problems with the QUBO and Ising formalisms, and you will find out how to solve optimization problems with quantum annealing, QAOA, Grover Adaptive Search (GAS), and VQE. This book also shows you how to train quantum machine learning models, such as quantum support vector machines, quantum neural networks, and quantum generative adversarial networks.
The book takes a straightforward path to help you learn about quantum algorithms, illustrating them with code that's ready to be run on quantum simulators and actual quantum computers. You'll also learn how to utilize programming frameworks such as IBM's Qiskit, Xanadu's PennyLane, and D-Wave's Leap.
Through reading this book, you will not only build a solid foundation of the fundamentals of quantum computing, but you will also become familiar with a wide variety of modern quantum algorithms. Moreover, this book will give you the programming skills that will enable you to start applying quantum methods to solve practical problems right away.
What you will learn
Review the basics of quantum computing
Gain a solid understanding of modern quantum algorithms
Understand how to formulate optimization problems with QUBO
Solve optimization problems with quantum annealing, QAOA, GAS, and VQE
Find out how to create quantum machine learning models
Explore how quantum support vector machines and quantum neural networks work using Qiskit and PennyLane
Discover how to implement hybrid architectures using Qiskit and PennyLane and its PyTorch interface

Who this book is for
This book is for professionals from a wide variety of backgrounds, including computer scientists and programmers, engineers, physicists, chemists, and mathematicians. Basic knowledge of linear algebra and some programming skills (for instance, in Python) are assumed, although all mathematical prerequisites will be covered in the appendices.
Letters in Numbers

But... prime numbers all have 2 factors, one and themselves, so one is not a prime number as it only has one factor. Also, could you move the extension question to below the original problem please? It's not any help below the answers as you don't see it until everybody's finished.

[Transum: Thanks for the suggestion Mr Taylor, the extension has been moved up. Think again about one. It has three letters]

Another Extension

In the game Scrabble, which number when played as a word is the same as its score? Here is a list of the points available for the letter tiles:

(1 point) - A, E, I, O, U, L, N, S, T, R
(2 points) - D, G
(3 points) - B, C, M, P
(4 points) - F, H, V, W, Y
(5 points) - K
(8 points) - J, X
(10 points) - Q, Z

Curriculum Reference
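The Scrabble extension can be brute-forced from the tile values listed above (the number-word spellings are standard English; note this reveals the answer, so try the puzzle first):

```python
# Standard Scrabble tile values, as listed above
POINTS = {}
for letters, pts in [("AEILNORSTU", 1), ("DG", 2), ("BCMP", 3),
                     ("FHVWY", 4), ("K", 5), ("JX", 8), ("QZ", 10)]:
    for ch in letters:
        POINTS[ch] = pts

def scrabble_score(word):
    return sum(POINTS[ch] for ch in word.upper())

NUMBER_WORDS = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
                6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
                11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
                15: "fifteen", 16: "sixteen", 17: "seventeen",
                18: "eighteen", 19: "nineteen", 20: "twenty"}

# Numbers whose word scores the number itself
matches = [n for n, w in NUMBER_WORDS.items() if scrabble_score(w) == n]
print(matches)  # → [12]
```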
Terms related to Probability

In this blog, we will discuss probability and different terms related to probability. But before learning about those terms, we should first understand the meaning of probability with the help of an example.

Probability can be defined as the measure of the certainty of an event. We all know that for some events, we cannot determine in advance whether they will occur or not. For example, if you toss a coin, you don't know which side of the coin will come up. Probability measures the certainty of an event on a scale from 0 to 1: 0 means the event will not occur, and 1 means the event will certainly occur. When tossing a single coin, we can obtain either Head OR Tail; there are only two potential outcomes (H, T). Tossing two coins into the air, on the other hand, gives three possibilities: both coins show heads, both show tails, or one shows heads and one shows tails, i.e. (H, H), (H, T), (T, T).

Formula of Probability

According to the probability formula, the possibility of an event occurring is equal to the ratio of the number of favorable outcomes to the total number of outcomes:

P(E) = Number of favorable outcomes / Total number of outcomes

For example, consider a box containing three black socks, two red socks, and three yellow socks. What is the probability of getting a yellow sock from the box? The probability of selecting a yellow sock is the number of yellow socks divided by the total number of socks in the box, which is 3/8.
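The sock example follows directly from the formula; note the box holds 3 + 2 + 3 = 8 socks in total. A minimal sketch using Python's standard library, with `Fraction` keeping the ratio exact:

```python
from fractions import Fraction

def probability(favorable, total):
    # P(E) = number of favorable outcomes / total number of outcomes
    return Fraction(favorable, total)

socks = {"black": 3, "red": 2, "yellow": 3}
p_yellow = probability(socks["yellow"], sum(socks.values()))
print(p_yellow)  # → 3/8
```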
Description

Supported engines: Hop Engine, Flink, Dataflow

The Calculator transform provides you with predefined functions that can be executed on input field values.

In addition to the arguments (Field A, Field B and Field C), you must also specify the return type of the function. You can choose to remove fields (with the Remove option) from the result (output) after all values are calculated. This is useful in cases where you use temporary values that don't need to end up in your pipeline fields.

The execution speed of the Calculator is far better than the speed provided by custom scripts (JavaScript).

Options (Column: Description)

New field: The output field for the calculation result. Set Remove to Y for temporary fields that are required to calculate later fields, but are not needed in the final result.
Calculation: The calculation required for this new field.
Field A: The first field to use in the calculation.
Field B: The second field to use in the calculation.
Field C: The third field to use in the calculation.
Value Type: The data type to use for the result of this calculation.
Length: The length to set for the result of this calculation.
Precision: The precision to set for the result of this calculation.
Remove: A boolean flag. Set to Y for temporary fields that are only required for intermediate calculations and don't need to be included in the output.
Conversion mask: Conversion mask to apply to date or numeric fields.
Decimal symbol: The decimal symbol to set in the output field.
Grouping symbol: The grouping symbol to set in the output field.
Currency symbol: The currency symbol to set in the output field.

The table below lists the supported calculations in the Calculator transform, one per line, as: Function (Hop GUI view), Function (Metadata Injection view), Description.

Set field to constant A CONSTANT Set the field to a constant value.
Create a copy of field A COPY_OF_FIELD Create a copy of field A.
A + B ADD A plus B.
A - B SUBTRACT A minus B.
A * B MULTIPLY A multiplied by B.
A / B DIVIDE A divided by B.
A * A SQUARE The square of A.
SQRT( A ) SQUARE_ROOT The square root of A.
100 * A / B PERCENT_1 Percentage of A in B.
A - ( A * B / 100 ) PERCENT_2 Subtract B% of A.
A + ( A * B / 100 ) PERCENT_3 Add B% to A.
A + B * C COMBINATION_1 Add A and B times C.
SQRT( A*A + B*B ) COMBINATION_2 Calculates √(A² + B²).
ROUND( A ) ROUND_1 Returns the closest Integer to the argument. The result is rounded to an Integer by adding 1/2, taking the floor of the result, and casting the result to type int. In other words, the result is equal to the value of the expression: floor(a + 0.5). In case you need the rounding method "Round half to even", use ROUND( A, B ) with no decimals (B=0).
ROUND( A, B ) ROUND_2 Round A to the nearest even number with B decimals. The rounding method used is "Round half to even"; it is also called unbiased rounding, convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, odd-even rounding, bankers' rounding or broken rounding, and is widely used in bookkeeping. This is the default rounding mode used in IEEE 754 computing functions and operators. In Germany it is often called "Mathematisches Runden".
STDROUND( A ) ROUND_STD_1 Round A to the nearest integer. The rounding method used is "Round half away from zero"; it is also called standard or common rounding. In Germany it is known as "kaufmännische Rundung" (and defined in DIN 1333).
STDROUND( A, B ) ROUND_STD_2 Same rounding method as in STDROUND( A ), but with B decimals.
CEIL( A ) CEIL The ceiling function maps a number to the smallest following integer.
FLOOR( A ) FLOOR The floor function maps a number to the largest previous integer.
NVL( A, B ) NVL If A is not NULL, return A, else B. Note that sometimes your variable won't be null but an empty string.
Date A + B days ADD_DAYS Add B days to Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with hours.
Year of date A YEAR_OF_DATE Calculate the year of date A.
Month of date A MONTH_OF_DATE Calculate the number of the month of date A.
Day of year of date A DAY_OF_YEAR Calculate the day of year (1-365).
Day of month of date A DAY_OF_MONTH Calculate the day of month (1-31).
Day of week of date A DAY_OF_WEEK Calculate the day of week (1-7). 1 is Sunday, 2 is Monday, etc.
Week of year of date A WEEK_OF_YEAR Calculate the week of year (1-54).
ISO8601 Week of year of date A WEEK_OF_YEAR_ISO8601 Calculate the week of the year ISO8601 style (1-53).
ISO8601 Year of date A YEAR_OF_DATE_ISO8601 Calculate the year ISO8601 style.
Byte to hex encode of string A BYTE_TO_HEX_ENCODE Encode bytes in a string to a hexadecimal representation.
Hex to byte decode of string A HEX_TO_BYTE_DECODE Encode a string in its own hexadecimal representation.
Char to hex encode of string A CHAR_TO_HEX_ENCODE Encode characters in a string to a hexadecimal representation.
Hex to char decode of string A HEX_TO_CHAR_DECODE Decode a string from its hexadecimal representation (add a leading 0 when A is of odd length).
Checksum of a file A using CRC-32 CRC32 Calculate the checksum of a file using CRC-32.
Checksum of a file A using Adler-32 ADLER32 Calculate the checksum of a file using Adler-32.
Checksum of a file A using MD5 MD5 Calculate the checksum of a file using MD5.
Checksum of a file A using SHA-1 SHA1 Calculate the checksum of a file using SHA-1.
Levenshtein Distance (Source A and Target B) LEVENSHTEIN_DISTANCE Calculates the Levenshtein distance: http://en.wikipedia.org/wiki/Levenshtein_distance
Metaphone of A METAPHONE Calculates the metaphone of A: http://en.wikipedia.org/wiki/Metaphone
Double metaphone of A DOUBLE_METAPHONE Calculates the double metaphone of A: http://en.wikipedia.org/wiki/Double_Metaphone
Absolute value ABS(A) ABS Calculates the absolute value of A.
Remove time from a date A REMOVE_TIME_FROM_DATE Removes the time value of A. Note: Daylight Savings Time (DST) changes in Sao Paulo and some other parts of Brazil at midnight 0:00.
This makes it impossible to set the time to 0:00 at the specific date when the DST changes from 0:00 to 1:00 am, so there is one date in one year in these regions where the "Remove time from a date A" (REMOVE_TIME_FROM_DATE) function will fail with an "IllegalArgumentException: HOUR_OF_DAY: 0 → 1". It is not an issue for Europe, the US and other regions where the time changes at 1:00, 2:00 or 3:00 am.
Date A - Date B (in days) DATE_DIFF Calculates the difference, in days, between date field A and date field B.
A + B + C ADD3 A plus B plus C.
First letter of each word of a string A in capital INITCAP Transforms the first letter of each word within a string.
UpperCase of a string A UPPER_CASE Transforms a string to uppercase.
LowerCase of a string A LOWER_CASE Transforms a string to lowercase.
Mask XML content from string A MASK_XML Escape XML content; replace characters with &values.
Protect (CDATA) XML content from string A USE_CDATA Indicates an XML string is general character data, rather than non-character data or character data with a more specific, limited structure. The given string will be enclosed into <![CDATA[String]]>.
Remove CR from a string A REMOVE_CR Removes carriage returns from a string.
Remove LF from a string A REMOVE_LF Removes linefeeds from a string.
Remove CRLF from a string A REMOVE_CRLF Removes carriage returns/linefeeds from a string.
Remove TAB from a string A REMOVE_TAB Removes tab characters from a string.
Return only digits from string A GET_ONLY_DIGITS Outputs only digits (0-9) from a string.
Remove digits from string A REMOVE_DIGITS Removes all digits (0-9) from a string.
Return the length of a string A STRING_LEN Returns the length of the string.
Load file content in binary LOAD_FILE_CONTENT_BINARY Loads the content of the given file (in field A) to a binary data type (e.g. pictures).
Add time B to date A ADD_TIME_TO_DATE Add the time to a date, returns date and time as one value.
Quarter of date A QUARTER_OF_DATE Returns the quarter (1 to 4) of the date.
Variable substitution in string A SUBSTITUTE_VARIABLE Substitute variables within a string.
Unescape XML content UNESCAPE_XML Unescape XML content from the string.
Escape HTML content ESCAPE_HTML Escape HTML within the string.
Unescape HTML content UNESCAPE_HTML Unescape HTML within the string.
Escape SQL content ESCAPE_SQL Escapes the characters in a string to be suitable to pass to an SQL query.
Date A - Date B (working days) DATE_WORKING_DIFF Calculates the difference between date field A and date field B (only working days Mon-Fri).
Date A + B Months ADD_MONTHS Add B months to Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with days.
Check if an XML file A is well-formed CHECK_XML_FILE_WELL_FORMED Validates XML file input.
Check if an XML string A is well-formed CHECK_XML_WELL_FORMED Validates XML string input.
Get encoding of file A GET_FILE_ENCODING Guess the best encoding (UTF-8) for the given file.
Damerau-Levenshtein distance between String A and String B DAMERAU_LEVENSHTEIN Calculates the Damerau-Levenshtein distance between strings: http://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance
Needleman-Wunsch distance between String A and String B NEEDLEMAN_WUNSH Calculates the Needleman-Wunsch distance between strings: http://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm
Jaro similitude between String A and String B JARO Returns the Jaro similarity coefficient between two strings.
Jaro-Winkler similitude between String A and String B JARO_WINKLER Returns the Jaro-Winkler similarity coefficient between two strings: http://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance
SoundEx of String A SOUNDEX Encodes a string into a Soundex value.
RefinedSoundEx of String A REFINED_SOUNDEX Retrieves the Refined Soundex code for a given string object.
Date A + B Hours ADD_HOURS Add B hours to Date field A. Note: Only integer values for B are supported.
If you need non-integer calculations, please add a second calculation with minutes.
Date A + B Minutes ADD_MINUTES Add B minutes to Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with seconds.
Date A - Date B (milliseconds) DATE_DIFF_MSEC Subtract B milliseconds from Date field A.
Date A - Date B (seconds) DATE_DIFF_SEC Subtract B seconds from Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with milliseconds.
Date A - Date B (minutes) DATE_DIFF_MN Subtract B minutes from Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with seconds.
Date A - Date B (hours) DATE_DIFF_HR Subtract B hours from Date field A. Note: Only integer values for B are supported. If you need non-integer calculations, please add a second calculation with minutes.
Hour of Day of Date A HOUR_OF_DAY Extract the hour part of the given date.
Minute of Hour of Date A MINUTE_OF_HOUR Extract the minute part of the given date.
Second of Minute of Date A SECOND_OF_MINUTE Extract the second part of a given date.
ROUND_CUSTOM( A , B ) ROUND_CUSTOM_1 (… not implemented…?)
ROUND_CUSTOM( A , B , C ) ROUND_CUSTOM_2 (… not implemented…?)
Date A + B seconds ADD_SECONDS Add B seconds to date field A.
Remainder of A / B REMAINDER Remainder of integer division between A and B (A modulo B).
Base64 Encode BASE64_ENCODE Encode a string in Base64 encoding without padding at the end.
Base64 Encode (padded) BASE64_ENCODE_PADDED Encode a string in Base64 encoding with padding at the end, cf. section 3.2 of RFC 4648.
Base64 Decode BASE64_DECODE Decode a Base64 encoded string. Supports both padded and non-padded input.

Metadata Injection support

All fields of this transform support metadata injection. You can use this transform with ETL Metadata Injection to pass metadata to your pipeline at runtime.
Use the values in the column "Function (Metadata Injection view)" from the table above to specify the operation (Calculation type) applied to the fields.

FAQ on length, precision, and data types affecting the results

Q: I made a pipeline using A/B in a Calculator transform and it rounded wrong: the 2 input fields are Integer but my result type was Number(6, 4), so I would expect the integers to be cast to Number before executing the division. If I wanted to execute e.g. 28/222, I got 0.0 instead of 0.1261 which I expected. So it seems the result type is ignored. If I change the input types both to Number(6, 4), I get 0.12612612612612611 as the result, which still ignores the result type (4 places after the comma).

A: Length and precision are just metadata pieces. If you want to round to the specified precision, you should do this in another transform. However, please keep in mind that rounding double-precision floating point values is futile anyway. A floating point number is stored as an approximation (it floats), so 0.1261 (your desired output) could (would probably) end up being stored as 0.126099999999 or 0.1261000000001 (Note: this is not the case for BigDecimal).

So in the end we round using BigDecimals once we store the numbers in the output table, but NOT during the pipeline. The same is true for the Text File Output transform. If you had specified Integer as the result type, the internal number format would have been retained; you would press "Get Fields" and the required Integer type would be filled in. The required conversion would take place there and then. In short: we convert to the required metadata type when we land the data somewhere, NOT BEFORE.

Q: How do the data types work internally?

A: You might notice that if you multiply an Integer and a Number, the result is always rounded. That is because the Calculator takes the data type of the left-hand side of the multiplication (A) as the driver for the calculation.
As such, if you want more precision, you should put field B on the left hand side or change the data type to Number and all will be well.
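The behavior described in this FAQ is easy to reproduce. Below is a short Python analogy (Hop is Java-based, so this only mirrors the semantics): integer division truncates, double division ignores the declared precision, and rounding to 4 places has to be done explicitly in a separate step.

```python
from decimal import Decimal, ROUND_HALF_UP

# Two Integer inputs: the division truncates, matching the 0.0 result above
a, b = 28, 222
print(a // b)  # 0

# Number (double) division keeps full precision; the declared
# Number(6, 4) result metadata does not round it during the pipeline
print(a / b)  # 0.12612612612612611 (as in the question)

# To actually get 4 places, round explicitly in a separate step,
# e.g. with BigDecimal-style arithmetic (Python's Decimal here)
q = (Decimal(a) / Decimal(b)).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)
print(q)  # 0.1261
```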
Simple interest rate equation

Calculation. Simple interest. Main article: Interest rate. Simple interest is calculated

Guide to Simple Interest Rate formula; here we discuss its uses with practical examples and also provide you a Calculator with a downloadable Excel template.

simple interest (SI) calculator - formula, step by step calculation & solved examples to pay for the principal sum for given values of principal, rate of interest & time period.

14 Sep 2019 - Learn about the compound interest formula and how to use it to calculate. Multiply the principal amount by one plus the annual interest rate to the power of. Believe me when I tell you that it isn't quite as simple as it sounds.

To begin your calculation, enter your starting amount along with the annual interest rate and the start date (assuming it isn't today). Then, select a period of time.

A total of $1,200 is invested at a simple interest rate of 6% for 4 months. How much interest is earned on this investment? Solution. Before we can apply the formula

The rate of interest is usually expressed as a percent per year, and is calculated by using the decimal equivalent of the percent. The variable for time is t.

The formula for calculating simple interest is: Principal * Interest Rate * Term of the loan. Loans rarely use the simple-interest calculation, but those that do are

Usually this amount will be on a monthly basis. The formula for simple interest is principal times the interest rate times the period. Usually the period is expressed as a

Simple and compound interest are compared in the tables below. In both cases, the principal is $100.00 and the interest rate is 7%. Simple Interest.

Odeh discusses the Mathematics of Money beginning with a definition of the Time Value of Money.
Calculating simple and compound interest rates are

Simple Interest Rate Formula (Table of Contents): Simple Interest Rate Formula; Examples of Simple Interest Rate Formula (With Excel Template); Simple Interest Rate Calculator.

Simple Interest Rate Formula. In general parlance, Interest refers to the additional amount paid for obtaining monetary assistance from the lender. In finance terms, when

However, most credit cards quote an annual percentage rate (APR) but actually charge interest daily, with the total of principal and interest used as the basis for the next interest charge. As a result, you accumulate a lot more in interest charges than you would tally with a simple interest calculation.

The calculation of simple interest is equal to the principal amount multiplied by the interest rate, multiplied by the number of periods. For example, if the simple interest rate is given to be 5% on a loan of $1,000 for a duration of 4 years, the total simple interest will come out to be: 5% x $1,000 x 4 = $200.

#2 Compound Interest

How to calculate the Simple Interest Formula, how to solve interest problems using the simple interest formula, examples and step by step solutions. How to use the formula for simple interest to find the principal, the rate or the time; compound interest formulas; continuously compounded interest formulas; how to solve simple interest problems in real life; compound interest problems.

The simple interest formula is fairly simple to compute and to remember as principal times rate times time. An example of a simple interest calculation would be a 3-year savings account at a 10% rate with an original balance of $1000. By inputting these variables into the formula, $1000 times 10% times 3 years would be $300. Simple interest is

8 Oct 2015 - The simple interest formula allows us to calculate I, which is the interest earned or charged on a loan.
According to this formula, the amount of interest is given by I = Prt, where P is the principal, r is the annual interest rate in decimal form, and t is the loan period expressed in years.

How Do You Use the Formula for Simple Interest? If you already have a bank account, or if you plan to have one in the future, then this tutorial is a must-see! Follow along as this tutorial goes through a word problem involving simple interest.

The simple interest on a loan is calculated by multiplying the principal amount by the rate of interest and the amount of time on the loan. The formula for calculating simple interest is: I = Prn. I is the interest earned, P is the principal amount, r is the interest rate as a decimal, and n is the number of years remaining on the loan. This is different from compound interest, where interest is calculated on the initial amount and on any interest earned. As you will see in the examples below, the simple interest formula can be used to calculate the interest earned, the total amount, and other values depending on the problem.

Find out the differences between simple and compound interest. Interest is defined as the cost of borrowing money or the rate paid on a deposit to an investor. Interest can be classified as simple

When you know the principal amount, the rate, and the time, the amount of interest can be calculated by using the formula: I = Prt. For the above calculation, you have $4,500.00 to invest (or borrow) with a rate of 9.5 percent for a six-year period of time.

Compound interest, or 'interest on interest', is calculated with the compound interest formula. Multiply the principal amount by one plus the annual interest rate to the power of the number of compound periods to get a combined figure for principal and compound interest. Subtract the principal if you want just the compound interest.

Simple Interest (PV). Interest mode: annually(365), annually(360), monthly, weekly, daily. Interest rate (r) %. Future value (FV). Elapsed days (days).
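The compound interest rule quoted above (multiply the principal by one plus the rate, raised to the number of compound periods, then subtract the principal for the interest alone) is easy to check in code. A minimal Python sketch follows; the function names are mine, not from any of the quoted calculators.

```python
def compound_amount(principal: float, rate: float, periods: int) -> float:
    # principal times one plus the rate, raised to the number of compound periods
    return principal * (1 + rate) ** periods

def compound_interest(principal: float, rate: float, periods: int) -> float:
    # subtract the principal to get just the compound interest
    return compound_amount(principal, rate, periods) - principal

# $100 at 7% per year, compounded annually for 2 years
print(round(compound_amount(100, 0.07, 2), 2))    # 114.49
print(round(compound_interest(100, 0.07, 2), 2))  # 14.49
```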
An interest rate formula is used to calculate the repayment amounts for loans and interest over investment on fixed deposits, mutual funds, etc. It is also used to calculate interest on a credit card.

Solving our equation: A = 10000(1 + (0.03875 × 5)) = 11937.5, so A = $11,937.50. The total amount accrued, principal plus interest, from simple interest on a principal of $10,000.00 at a rate of 3.875% per year for 5 years is $11,937.50.

Need to borrow money? It'll cost you. But how much depends on how interest is calculated. Take a look at simple vs. compound interest. The formula for finding simple interest is: Interest = Principal * Rate * Time. If $100 was borrowed for 2 years at a 10% interest rate, the interest would be

The simple interest formula is used to calculate interest on an investment. You multiply the principal, interest rate and time. P = Principal, which is your initial

9 Apr 2019 - Interest expense calculation for the first year is easy: just apply the rate (6%) to the principal balance of $10,000 to get $600. This $600 is the
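The I = Prt rule repeated in the snippets above reduces to one line of code. A minimal Python sketch reproducing the $10,000 / 3.875% / 5-year example; the function name is illustrative.

```python
def simple_interest(principal: float, rate: float, years: float) -> float:
    # I = P * r * t, with the rate as a decimal
    return principal * rate * years

# The $10,000 at 3.875% per year for 5 years example above
i = simple_interest(10000, 0.03875, 5)
print(round(i, 2))          # 1937.5
print(round(10000 + i, 2))  # 11937.5, i.e. $11,937.50

# The $4,500 at 9.5 percent for six years example
print(round(simple_interest(4500, 0.095, 6), 2))  # 2565.0
```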
Micromho to Abmho Converter (μʊ to abʊ) | Kody Tools

1 Micromho = 1e-15 Abmho

One Micromho is Equal to How Many Abmho?

The answer is one Micromho is equal to 1e-15 Abmho, and that means we can also write it as 1 Micromho = 1e-15 Abmho. Feel free to use our online unit conversion calculator to convert the unit from Micromho to Abmho. Just simply enter the value 1 in Micromho and see the result in Abmho.

Manually converting Micromho to Abmho can be time-consuming, especially when you don't have enough knowledge about Electrical Conductance unit conversion. Since there is a lot of complexity and some sort of learning curve is involved, most users end up using an online Micromho to Abmho converter tool to get the job done as soon as possible. There are many online tools available to convert Micromho to Abmho, but not every online tool gives an accurate result, and that is why we have created this online Micromho to Abmho converter tool. It is a very simple and easy-to-use tool. Most importantly, it is beginner-friendly.

How to Convert Micromho to Abmho (μʊ to abʊ)

By using our Micromho to Abmho conversion tool, you know that one Micromho is equivalent to 1e-15 Abmho. Hence, to convert Micromho to Abmho, we just need to multiply the number by 1e-15. We are going to use a very simple Micromho to Abmho conversion formula for that. Please see the calculation example given below.

\(\text{1 Micromho} = 1 \times 10^{-15} = \text{1e-15 Abmho}\)

What Unit of Measure is Micromho?

Micromho is a unit of measurement for electric conductance. Micromho is a decimal fraction of the electric conductance unit mho. One micromho is equal to 0.000001 mho.

What is the Symbol of Micromho?

The symbol of Micromho is μʊ. This means you can also write one Micromho as 1 μʊ.

What Unit of Measure is Abmho?

Abmho is a unit of measurement for electric conductance. One abmho is equal to 1e+9 mho.

What is the Symbol of Abmho?

The symbol of Abmho is abʊ.
This means you can also write one Abmho as 1 abʊ.

How to Use Micromho to Abmho Converter Tool

• As you can see, we have 2 input fields and 2 dropdowns.
• From the first dropdown, select Micromho and in the first input field, enter a value.
• From the second dropdown, select Abmho.
• Instantly, the tool will convert the value from Micromho to Abmho and display the result in the second input field.

Example of Micromho to Abmho Converter Tool

Micromho to Abmho Conversion Table

| Micromho [μʊ] | Abmho [abʊ] | Description |
| --- | --- | --- |
| 1 Micromho | 1e-15 Abmho | 1 Micromho = 1e-15 Abmho |
| 2 Micromho | 2e-15 Abmho | 2 Micromho = 2e-15 Abmho |
| 3 Micromho | 3e-15 Abmho | 3 Micromho = 3e-15 Abmho |
| 4 Micromho | 4e-15 Abmho | 4 Micromho = 4e-15 Abmho |
| 5 Micromho | 5e-15 Abmho | 5 Micromho = 5e-15 Abmho |
| 6 Micromho | 6e-15 Abmho | 6 Micromho = 6e-15 Abmho |
| 7 Micromho | 7e-15 Abmho | 7 Micromho = 7e-15 Abmho |
| 8 Micromho | 8e-15 Abmho | 8 Micromho = 8e-15 Abmho |
| 9 Micromho | 9e-15 Abmho | 9 Micromho = 9e-15 Abmho |
| 10 Micromho | 1e-14 Abmho | 10 Micromho = 1e-14 Abmho |
| 100 Micromho | 1e-13 Abmho | 100 Micromho = 1e-13 Abmho |
| 1000 Micromho | 1e-12 Abmho | 1000 Micromho = 1e-12 Abmho |

Micromho to Other Units Conversion Table

| Conversion | Description |
| --- | --- |
| 1 Micromho = 0.000001 Siemens | 1 Micromho in Siemens is equal to 0.000001 |
| 1 Micromho = 1e-12 Megasiemens | 1 Micromho in Megasiemens is equal to 1e-12 |
| 1 Micromho = 1e-9 Kilosiemens | 1 Micromho in Kilosiemens is equal to 1e-9 |
| 1 Micromho = 0.001 Millisiemens | 1 Micromho in Millisiemens is equal to 0.001 |
| 1 Micromho = 1 Microsiemens | 1 Micromho in Microsiemens is equal to 1 |
| 1 Micromho = 0.000001 Ampere/Volt | 1 Micromho in Ampere/Volt is equal to 0.000001 |
| 1 Micromho = 0.000001 Mho | 1 Micromho in Mho is equal to 0.000001 |
| 1 Micromho = 1 Gemmho | 1 Micromho in Gemmho is equal to 1 |
| 1 Micromho = 1e-15 Abmho | 1 Micromho in Abmho is equal to 1e-15 |
| 1 Micromho = 899000.04 Statmho | 1 Micromho in Statmho is equal to 899000.04 |
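The conversion rule above (multiply by 1e-15) extends to any pair of units in the tables by routing through the base unit (mho). A short Python sketch; the `TO_MHO` factor table and `convert` helper are illustrative, not part of the Kody Tools site, and the factors are taken from the table above.

```python
import math

# Factors to the base unit (mho), taken from the conversion table above
TO_MHO = {
    "siemens": 1.0,
    "millisiemens": 1e-3,
    "micromho": 1e-6,
    "abmho": 1e9,
}

def convert(value: float, src: str, dst: str) -> float:
    # convert src -> mho -> dst
    return value * TO_MHO[src] / TO_MHO[dst]

# 1 Micromho = 1e-15 Abmho, up to floating-point rounding
print(math.isclose(convert(1, "micromho", "abmho"), 1e-15))  # True
```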